61186826
Bueno-Orovio–Cherry–Fenton model
Phenomenological ionic model for human ventricular cells. The Bueno-Orovio–Cherry–Fenton model, also simply called the Bueno-Orovio model, is a minimal ionic model for human ventricular cells. It belongs to the category of phenomenological models, because it describes the electrophysiological behaviour of cardiac muscle cells without taking into account in a detailed way the underlying physiology and the specific mechanisms occurring inside the cells. This mathematical model reproduces both single-cell and important tissue-level properties, accounting for physiological action potential development and conduction velocity estimations. It also provides specific parameter choices, derived from parameter-fitting algorithms of the MATLAB Optimization Toolbox, for the modeling of epicardial, endocardial and mid-myocardial tissues. In this way it is possible to match the action potential morphologies, observed from experimental data, in the three different regions of the human ventricles. The Bueno-Orovio–Cherry–Fenton model is also able to describe reentrant and spiral wave dynamics, which occur for instance during tachycardia or other types of arrhythmias. From the mathematical perspective, it consists of a system of four differential equations: one PDE, similar to the monodomain model, for an adimensional version of the transmembrane potential, and three ODEs that define the evolution of the so-called gating variables, i.e. adimensional variables that model the fraction of open ion channels across the cell membrane. Mathematical modeling. The system of four differential equations reads as follows: formula_0 where formula_1 is the spatial domain and formula_2 is the final time. The initial conditions are formula_3, formula_4, formula_5, formula_6. formula_7 refers to the Heaviside function centered at formula_8. The adimensional transmembrane potential formula_9 can be rescaled in mV by means of the affine transformation formula_10. formula_11, formula_12 and formula_13 refer to the gating variables, where in particular formula_13 can also be used as an indication of intracellular calcium formula_14 concentration (given in the adimensional range [0, 1] instead of molar concentration). formula_15 and formula_16 are the fast inward, slow outward and slow inward currents respectively, given by the following expressions: formula_17 formula_18 formula_19 All the above-mentioned ionic density currents are partially adimensional and are expressed in formula_20. Different parameter sets, as shown in Table 1, can be used to reproduce the action potential development of epicardial, endocardial and mid-myocardial human ventricular cells. Some constants of the model, which are not listed in Table 1, can be deduced with the following formulas: formula_21 formula_22 formula_23 formula_24 formula_25 formula_26 formula_27 where the temporal constants, i.e. formula_28, are expressed in seconds, whereas formula_29 and formula_30 are adimensional. The diffusion coefficient formula_31 takes the value formula_32, which comes from experimental tests on human ventricular tissues. In order to trigger the action potential development in a certain position of the domain formula_1, a forcing term formula_33, which represents an externally applied density current, is usually added to the right-hand side of the PDE and acts for a short time interval only. Weak formulation. Assume that formula_34 refers to the vector containing all the gating variables, i.e. 
formula_35, and formula_36 contains the corresponding three right-hand sides of the ionic model. The Bueno-Orovio–Cherry–Fenton model can be rewritten in the compact form: formula_37 Let formula_38 and formula_39 be two generic test functions. To obtain the weak formulation, each equation is multiplied by the corresponding test function and integrated over formula_1: formula_42 The diffusion term is then integrated by parts, and the boundary integral vanishes thanks to the homogeneous Neumann boundary condition: formula_43 Finally the weak formulation reads: Find formula_44 and formula_45, formula_46, such that formula_47 Numerical discretization. There are several methods to discretize this system of equations in space, such as the finite element method (FEM) or isogeometric analysis (IGA). Time discretization can be performed in several ways as well, such as using a backward differentiation formula (BDF) of order formula_48 or a Runge–Kutta method (RK). Space discretization with FEM. Let formula_49 be a tessellation of the computational domain formula_1 by means of a certain type of elements (such as tetrahedra or hexahedra), with formula_50 representing a chosen measure of the size of a single element formula_51. Consider the set formula_52 of polynomials of degree smaller than or equal to formula_53 over an element formula_54. Define formula_55 as the finite-dimensional space, whose dimension is formula_56. The set of basis functions of formula_57 is referred to as formula_58. The semidiscretized formulation of the first equation of the model reads: find formula_59, the projection of the solution formula_60 onto formula_61, formula_62, such that formula_63 with formula_64, formula_65 the semidiscretized version of the three gating variables, and formula_66 the total ionic density current. The space-discretized version of the first equation can be rewritten as a system of non-linear ODEs by setting formula_67 and formula_68: formula_69 where formula_70, formula_71 and formula_72. The non-linear term formula_73 can be treated in different ways, such as using state variable interpolation (SVI) or ionic currents interpolation (ICI). In the framework of SVI, by denoting with formula_74 and formula_75 the quadrature nodes and weights of a generic element of the mesh formula_76, both formula_77 and formula_78 are evaluated at the quadrature nodes: formula_79 The equations for the three gating variables, which are ODEs, are directly solved in all the degrees of freedom (DOF) of the tessellation formula_49 separately, leading to the following semidiscrete form: formula_80 Time discretization with BDF (implicit scheme). With reference to the time interval formula_81, let formula_82 be the chosen time step, with formula_83 the number of subintervals. A uniform partition in time formula_84 is finally obtained. At this stage, the full discretization of the Bueno-Orovio ionic model can be performed in either a monolithic or a segregated fashion. With respect to the first methodology (the monolithic one), at time formula_85, the full problem is entirely solved in one step in order to get formula_86 by means of either Newton's method or fixed-point iterations: formula_87 where formula_88 and formula_89 are extrapolations of the transmembrane potential and gating variables at previous timesteps with respect to formula_90, considering as many time instants as the order formula_48 of the BDF scheme. formula_91 is a coefficient which depends on the BDF order formula_48. 
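As a complement to the schemes above, the following sketch advances a single-cell (0D) version of the model, in which the diffusion term is dropped, with a simple explicit Euler step; this is enough to reproduce the shape of an action potential. It is only an illustrative implementation: the parameter values are the commonly cited epicardial set from the literature (Table 1 is not reproduced in this article), time is expressed in milliseconds, and the stimulus amplitude and duration are assumptions.
<syntaxhighlight lang="python">
# Minimal 0D (single-cell) sketch of the Bueno-Orovio-Cherry-Fenton model,
# integrated with explicit (forward) Euler. Epicardial parameter values taken
# from the literature for illustration only; time unit is ms.
import math

P = dict(u_o=0.0, u_u=1.55, theta_v=0.3, theta_w=0.13, theta_vm=0.006, theta_o=0.006,
         tau_v1m=60.0, tau_v2m=1150.0, tau_vp=1.4506, tau_w1m=60.0, tau_w2m=15.0,
         k_wm=65.0, u_wm=0.03, tau_wp=200.0, tau_fi=0.11, tau_o1=400.0, tau_o2=6.0,
         tau_so1=30.0181, tau_so2=0.9957, k_so=2.0458, u_so=0.65, tau_s1=2.7342,
         tau_s2=16.0, k_s=2.0994, u_s=0.9087, tau_si=1.8875, tau_winf=0.07, w_infstar=0.94)

H = lambda x: 1.0 if x >= 0.0 else 0.0   # Heaviside function

def rhs(u, v, w, s, J_app, p=P):
    # gate-dependent time constants and steady-state values
    tau_vm = (1 - H(u - p['theta_vm'])) * p['tau_v1m'] + H(u - p['theta_vm']) * p['tau_v2m']
    tau_wm = p['tau_w1m'] + (p['tau_w2m'] - p['tau_w1m']) * (1 + math.tanh(p['k_wm'] * (u - p['u_wm']))) / 2
    tau_so = p['tau_so1'] + (p['tau_so2'] - p['tau_so1']) * (1 + math.tanh(p['k_so'] * (u - p['u_so']))) / 2
    tau_s = (1 - H(u - p['theta_w'])) * p['tau_s1'] + H(u - p['theta_w']) * p['tau_s2']
    tau_o = (1 - H(u - p['theta_o'])) * p['tau_o1'] + H(u - p['theta_o']) * p['tau_o2']
    v_inf = 1.0 - H(u - p['theta_vm'])
    w_inf = (1 - H(u - p['theta_o'])) * (1 - u / p['tau_winf']) + H(u - p['theta_o']) * p['w_infstar']
    # ionic currents (fast inward, slow outward, slow inward)
    J_fi = -v * H(u - p['theta_v']) * (u - p['theta_v']) * (p['u_u'] - u) / p['tau_fi']
    J_so = (u - p['u_o']) * (1 - H(u - p['theta_w'])) / tau_o + H(u - p['theta_w']) / tau_so
    J_si = -H(u - p['theta_w']) * w * s / p['tau_si']
    # right-hand sides of the four equations (0D: no diffusion term)
    du = -(J_fi + J_so + J_si) + J_app
    dv = (1 - H(u - p['theta_v'])) * (v_inf - v) / tau_vm - H(u - p['theta_v']) * v / p['tau_vp']
    dw = (1 - H(u - p['theta_w'])) * (w_inf - w) / tau_wm - H(u - p['theta_w']) * w / p['tau_wp']
    ds = ((1 + math.tanh(p['k_s'] * (u - p['u_s']))) / 2 - s) / tau_s
    return du, dv, dw, ds

# explicit Euler loop: apply a short stimulus at t = 0 and record the action potential
u, v, w, s, dt = 0.0, 1.0, 1.0, 0.0, 0.01            # initial conditions and time step (ms)
trace = []
for n in range(int(500 / dt)):
    t = n * dt
    J_app = 1.0 if t < 1.0 else 0.0                   # assumed stimulus amplitude and duration
    du, dv, dw, ds = rhs(u, v, w, s, J_app)
    u, v, w, s = u + dt * du, v + dt * dv, w + dt * dw, s + dt * ds
    trace.append((t, 85.7 * u - 84.0))                # rescale to mV with the affine map above
</syntaxhighlight>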
If a segregated method is employed, the equation for the evolution in time of the transmembrane potential and the ones for the gating variables are numerically solved separately. First, the gating variables formula_92 are updated using the extrapolation formula_93 of the transmembrane potential: formula_94 Then, the newly computed formula_92 is used to solve the equation for formula_95: formula_96 Another possible segregated scheme would be the one in which formula_97 is calculated first, and then it is used in the equations for formula_92. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\n\\begin{cases}\n\\dfrac{\\partial u}{\\partial t} = \\nabla \\cdot (D \\nabla u) - ( J_{fi} + J_{so} + J_{si}) & \\text{in } \\Omega \\times (0, T) \\\\[5pt]\n\\dfrac{\\partial v}{\\partial t} = \\dfrac{(1 - H(u - \\theta_v)) (v_\\infty - v)}{\\tau_v^-} - \\dfrac{H(u - \\theta_v) v}{\\tau_v^+} & \\text{in } \\Omega \\times (0, T) \\\\[5pt]\n\\dfrac{\\partial w}{\\partial t} = \\dfrac{(1 - H(u - \\theta_w)) (w_\\infty - w)}{\\tau_w^-} - \\dfrac{H(u - \\theta_w) w}{\\tau_w^+} & \\text{in } \\Omega \\times (0, T) \\\\[5pt]\n\\dfrac{\\partial s}{\\partial t} = \\dfrac{1}{\\tau_s} \\left( \\dfrac{1 + \\tanh(k_s (u - u_s))}{2}-s \\right) & \\text{in } \\Omega \\times (0, T) \\\\[5pt]\n\\big( D \\nabla u \\big) \\cdot \\boldsymbol{N} = 0 & \\text{on } \\partial \\Omega \\times (0, T) \\\\[5pt]\nu = u_0, \\, v = v_0, \\, w = w_0, \\, s = s_0 & \\text{in } \\Omega \\times \\{0\\} \n\\end{cases}\n" }, { "math_id": 1, "text": "\\Omega" }, { "math_id": 2, "text": "T" }, { "math_id": 3, "text": "u_0 = 0" }, { "math_id": 4, "text": "v_0 = 1" }, { "math_id": 5, "text": "w_0 = 1" }, { "math_id": 6, "text": "s_0 = 0" }, { "math_id": 7, "text": "H(x - x_0)" }, { "math_id": 8, "text": "x_0" }, { "math_id": 9, "text": "u" }, { "math_id": 10, "text": " V_{mV} = 85.7 u - 84" }, { "math_id": 11, "text": "v" }, { "math_id": 12, "text": "w" }, { "math_id": 13, "text": "s" }, { "math_id": 14, "text": "{{Ca}_i}^{2+}" }, { "math_id": 15, "text": "J_{fi}, J_{so}" }, { "math_id": 16, "text": "J_{si}" }, { "math_id": 17, "text": "\nJ_{fi} = -\\frac{v H(u - \\theta_v) (u - \\theta_v) (u_u - u)}{\\tau_{fi}},\n" }, { "math_id": 18, "text": "\nJ_{so} = \\frac{(u - u_o) (1 - H(u - \\theta_w))}{\\tau_o} + \\frac{H(u - \\theta_w)}{\\tau_{so}},\n" }, { "math_id": 19, "text": "\nJ_{si} = -\\frac{H(u - \\theta_w) w s}{\\tau_{si}},\n" }, { "math_id": 20, "text": "\\dfrac{1}{\\text{seconds}}" }, { "math_id": 21, "text": "\n\\tau_v^- = (1 - H(u - \\theta_v^-)) \\tau_{v_1}^- + H(u - \\theta_v^-) \\tau_{v_2}^-,\n" }, { "math_id": 22, "text": "\n\\tau_w^- = \\tau_{w_1}^- + (\\tau_{w_2}^- - \\tau_{w_1}^-)(1 + \\tanh(k_w^-(u - u_w^-)))/2,\n" }, { "math_id": 23, "text": "\n\\tau_{so} = \\tau_{{so}_1} + (\\tau_{{so}_2} - \\tau_{{so}_1})(1 + \\tanh(k_{so} (u - u_{so})))/2,\n" }, { "math_id": 24, "text": "\n\\tau_s = (1 - H(u - \\theta_w)) \\tau_{s_1} + H(u - \\theta_w) \\tau_{s_2},\n" }, { "math_id": 25, "text": "\n\\tau_o = (1 - H(u - \\theta_o)) \\tau_{o_1} + H(u - \\theta_o) \\tau_{o_2},\n" }, { "math_id": 26, "text": "\nv_{\\infty} = 1 - H(u - \\theta_v^-),\n" }, { "math_id": 27, "text": "\nw_\\infty = (1 - H(u - \\theta_o))(1 - u/\\tau_{w_\\infty}) + H(u - \\theta_o) w_\\infty^*.\n" }, { "math_id": 28, "text": "\\tau_v^-, \\tau_w^-, \\tau_{so}, \\tau_s, \\tau_o" }, { "math_id": 29, "text": "v_{\\infty}" }, { "math_id": 30, "text": "w_{\\infty}" }, { "math_id": 31, "text": "D" }, { "math_id": 32, "text": "1.171 \\pm 0.221 \\dfrac{\\text{cm}^2}{\\text{seconds}}" }, { "math_id": 33, "text": "J_\\text{app}(\\boldsymbol x, t)" }, { "math_id": 34, "text": "\\boldsymbol{z}" }, { "math_id": 35, "text": "\\boldsymbol{z} = [v, w, s]^T" }, { "math_id": 36, "text": "\\boldsymbol{F}:\\mathbb{R}^4 \\rightarrow \\mathbb{R}^3" }, { "math_id": 37, "text": "\n\\begin{cases}\n\\dfrac{\\partial u}{\\partial t} - \\nabla \\cdot (D \\nabla u) + ( J_{fi} + J_{so} + J_{si} ) = 0 & \\text{in } \\Omega \\times (0, T) \\\\[5pt]\n\\dfrac{\\partial \\boldsymbol{z}}{\\partial t} = \\boldsymbol{F}(u, \\boldsymbol{z}) & \\text{in } 
\\Omega \\times (0, T) \\\\\n\\end{cases}\n" }, { "math_id": 38, "text": "p \\in U = H^1(\\Omega)" }, { "math_id": 39, "text": "\\boldsymbol{q} \\in \\boldsymbol{W}=[L^2(\\Omega)]^3" }, { "math_id": 40, "text": "p \\in U" }, { "math_id": 41, "text": "\\boldsymbol{q} \\in \\boldsymbol{W}" }, { "math_id": 42, "text": "\n\\begin{cases}\n\\displaystyle \\int_\\Omega \\dfrac{du(t)}{dt} p \\,d\\Omega - \\int_\\Omega \\nabla \\cdot (D \\nabla u(t)) p \\,d\\Omega + \\int_\\Omega (J_{fi} + J_{so} + J_{si}) p \\,d\\Omega = 0 & \\forall p \\in U \\\\[5pt]\n\\displaystyle \\int_\\Omega \\dfrac{d\\boldsymbol{z}(t)}{dt} \\boldsymbol{q} \\,d\\Omega = \\int_\\Omega \\boldsymbol{F}(u(t), \\boldsymbol{z}(t)) \\boldsymbol{q} \\,d\\Omega & \\forall \\boldsymbol{q} \\in \\boldsymbol{W}\n\\end{cases}\n" }, { "math_id": 43, "text": "\n- \\int_\\Omega \\nabla \\cdot (D \\nabla u(t)) p \\,d\\Omega = \\int_\\Omega D \\nabla u(t) \\nabla p \\, d\\Omega - \\cancelto{\\text{Neumann B.C.}}{\\int_{\\partial \\Omega} (D \\nabla u(t)) \\cdot \\boldsymbol{N} p \\,d\\Omega}\n" }, { "math_id": 44, "text": "u \\in L^2(0, T; H^1(\\Omega))" }, { "math_id": 45, "text": "\\boldsymbol{z} \\in L^2(0, T; [L^2(\\Omega)]^3)" }, { "math_id": 46, "text": "\\forall t \\in (0, T)" }, { "math_id": 47, "text": "\n\\begin{cases}\n\\displaystyle \\int_\\Omega \\frac{du(t)}{dt} p \\,d\\Omega + \\int_{\\Omega} D \\nabla u(t) \\cdot \\nabla p \\,d\\Omega + \\int_\\Omega (J_{fi} + J_{so} + J_{si}) p \\,d\\Omega = 0 & \\forall p \\in U \\\\[5pt]\n\\displaystyle \\int_\\Omega \\frac{d\\boldsymbol{z}(t)}{dt} \\boldsymbol{q} \\,d\\Omega = \\int_\\Omega \\boldsymbol{F}(u(t), \\boldsymbol{z}(t)) \\boldsymbol{q} \\,d\\Omega & \\forall \\boldsymbol{q} \\in \\boldsymbol{W} \\\\[5pt]\nu(0) = u_0 \\\\[5pt]\n\\boldsymbol{z}(0) = \\boldsymbol{z}_0\n\\end{cases}\n" }, { "math_id": 48, "text": "\\sigma" }, { "math_id": 49, "text": "\\mathcal{T}_h" }, { "math_id": 50, "text": "h" }, { "math_id": 51, "text": "K \\in \\mathcal{T}_h" }, { "math_id": 52, "text": "\\mathbb{P}^r(K)" }, { "math_id": 53, "text": "r" }, { "math_id": 54, "text": "K" }, { "math_id": 55, "text": "\\mathcal{X}_h^r = \\{f \\in C^0(\\bar \\Omega): f|_K \\in \\mathbb{P}^r(K) \\,\\, \\forall K \\in \\mathcal{T}_h \\}" }, { "math_id": 56, "text": "N_h=\\dim(\\mathcal{X}_h^r)" }, { "math_id": 57, "text": "\\mathcal{X}_h^r" }, { "math_id": 58, "text": "\\{ \\phi_i \\}_{i=1}^{N_h}" }, { "math_id": 59, "text": "u_h = u_h(t) = \\sum_{j=1}^{N_h} \\bar{u}_j(t) \\phi_j" }, { "math_id": 60, "text": "u(t)" }, { "math_id": 61, "text": "\\mathcal{X}_{h}^r" }, { "math_id": 62, "text": "\\forall t \\in (0, T) " }, { "math_id": 63, "text": "\n\\int_\\Omega {\\dot{u}}_{h} \\phi_i \\, d \\Omega + \\int_\\Omega (D \\nabla u_h ) \\cdot \\nabla \\phi_i \\, d\\Omega + \\int_\\Omega J_\\text{ion}(u_h, \\boldsymbol{z}_h) \\phi_i \\, d \\Omega = 0 \\quad \\text{for } i = 1, \\ldots, N_h\n" }, { "math_id": 64, "text": "u_h(0) = \\sum_{j=1}^{N_h} \\left( \\int_\\Omega u_0 \\phi_j \\,d\\Omega \\right) \\phi_j" }, { "math_id": 65, "text": "\\boldsymbol{z}_{h} = \\boldsymbol{z}_h(t) = [v_h(t), w_h(t), s_h(t)]^T" }, { "math_id": 66, "text": "J_\\text{ion}(u_h, \\boldsymbol{z}_{h})=J_{fi}(u_h, \\boldsymbol{z}_h) + J_{so}(u_h, \\boldsymbol{z}_h) + J_{si}(u_h, \\boldsymbol{z}_h)" }, { "math_id": 67, "text": "\\boldsymbol{U}_h(t) = \\{ \\bar{u}_j(t) \\}_{j=1}^{N_h}" }, { "math_id": 68, "text": "\\boldsymbol{Z}_h(t) = \\{ \\bar{\\boldsymbol{z}}_j(t) \\}_{j=1}^{N_h}" }, { "math_id": 69, "text": "\n\\begin{cases}\n\\mathbb{M} 
\\dot{\\boldsymbol{U}}_h(t) + \\mathbb{K} \\boldsymbol{U}_h(t) + \\boldsymbol{J}_\\text{ion}(\\boldsymbol{U}_h(t), \\boldsymbol{Z}_h(t)) = 0 & \\forall t \\in (0, T) \\\\\n\\boldsymbol{U}_h(0) = \\boldsymbol{U}_{0, h}\n\\end{cases}\n" }, { "math_id": 70, "text": "\\mathbb{M}_{ij} = \\int_\\Omega \\phi_j \\phi_i \\,d \\Omega" }, { "math_id": 71, "text": "\\mathbb{K}_{ij} = \\int_\\Omega D \\nabla \\phi_j \\cdot \\nabla \\phi_i \\,d \\Omega" }, { "math_id": 72, "text": " \\left( \\boldsymbol{J}_\\text{ion}(\\boldsymbol{U}_{h}(t), \\boldsymbol{z}_{h}(t)) \\right)_i = \\int_\\Omega J_\\text{ion}(u_h, \\boldsymbol{z}_{h}) \\phi_i \\,d \\Omega" }, { "math_id": 73, "text": "\\boldsymbol{J}_\\text{ion}(\\boldsymbol{U}_{h}(t), \\boldsymbol{Z}_{h}(t))" }, { "math_id": 74, "text": "\\{ \\boldsymbol{x}_s^K \\}_{s=1}^{N_q}" }, { "math_id": 75, "text": "\\{ \\omega_s^K \\}_{s=1}^{N_q}" }, { "math_id": 76, "text": "K \\in \\mathcal{T}_{h}" }, { "math_id": 77, "text": "u_h" }, { "math_id": 78, "text": "\\boldsymbol{z}_h" }, { "math_id": 79, "text": "\n\\int_\\Omega J_\\text{ion}(u_h, \\boldsymbol{z}_h) \\phi_i \\, d \\Omega \\approx \\sum_{K \\in \\mathcal{T}_h} \\left( \\sum_{s=1}^{N_q} J_\\text{ion} \\left( \\sum_{j=1}^{N_h} \\bar{u}_j(t) \\phi_j(\\boldsymbol{x}_s^K), \\sum_{j=1}^{N_h} \\bar{\\boldsymbol{z}}_j(t) \\phi_j(\\boldsymbol{x}_s^K) \\right) \\phi_i(\\boldsymbol{x}_s^K) \\omega_s^K \\right) \n" }, { "math_id": 80, "text": "\n\\begin{cases}\n\\dot{\\boldsymbol{Z}}_h(t) = \\boldsymbol{F}(\\boldsymbol{U}_h(t), \\boldsymbol{Z}_h(t)) & \\forall t \\in (0, T) \\\\\n\\boldsymbol{Z}_h(0) = \\boldsymbol{Z}_{0, h}\n\\end{cases}\n" }, { "math_id": 81, "text": "(0, T]" }, { "math_id": 82, "text": "\\Delta t = \\dfrac{T}{N}" }, { "math_id": 83, "text": "N" }, { "math_id": 84, "text": "[t_0 = 0, t_1 = \\Delta t, \\ldots, t_k, \\ldots, t_{N-1}, t_N = T]" }, { "math_id": 85, "text": "t = t^k" }, { "math_id": 86, "text": "(\\boldsymbol{U}_h^{k + 1}, \\boldsymbol{Z}_h^{k + 1})" }, { "math_id": 87, "text": "\n\\begin{cases}\n\\mathbb{M} \\alpha \\dfrac{ \\boldsymbol{U}_{h}^{k + 1} - \\boldsymbol{U}_{\\text{ext}, h}^{k}}{\\Delta t} + \\mathbb{K} \\boldsymbol{U}_h^{k + 1} + \\boldsymbol{J}_\\text{ion}(\\boldsymbol{U}_{h}^{k + 1}, \\boldsymbol{Z}_{h}^{k + 1}) = 0 \\\\\n\\alpha \\dfrac{\\boldsymbol{Z}_h^{k + 1} - \\boldsymbol{Z}_{\\text{ext}, h}^{k}}{\\Delta t} = \\boldsymbol{F}(\\boldsymbol{U}_h^{k + 1}, \\boldsymbol{Z}_h^{k + 1} )\n\\end{cases} \n" }, { "math_id": 88, "text": "\\boldsymbol{U}_{\\text{ext}, h}^{k}" }, { "math_id": 89, "text": "\\boldsymbol{Z}_{\\text{ext}, h}^{k}" }, { "math_id": 90, "text": "t^{k+1}" }, { "math_id": 91, "text": "\\alpha" }, { "math_id": 92, "text": "\\boldsymbol{Z}_h^{k + 1}" }, { "math_id": 93, "text": "\\boldsymbol{U}_{\\text{ext}, h}^k" }, { "math_id": 94, "text": "\n\\alpha \\dfrac{\\boldsymbol{Z}_h^{k + 1} - \\boldsymbol{Z}_{\\text{ext}, h}^{k}}{\\Delta t} = \\boldsymbol{F}(\\boldsymbol{U}_{\\text{ext}, h}^k, \\boldsymbol{Z}_h^{k + 1} )\n" }, { "math_id": 95, "text": "\\boldsymbol{U}_h^{k + 1}" }, { "math_id": 96, "text": "\n\\mathbb{M} \\alpha \\dfrac{ \\boldsymbol{U}_h^{k + 1} - \\boldsymbol{U}_{\\text{ext}, h}^k}{\\Delta t} + \\mathbb{K} \\boldsymbol{U}_h^{k + 1} + \\boldsymbol{J}_\\text{ion}(\\boldsymbol{U}_h^{k + 1}, \\boldsymbol{Z}_h^{k + 1}) = 0 \n" }, { "math_id": 97, "text": "\\boldsymbol{U}_{h}^{k + 1}" } ]
https://en.wikipedia.org/wiki?curid=61186826
61186827
Electro-oxidation
Technique used for wastewater treatment. Electro-oxidation (EO or EOx), also known as anodic oxidation or electrochemical oxidation (EC), is a technique used for wastewater treatment, mainly for industrial effluents, and is a type of advanced oxidation process (AOP). The most general layout comprises two electrodes, operating as anode and cathode, connected to a power source. When an energy input and sufficient supporting electrolyte are provided to the system, strong oxidizing species are formed, which interact with the contaminants and degrade them. The refractory compounds are thus converted into reaction intermediates and, ultimately, into water and CO2 by complete mineralization. Electro-oxidation has recently grown in popularity thanks to its ease of set-up and effectiveness in treating harmful and recalcitrant organic pollutants, which are typically difficult to degrade with conventional wastewater remediation processes. Also, it does not require any external addition of chemicals (contrary to other processes), as the required reactive species are generated at the anode surface. Electro-oxidation has been applied to treat a wide variety of harmful and non-biodegradable contaminants, including aromatics, pesticides, drugs and dyes. Due to its relatively high operating costs, it is often combined with other technologies, such as biological remediation. Electro-oxidation can additionally be paired with other electrochemical technologies such as electrocoagulation, consecutively or simultaneously, to further reduce operational costs while achieving high degradation standards. Apparatus. The set-up for performing an electro-oxidation treatment consists of an electrochemical cell. An external electric potential difference (i.e. a voltage) is applied to the electrodes, resulting in the formation of reactive species, namely hydroxyl radicals, in the proximity of the electrode surface. To assure a reasonable rate of generation of radicals, the voltage is adjusted to provide a current density of 10–100 mA/cm2. While the cathode materials are mostly the same in all cases, the anode materials can vary greatly according to the application (see the section on electrode materials below), as the reaction mechanism is strongly influenced by the material selection. Cathodes are mostly made of stainless steel plates, platinum mesh or carbon felt. Depending on the nature of the effluent, an increase of the conductivity of the solution may be required: a value of 1000 mS/cm is commonly taken as a threshold. Salts like sodium chloride or sodium sulfate can be added to the solution, acting as electrolytes, thus raising the conductivity. Typical salt concentrations are in the range of a few grams per liter, but the addition has a significant impact on power consumption and can reduce it by up to 30%. As the main cost associated with the electro-oxidation process is the consumption of electricity, its performance is typically assessed through two main parameters, namely current efficiency and specific energy consumption. Current efficiency is generally defined as the charge required for the oxidation of the considered species over the total charge passed during electrolysis. Although some expressions have been proposed to evaluate the instantaneous current efficiency, they have several limitations due to the presence of volatile intermediates or the need for specialized equipment. 
Thus, it is much easier to use a general current efficiency (GCE), defined as the average of the current efficiency over the entire process and formulated as follows: formula_0 where COD0 and CODt are the chemical oxygen demand (g/dm3) at time 0 and after the treatment time t, F is Faraday's constant (96,485 C/mol), V is the electrolyte volume (dm3), I is the current (A), t is the treatment time (h) and 8 is the oxygen equivalent mass. Current efficiency is a time-dependent parameter and it decreases monotonically with treatment time. Instead, the specific energy consumption measures the energy required to remove a unit of COD from the solution and is typically expressed in kWh/kgCOD. It can be calculated according to: formula_1 where EC is the cell voltage (V), I is the current (A), t is the treatment time (h), (ΔCOD)t is the COD decay at the end of the process (g/L) and Vs is the solution volume (L). As the current efficiency may vary significantly depending on the treated solution, one should always find the optimal compromise between current density, treatment time and the resulting specific energy consumption, so as to meet the required removal efficiency. Working principle. Direct oxidation. When voltage is applied to the electrodes, intermediates of oxygen evolution are formed near the anode, notably hydroxyl radicals. Hydroxyl radicals are known to have one of the highest redox potentials, allowing them to degrade many refractory organic compounds. A reaction mechanism has been proposed for the formation of the hydroxyl radical at the anode through oxidation of water: <chem>S + H2O -> S[*OH] + H+ + e-</chem> where S represents a generic surface site for adsorption on the electrode surface. The radical species can then interact with the contaminants through two different reaction mechanisms, according to the anode material. The surface of "active" anodes strongly interacts with hydroxyl radicals, leading to the production of higher-state oxides or superoxides. The higher oxide then acts as a mediator in the selective oxidation of organic pollutants. Since the radicals are strongly chemisorbed onto the electrode surface, the reactions are limited to the proximity of the anode surface, according to the mechanism: <chem>S[*OH] -> SO + H+ + e-</chem> <chem>SO + R -> S + RO</chem> where R is a generic organic compound and RO is the partially oxidized product. If the electrode interacts weakly with the radicals, it is qualified as a "non-active" anode. Hydroxyl radicals are physisorbed on the electrode surface by means of weak interaction forces and are thus available for reaction with contaminants. The organic pollutants are converted to fully oxidized products, such as CO2, and the reactions occur in a much less selective way with respect to active anodes: <chem>S[*OH] + R -> S + mCO2 + nH2O + H+ + e-</chem> Both chemisorbed and physisorbed radicals can undergo the competitive oxygen evolution reaction. For this reason, the distinction between active and non-active anodes is made according to their oxygen evolution overpotential. Electrodes with low oxygen overpotential show an active behavior, as in the case of platinum, graphite or mixed metal oxide electrodes. Conversely, electrodes with high oxygen overpotential will be non-active. Typical examples of non-active electrodes are lead dioxide or boron-doped diamond electrodes. A higher oxygen overpotential implies a lower yield of the oxygen evolution reaction, thus raising the anodic process efficiency. Mediated oxidation. 
When appropriate oxidizing agents are dissolved into the solution, the electro-oxidation process not only leads to the oxidation of organics at the electrode surface, but it also promotes the formation of other oxidant species within the solution. Such oxidizing chemicals are not bound to the anode surface and can extend the oxidation process to the entire bulk of the system. Chlorides are the most widespread species for mediated oxidation. This is due to chlorides being very common in most wastewater effluents and being easily converted into hypochlorite, according to the global reaction: <chem>Cl- + H2O -> ClO- + 2H+ + 2e-</chem> Although hypochlorite is the main product, chlorine and hypochlorous acid are also formed as reaction intermediates. Such species are strongly reactive with many organic compounds, promoting their mineralization, but they can also produce several unwanted intermediates and final products. These chlorinated by-products can sometimes be even more harmful than the raw effluent contaminants and require additional treatments to be removed. To avoid this issue, sodium sulfate is preferred to sodium chloride as the electrolyte, so that chloride ions are not available for the mediated oxidation reaction. Although sulfates can be involved in mediated oxidation as well, electrodes with high oxygen evolution overpotential are required to make it happen. Electrode materials. Carbon and graphite. Electrodes based on carbon or graphite are common due to their low cost and high surface area. Also, they are able to promote adsorption of contaminants on their surface while at the same time generating the radicals for electro-oxidation. However, they are not suited for working at high potentials, as under such conditions they experience surface corrosion, resulting in reduced efficiency and progressive degradation of the exposed area. In fact, the overpotential for oxygen evolution is quite low for graphite (1.7 V vs SHE). Platinum. Platinum electrodes provide good conductivity and they are inert and stable at high potentials. At the same time, the oxygen evolution overpotential is low (1.6 V vs SHE) and comparable to that of graphite. As a result, electro-oxidation with platinum electrodes usually provides low yields due to partial oxidation of the compounds. The contaminants are converted into stable intermediates that are difficult to break down, thus reducing the current efficiency for complete mineralization. Mixed metal oxides (MMOs). Mixed metal oxides, also known as dimensionally stable anodes, are very popular in the electrochemical process industry, because they are very effective in promoting both chlorine and oxygen evolution. In fact, they have been used extensively in the chloroalkali industry and for the water electrolysis process. In the case of wastewater treatment, they provide low current efficiency, because they favor the competitive reaction of oxygen evolution. Similarly to platinum electrodes, the formation of stable intermediates is favored over complete mineralization of the contaminants, resulting in reduced removal efficiency. Due to their ability to promote the chlorine evolution reaction, dimensionally stable anodes are the most common choice for processes relying on the mediated oxidation mechanism, especially in the case of chlorine and hypochlorite production. Lead dioxide. Lead dioxide electrodes have long been exploited in industrial applications, as they show high stability, large surface area and good conductivity, and they are quite cheap. 
In addition, lead dioxide has a very high oxygen evolution overpotential (1.9 V vs SHE), which implies a high current efficiency for complete mineralization. Also, lead dioxide electrodes were found to be able to generate ozone, another strong oxidizer, at high potentials, according to the following mechanism: <chem>PbO2[*OH] -> PbO2[O*] + H+ + e-</chem> <chem>PbO2[O*] + O2 -> PbO2 + O3</chem> Also, the electrochemical properties and the stability of these electrodes can be improved by selecting the proper crystal structure: the highly crystalline beta-phase of lead dioxide showed improved performance in the removal of phenols, due to the increased active surface provided by its porous structure. Moreover, the incorporation of metallic species, such as Fe, Bi or As, within the film was found to increase the current efficiency for mineralization. Boron-doped diamond (BDD). Synthetic diamond is doped with boron to raise its conductivity, making it usable as an electrochemical electrode. Once doped, BDD electrodes show high chemical and electrochemical stability, good conductivity, great resistance to corrosion even in harsh environments and a remarkably wide potential window (2.3 V vs SHE). For this reason, BDD is generally considered the most effective electrode for complete mineralization of organics, providing high current efficiency as well as lower energy consumption compared to all other electrodes. At the same time, the manufacturing processes for this electrode, usually based on high-temperature CVD technologies, are very costly. Reaction kinetics. Once the hydroxyl radicals are formed on the electrode surface, they rapidly react with organic pollutants, resulting in a lifetime of a few nanoseconds. However, a transfer of ions from the bulk of the solution to the proximity of the electrode surface is required for the reaction to occur. Above a certain potential, the active species formed near the electrode are immediately consumed and diffusion through the boundary layer near the electrode surface becomes the limiting step of the process. This explains why the observed rate of some fast electrode reactions can be low due to transport limitations. Evaluation of the limiting current density can be used as a tool to assess whether the electrochemical process is under diffusion control or not. If the mass transfer coefficient for the system is known, the limiting current density can be defined for a generic organic pollutant according to the relation: formula_2 where jL is the limiting current density (A/m2), F is Faraday's constant (96,485 C/mol), kd is the mass transfer coefficient (m/s), COD is the chemical oxygen demand for the organic pollutant (g/dm3) and 8 is the oxygen equivalent mass. According to this equation, the lower the COD, the lower the corresponding limiting current. Hence, systems with low COD are likely to operate under diffusion control, exhibiting pseudo-first-order kinetics with an exponential decrease. Conversely, for high COD concentrations (roughly above 4000 mg/L), pollutants are degraded under kinetic control (actual current below the limiting value), following a linear trend according to zero-order kinetics. For intermediate values, the COD initially decreases linearly, under kinetic control, but below a critical COD value diffusion becomes the limiting step, resulting in an exponential trend. 
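The three figures of merit introduced above (general current efficiency, specific energy consumption and limiting current density) can be computed directly from their defining equations. The sketch below is only a numerical illustration: the input values (COD decay, current, voltage, volume and mass transfer coefficient) are assumptions, not data from a real treatment, and the treatment time is converted to seconds in the GCE so that the ratio of charges is dimensionless.
<syntaxhighlight lang="python">
# Illustrative evaluation of the performance parameters defined above;
# all input values are assumed, not measured data.
F = 96485.0  # Faraday's constant, C/mol

def general_current_efficiency(cod_0, cod_t, volume_dm3, current_A, time_h):
    """GCE = F * V * (COD0 - CODt) / (8 * I * t), with t converted to seconds."""
    return F * volume_dm3 * (cod_0 - cod_t) / (8.0 * current_A * time_h * 3600.0)

def specific_energy_consumption(cell_voltage_V, current_A, time_h, delta_cod_g_per_L, volume_L):
    """EC = Ecell * I * t / ((deltaCOD)t * Vs), in kWh per kg of COD removed."""
    return (cell_voltage_V * current_A * time_h) / (delta_cod_g_per_L * volume_L)

def limiting_current_density(k_d_m_per_s, cod_g_dm3):
    """jL = F * kd * COD / 8, in A/m2 (COD converted from g/dm3 to g/m3)."""
    return F * k_d_m_per_s * (cod_g_dm3 * 1000.0) / 8.0

# Hypothetical treatment: COD reduced from 1.2 to 0.4 g/dm3 in 4 h at 2 A and 5 V
# in 2 L of solution, with a mass transfer coefficient of 2e-5 m/s.
print(general_current_efficiency(1.2, 0.4, 2.0, 2.0, 4.0))    # ~0.67, i.e. 67 %
print(specific_energy_consumption(5.0, 2.0, 4.0, 0.8, 2.0))   # 25 kWh/kgCOD
print(limiting_current_density(2e-5, 1.2))                    # ~290 A/m2
</syntaxhighlight>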
If the limiting current density is obtained with other analytical procedures, such as cyclic voltammetry, the proposed equation can be used to retrieve the corresponding mass transfer coefficient for the investigated system. Applications. Given the thorough investigations on process design and electrode formulation, electro-oxidation has already been applied to both pilot-scale and full-scale commercially available plants. Some relevant cases are listed below:
[ { "math_id": 0, "text": "GCE=FV\\frac{(COD_0-COD_t)}{8It}" }, { "math_id": 1, "text": "EC=\\frac{E_CIt}{(\\Delta COD)_tV_s}" }, { "math_id": 2, "text": "j_L=\\frac{Fk_dCOD}{8}" } ]
https://en.wikipedia.org/wiki?curid=61186827
61186830
Reinforced concrete structures durability
Set of processes and factors determining the service life of reinforced concrete structures. The durability design of reinforced concrete structures has recently been introduced in national and international regulations. It is required that structures are designed to preserve their characteristics during the service life, avoiding premature failure and the need for extraordinary maintenance and restoration works. Considerable efforts have therefore been made in the last decades to define useful models describing the degradation processes affecting reinforced concrete structures, to be used during the design stage in order to assess the material characteristics and the structural layout of the structure. Service life of a reinforced concrete structure. Initially, the chemical reactions that normally occur in the cement paste generate an alkaline environment, bringing the solution in the cement paste pores to pH values around 13. In these conditions, passivation of the steel rebar occurs, due to the spontaneous generation of a thin film of oxides able to protect the steel from corrosion. Over time, this thin film can be damaged, and corrosion of the steel rebar starts. The corrosion of steel rebar is one of the main causes of premature failure of reinforced concrete structures worldwide, mainly as a consequence of two degradation processes, carbonation and penetration of chlorides. With regard to the corrosion degradation process, a simple and widely accepted model for the assessment of the service life is the one proposed by Tuutti in 1982. According to this model, the service life of a reinforced concrete structure can be divided into two distinct phases: an initiation phase, of duration formula_0, during which the aggressive agents penetrate the concrete cover without damaging the steel, and a propagation phase, of duration formula_1, which starts at corrosion onset and ends when a defined limit state is reached. The identification of initiation time and propagation time is useful to further identify the main variables and processes influencing the service life of the structure, which are specific to each service life phase and to the degradation process considered. Carbonation-induced corrosion. The initiation time is related to the rate at which carbonation propagates in the concrete cover thickness. Once carbonation reaches the steel surface, altering the local pH value of the environment, the protective thin film of oxides on the steel surface becomes unstable, and corrosion initiates involving an extended portion of the steel surface. One of the simplest and most widely accepted models describing the propagation of carbonation in time considers the penetration depth proportional to the square root of time, following the correlation formula_2 where formula_3 is the carbonation depth, formula_4 is time, and formula_5 is the carbonation coefficient. Corrosion onset takes place when the carbonation depth reaches the concrete cover thickness; the initiation time can therefore be evaluated as formula_6 where formula_7 is the concrete cover thickness. formula_5 is the key design parameter to assess the initiation time in the case of carbonation-induced corrosion. It is expressed in mm/year^(1/2) and depends on the characteristics of the concrete and the exposure conditions. The penetration of gaseous CO2 in a porous medium such as concrete occurs via diffusion. The humidity content of concrete is one of the main factors influencing CO2 diffusion in concrete. If the concrete pores are completely and permanently saturated (for instance in submerged structures), CO2 diffusion is prevented. On the other hand, for completely dry concrete, the chemical reaction of carbonation cannot occur. Another influencing factor for the CO2 diffusion rate is concrete porosity. 
Concrete obtained with a higher w/c ratio or with an incorrect curing process presents higher porosity in the hardened state, and is therefore subject to a higher carbonation rate. The influencing factors concerning the exposure conditions are the environmental temperature, humidity and concentration of CO2. The carbonation rate is higher for environments with higher humidity and temperature, and increases in polluted environments such as urban centres and inside closed spaces such as tunnels. To evaluate the propagation time in the case of carbonation-induced corrosion, several models have been proposed. In a simplified but commonly accepted method, the propagation time is evaluated as a function of the corrosion propagation rate. If the corrosion rate is considered constant, formula_1 can be estimated as: formula_8 where formula_9 is the limit corrosion penetration in the steel and formula_10 is the corrosion propagation rate. formula_9 must be defined as a function of the limit state considered. Generally, for carbonation-induced corrosion, concrete cover cracking is considered as the limit state, and in this case a formula_9 equal to 100 μm is assumed. formula_10 depends on the environmental factors in the proximity of the corrosion process, such as the availability of oxygen and water at the concrete cover depth. Oxygen is generally available at the steel surface, except for submerged structures. If the pores are constantly fully saturated, a very low amount of oxygen reaches the steel surface and the corrosion rate can be considered negligible. For very dry concretes, formula_10 is negligible due to the absence of water, which prevents the chemical reaction of corrosion. For intermediate humidity contents, the corrosion rate increases with increasing concrete humidity content. Since the humidity content in a concrete can vary significantly over the year, it is generally not possible to define a constant formula_10. One possible approach is to consider a mean annual value of formula_10. Chloride-induced corrosion. The presence of chlorides at the steel surface, above a certain critical amount, can locally break the protective thin film of oxides on the steel surface, even if the concrete is still alkaline, causing a very localized and aggressive form of corrosion known as pitting. Current regulations forbid the use of chloride-contaminated raw materials; therefore, one factor influencing the initiation time is the rate of chloride penetration from the environment. Modelling this penetration is a complex task, because chloride solutions penetrate concrete through a combination of several transport phenomena, such as diffusion, capillary effect and hydrostatic pressure. Chloride binding is another phenomenon affecting the kinetics of chloride penetration. Part of the total chloride ions can be absorbed or can chemically react with some constituents of the cement paste, leading to a reduction of the chlorides in the pore solution (the free chlorides that are still able to penetrate into the concrete). The chloride-binding ability of a concrete is related to the cement type, being higher for blended cements containing silica fume, fly ash or furnace slag. Since the modelling of chloride penetration in concrete is particularly complex, a simplified correlation, first proposed by Collepardi in 1972, is generally adopted: formula_11 where formula_12 is the chloride concentration at the exposed surface, x is the chloride penetration depth, D is the chloride diffusion coefficient, and t is time. 
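A minimal numerical sketch of the service-life estimates described above (square-root carbonation law with its propagation term, and the Collepardi chloride profile) is given below. All input values (cover depth, carbonation coefficient, corrosion rate, surface chloride content, diffusion coefficient and critical threshold) are illustrative assumptions, not values prescribed by any standard.
<syntaxhighlight lang="python">
# Deterministic sketch of the carbonation and chloride service-life models above.
# All numerical inputs are illustrative assumptions.
from math import erf

def carbonation_initiation_time(cover_mm, K_mm_per_sqrt_year):
    """t_i = (c / K)^2: years for the carbonation front to reach the rebar."""
    return (cover_mm / K_mm_per_sqrt_year) ** 2

def propagation_time(p_lim_um=100.0, v_corr_um_per_year=10.0):
    """t_p = p_lim / v_corr, with the 100 um cover-cracking limit state."""
    return p_lim_um / v_corr_um_per_year

def chloride_profile(Cs, D_mm2_per_year, x_mm, t_year):
    """Collepardi solution: C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))."""
    return Cs * (1.0 - erf(x_mm / (2.0 * (D_mm2_per_year * t_year) ** 0.5)))

def chloride_initiation_time(Cs, Ccl, D_mm2_per_year, cover_mm, dt=0.1, t_max=200.0):
    """Smallest time at which C(cover, t) reaches the critical threshold Ccl."""
    t = dt
    while t <= t_max:
        if chloride_profile(Cs, D_mm2_per_year, cover_mm, t) >= Ccl:
            return t
        t += dt
    return float("inf")  # threshold not reached within t_max

# Assumed values: 40 mm cover, K = 6 mm/year^0.5, Cs = 0.6 % and Ccl = 0.4 %
# by weight of cement, D = 100 mm^2/year, corrosion rate 10 um/year.
t_i_carbonation = carbonation_initiation_time(40.0, 6.0)        # ~44 years
t_i_chloride = chloride_initiation_time(0.6, 0.4, 100.0, 40.0)  # ~43 years
service_life = t_i_carbonation + propagation_time()             # Tuutti model: t_i + t_p
</syntaxhighlight>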
The Collepardi correlation is a solution of Fick's second law of diffusion under the hypotheses that the initial chloride content is zero, that formula_12 is constant in time over the whole surface, and that D is constant in time and through the concrete cover. With formula_12 and D known, the equation can be used to evaluate the temporal evolution of the chloride concentration profile in the concrete cover and to evaluate the initiation time as the moment at which the critical chloride threshold (formula_13) is reached at the depth of the steel rebar. However, there are many critical issues related to the practical use of this model. For existing reinforced concrete structures in chloride-bearing environments, formula_12 and D can be identified by calculating the best-fit curve for measured chloride concentration profiles. From concrete samples retrieved in the field it is therefore possible to define the values of Cs and D for residual service life evaluation. On the other hand, for new structures it is more complicated to define formula_12 and D. These parameters depend on the exposure conditions, the properties of the concrete such as porosity (and therefore w/c ratio and curing process) and the type of cement used. Furthermore, for the evaluation of the long-term behaviour of the structure, a critical issue is that formula_12 and D cannot be considered constant in time, and that the transport of chlorides can be considered as pure diffusion only for submerged structures. A further issue is the assessment of formula_13. There are various influencing factors, such as the potential of the steel rebar and the pH of the solution contained in the concrete pores. Moreover, pitting corrosion initiation is a phenomenon with a stochastic nature, therefore formula_13 too can be defined only on a statistical basis. Corrosion prevention. Durability assessment was introduced in European design codes at the beginning of the 1990s. Designers are required to include the effects of long-term corrosion of the steel rebar during the design stage, in order to avoid unacceptable damage during the service life of the structure. Different approaches are then available for the durability design. Standard approach. This is the standardized method to deal with durability, also known as the deemed-to-satisfy approach, provided by the current European standard EN 206. It is required that the designer identifies the environmental exposure conditions and the expected degradation process, assessing the correct exposure class. Once this is defined, the design code gives standard prescriptions for the w/c ratio, the cement content and the thickness of the concrete cover. This approach represents a step forward for the durability design of reinforced concrete structures; it is suitable for the design of ordinary structures built with traditional materials (Portland cement, carbon steel rebar) and with an expected service life of 50 years. Nevertheless, it is not considered completely exhaustive in some cases. The simple prescriptions do not make it possible to optimize the design for different parts of the structure with different local exposure conditions. Furthermore, they do not allow the effects on service life of special measures, such as the use of additional protections, to be taken into account. Performance-based approach. Performance-based approaches provide an actual durability design, based on models describing the evolution in time of the degradation processes and on the definition of the times at which predefined limit states will be reached. 
To consider the wide variety of factors influencing service life and their variability, performance-based approaches address the problem from a probabilistic or semi-probabilistic point of view. The performance-based service life model proposed by the European project DuraCrete, and by the FIB Model Code for Service Life Design, is based on a probabilistic approach, similar to the one adopted for structural design. Environmental factors are considered as loads S(t), while material properties such as the chloride penetration resistance are considered as resistances R(t). For each degradation process, design equations are set up to evaluate the probability of failure with respect to predefined performances of the structure, where the acceptable probability is selected on the basis of the limit state considered. The degradation processes are still described with the models previously defined for carbonation-induced and chloride-induced corrosion, but to reflect the statistical nature of the problem, the variables are considered as probability distribution curves over time. To assess some of the durability design parameters, the use of accelerated laboratory tests is suggested, such as the so-called Rapid Chloride Migration Test to evaluate the chloride penetration resistance of concrete. Through the application of corrective parameters, the long-term behaviour of the structure in real exposure conditions may be evaluated. The use of probabilistic service life models allows an actual durability design to be carried out at the design stage of structures. This approach is of particular interest when an extended service life is required (>50 years) or when the environmental exposure conditions are particularly aggressive. However, the applicability of this kind of model is still limited. The main critical issues still concern, for instance, the identification of accelerated laboratory tests able to characterize concrete performance, of reliable corrective factors to be used for the evaluation of long-term durability performance, and the validation of these models against real long-term durability performance. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "t_{i}" }, { "math_id": 1, "text": "t_{p}" }, { "math_id": 2, "text": "x=K \\sqrt{t}" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "K" }, { "math_id": 6, "text": "t_{i}=\\left( \\frac{c}{K} \\right)^2" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "t_{p}= \\frac{p_{lim}}{v_{corr}}" }, { "math_id": 9, "text": "p_{lim}" }, { "math_id": 10, "text": "v_{corr}" }, { "math_id": 11, "text": "C(x,t)=C_{s}\\left[1- \\mathrm{erf} \\left( \\frac{x}{2\\sqrt{Dt}}\\right) \\right]" }, { "math_id": 12, "text": "C_{s}" }, { "math_id": 13, "text": "C_{cl}" } ]
https://en.wikipedia.org/wiki?curid=61186830
61186833
Truth discovery
Process of choosing the actual true value for a data item. Truth discovery (also known as truth finding) is the process of choosing the actual "true value" for a data item when different data sources provide conflicting information on it. Several algorithms have been proposed to tackle this problem, ranging from simple methods like majority voting to more complex ones able to estimate the trustworthiness of data sources. Truth discovery problems can be divided into two sub-classes: single-truth and multi-truth. In the first case only one true value is allowed for a data item (e.g. the birthday of a person or the capital city of a country), while in the second case multiple true values are allowed (e.g. the cast of a movie or the authors of a book). Typically, truth discovery is the last step of a data integration pipeline, when the schemas of different data sources have been unified and the records referring to the same data item have been detected. General principles. The abundance of data available on the web makes it more and more probable that different sources provide (partially or completely) different values for the same data item. This, together with the fact that we increasingly rely on data to derive important decisions, motivates the need to develop good truth discovery algorithms. Many currently available methods rely on a voting strategy to define the true value of a data item. Nevertheless, recent studies have shown that, if we rely only on majority voting, we could get wrong results for as many as 30% of the data items. The solution to this problem is to assess the trustworthiness of the sources and give more importance to votes coming from trusted sources. Ideally, supervised learning techniques could be exploited to assign a reliability score to sources after hand-crafted labeling of the provided values; unfortunately, this is not feasible since the number of labeled examples needed would be proportional to the number of sources, and in many applications the number of sources can be prohibitive. Single-truth vs multi-truth discovery. Single-truth and multi-truth discovery are two very different problems. In single-truth discovery, exactly one value is accepted as true for each data item, so every value that differs from it is false, and sources providing different values for the same item implicitly oppose each other. In the multi-truth case, instead, the truth of a data item is a set of values: sources may provide only a subset of the true values without being wrong, a source does not necessarily oppose the values it omits, and the number of true values is usually not known in advance. Multi-truth discovery therefore has unique features that make the problem more complex and that should be taken into consideration when developing truth-discovery solutions. The examples below point out the main differences between the two settings. Knowing that in both examples the truth is provided by source 1, in the single-truth case (first table) we can say that sources 2 and 3 oppose the truth and as a result provide wrong values. On the other hand, in the second case (second table), sources 2 and 3 are neither correct nor erroneous; they instead provide a subset of the true values and at the same time they do not oppose the truth. Source trustworthiness. The vast majority of truth discovery methods are based on a voting approach: each source votes for a value of a certain data item and, at the end, the value with the highest vote is selected as the true one. In the more sophisticated methods, votes do not have the same weight for all the data sources; more importance is given to votes coming from trusted sources. Source trustworthiness is usually not known a priori but is estimated with an iterative approach. 
At each step of the truth discovery algorithm, the trustworthiness score of each data source is refined, improving the assessment of the true values, which in turn leads to a better estimation of the trustworthiness of the sources. This process usually ends when all the values reach a convergence state. Source trustworthiness can be based on different metrics, such as the accuracy of the provided values, the copying of values from other sources and domain coverage. Detecting copying behaviors is very important; in fact, copying allows false values to spread easily, making truth discovery very hard, since many sources would vote for the wrong values. Systems usually decrease the weight of votes associated with copied values, or do not count them at all. Single-truth methods. Most of the currently available truth discovery methods have been designed to work well only in the single-truth case. Some characteristics of the most relevant types of single-truth methods, and how different systems model source trustworthiness, are reported below. Majority voting. Majority voting is the simplest method: the most popular value is selected as the true one. Majority voting is commonly used as a baseline when assessing the performance of more complex methods. Web-link based. These methods estimate source trustworthiness by exploiting a technique similar to the one used to measure the authority of web pages based on web links. The vote assigned to a value is computed as the sum of the trustworthiness of the sources that provide that particular value, while the trustworthiness of a source is computed as the sum of the votes assigned to the values that the source provides. Information-retrieval based. These methods estimate source trustworthiness using similarity measures typically used in information retrieval. Source trustworthiness is computed as the cosine similarity (or another similarity measure) between the set of values provided by the source and the set of values considered true (either selected in a probabilistic way or obtained from a ground truth). Bayesian based. These methods use Bayesian inference to define the probability of a value being true conditioned on the values provided by all the sources: formula_0 where formula_1 is a value provided for a data item formula_2 and formula_3 is the set of the observed values provided by all the sources for that specific data item. The trustworthiness of a source is then computed based on the accuracy of the values that it provides. Other more complex methods exploit Bayesian inference to detect copying behaviors and use these insights to better assess source trustworthiness. Multi-truth methods. Due to its complexity, less attention has been devoted to the study of multi-truth discovery. Two types of multi-truth methods and their characteristics are reported below. Bayesian based. These methods use Bayesian inference to define the probability of a group of values being true conditioned on the values provided by all the data sources. In this case, since there could be multiple true values for each data item, and sources can provide multiple values for a single data item, it is not possible to consider values individually. An alternative is to consider mappings and relations between sets of provided values and the sources providing them. The trustworthiness of a source is then computed based on the accuracy of the values that it provides. More sophisticated methods also consider domain coverage and copying behaviors to better estimate source trustworthiness. 
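The following toy sketch illustrates the Bayesian scoring idea shared by the single-truth and multi-truth methods above, under the simplifying assumptions of independent sources with known accuracies and a uniform error model over a fixed number of wrong alternatives; it is not a specific published algorithm, and the data and accuracy values are hypothetical.
<syntaxhighlight lang="python">
# Toy Bayesian scoring of candidate values: P(v | claims) is proportional to
# P(v) * product over sources of P(claim_s | v). Purely illustrative.
def posterior_scores(claims, accuracy, n_alternatives=10, prior=None):
    """claims: {source: claimed_value}; accuracy: {source: P(source is correct)}."""
    candidates = set(claims.values())
    prior = prior or {v: 1.0 / len(candidates) for v in candidates}
    scores = {}
    for v in candidates:
        p = prior[v]
        for s, claimed in claims.items():
            # if v were true, source s is right when it claims v; otherwise it
            # picked one of n_alternatives wrong values (uniform error model)
            p *= accuracy[s] if claimed == v else (1 - accuracy[s]) / n_alternatives
        scores[v] = p
    total = sum(scores.values())
    return {v: p / total for v, p in scores.items()}   # normalize to probabilities

# Hypothetical example: two fairly reliable sources against one less reliable source.
print(posterior_scores(
    claims={"s1": "Canberra", "s2": "Canberra", "s3": "Sydney"},
    accuracy={"s1": 0.9, "s2": 0.8, "s3": 0.6},
))   # "Canberra" ends up with a posterior probability close to 1
</syntaxhighlight>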
Probabilistic graphical models based. These methods use probabilistic graphical models to automatically define the set of true values of a given data item and also to assess source quality without the need for any supervision. Applications. Many real-world applications can benefit from the use of truth discovery algorithms. Typical domains of application include: healthcare, crowd/social sensing, crowdsourcing aggregation, information extraction and knowledge base construction. Truth discovery algorithms could also be used to revolutionize the way in which web pages are ranked in search engines, going from current methods based on link analysis, like PageRank, to procedures that rank web pages based on the accuracy of the information they provide.
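To make the iterative voting-and-trust-estimation loop described in the "Source trustworthiness" section concrete, here is a minimal single-truth sketch; the claim data are hypothetical, and the weighting and smoothing choices are simplifying assumptions rather than a specific published method.
<syntaxhighlight lang="python">
# Minimal iterative truth discovery: alternate between (1) weighted voting on the
# true values and (2) re-estimating source trustworthiness. Illustrative only.
from collections import defaultdict

# claims[source][item] = value asserted by that source (hypothetical toy data)
claims = {
    "s1": {"capital_australia": "Canberra", "birthday_turing": "1912-06-23"},
    "s2": {"capital_australia": "Sydney",   "birthday_turing": "1912-06-23"},
    "s3": {"capital_australia": "Canberra", "birthday_turing": "1912-07-23"},
}

trust = {s: 0.5 for s in claims}           # start with uniform trustworthiness

for _ in range(20):                        # iterate until (approximate) convergence
    # 1) weighted vote: score every claimed value by the trust of its supporters
    scores = defaultdict(lambda: defaultdict(float))
    for s, items in claims.items():
        for item, value in items.items():
            scores[item][value] += trust[s]
    truths = {item: max(vals, key=vals.get) for item, vals in scores.items()}

    # 2) re-estimate trust: fraction of a source's claims matching the current
    #    truths, smoothed to avoid degenerate 0/1 weights
    for s, items in claims.items():
        correct = sum(1 for item, value in items.items() if truths[item] == value)
        trust[s] = (correct + 1.0) / (len(items) + 2.0)

print(truths)   # {'capital_australia': 'Canberra', 'birthday_turing': '1912-06-23'}
print(trust)    # sources agreeing with the estimated truths end up with higher weight
</syntaxhighlight>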
[ { "math_id": 0, "text": "P(v \\mid \\psi(o)) = \\frac{P(\\psi(o) \\mid v) \\cdot P(v)}{P(\\psi(o))}" }, { "math_id": 1, "text": "\\textstyle v" }, { "math_id": 2, "text": "\\textstyle o" }, { "math_id": 3, "text": "\\textstyle \\psi(o)" } ]
https://en.wikipedia.org/wiki?curid=61186833
61186989
Two-dimensional electronic spectroscopy
Two-dimensional electronic spectroscopy (2DES) is an ultrafast laser spectroscopy technique that allows the study of ultrafast phenomena inside systems in the condensed phase. The term electronic refers to the fact that optical frequencies in the visible spectral range are used to excite electronic energy states of the system; however, such a technique is also used in the IR optical range (excitation of vibrational states), and in this case the method is called two-dimensional infrared spectroscopy (2DIR). This technique records the signal which is emitted from a system after an interaction with a sequence of three laser pulses. Such pulses usually have a duration of a few hundred femtoseconds (1 fs = 10^−15 s), and this high time resolution allows capturing dynamics inside the system that evolve on the same time scale. The main result of this technique is a two-dimensional absorption spectrum that shows the correlation between excitation and detection frequencies. The first 2DES spectra were recorded in 1998. Basic concepts about 2DES. Pulse sequence. The pulse sequence in this experiment is the same as in 2DIR: the delay between the first and second pulse is called the coherence time and is usually labeled as formula_0. The delay between the second and the third pulse is called the population time and is labeled as formula_1. The time after the third pulse corresponds to the detection time formula_2, which is usually Fourier transformed by a spectrometer. The interaction with the pulses creates a third-order nonlinear response function formula_3 of the system, from which it is possible to extract two-dimensional spectra as a function of excitation and detection frequencies. Although third-order two-dimensional spectroscopy was historically the first and is the most popular, higher-order two-dimensional spectroscopy approaches have also been developed. 2D Signal. A possible way to recover an analytical expression of the response function is to consider the system as an ensemble and deal with the light-matter interaction process by using the density matrix approach. This approach shows that the response function is proportional to the product of the three pulses' electric fields. Denoting by formula_4 the wave vectors of the three pulses, the nonlinear signal is emitted in several directions formula_5, which are derived from linear combinations of the three wave vectors: formula_6. For this technique, two different signals which propagate in different directions are usually taken into account. When formula_7 the signal is called rephasing and when formula_8 the signal is called non-rephasing. An interpretation of these signals is possible by considering the system to be composed of many electric dipoles. When the first pulse interacts with the system, the dipoles start to oscillate in phase. The signal generated from each dipole rapidly dephases due to the different interaction that each dipole experiences with the environment. The interaction with the third pulse, in the case of rephasing, generates a signal which has an opposite temporal evolution with respect to the previous one. The dephasing of the last signal during formula_2 compensates the one during formula_0. When formula_9 the oscillations are in phase again and the new signal generated is called a photon echo. In the other case, there is no creation of a photon echo and the signal is called non-rephasing. From these signals it is possible to extract the pure absorptive and dispersive spectra which are usually shown in the literature. 
The real part of the sum of these two signals represents the absorption of the system and the imaginary part contains the dispersion contribution. In the absorptive 2D spectra, the sign of a peak implies different effects. If the transmitted signal is plotted, a positive peak can be associated with ground-state bleaching or stimulated emission. If the sign is negative, that peak on the 2D spectrum is associated with a photoinduced absorption. Acquisition of the 2DES spectra. The first and the second pulses act as a pump and the third as a probe. The time-domain nonlinear response of the system interferes with another pulse called the local oscillator (LO), which allows measurement of both amplitude and phase. Such a signal is usually acquired with a spectrometer which separates the contribution of each spectral component (detection frequencies formula_10). The acquisition proceeds by scanning the delay formula_0 for a fixed delay formula_1. Once the scan ends, the detector has acquired a signal as a function of the coherence time for each detection frequency, formula_11. The application of the Fourier transform along the formula_0 axis allows for recovery of the excitation spectra for every formula_10. The result of this procedure is a 2D map formula_13 that shows the correlation between excitation (formula_12) and detection frequency (formula_10) at a fixed population time. The time evolution of the system can be measured by repeating the procedure described before for different values of formula_1. There are several methods to implement this technique, all of which are based on different configurations of the pulses. Two examples of possible implementations are the "boxcar geometry" and the "partially collinear geometry". The boxcar geometry is a configuration where all the pulses arrive at the system from different directions (formula_14); this property allows the rephasing and non-rephasing signals to be acquired separately. The partially collinear geometry is another implementation of this technique in which the first and the second pulse come from the same direction (formula_15). In this case, the rephasing and non-rephasing signals are emitted in the same direction and it is possible to directly recover the absorptive and dispersive spectra of the system. Information acquired from 2DES. 2D spectra contain a lot of information about the system; in particular, the amplitude, position and lineshape of the peaks are related to different effects occurring inside the system. Position of the peaks. Diagonal Peaks. The peaks that lie along the diagonal line in the 2D spectra are called diagonal peaks. These peaks appear when the system emits a signal that oscillates at the same frequency as the excitation signal. These peaks reflect the information contained in the linear absorption spectrum. Cross Peaks. The peaks that lie off the diagonal line are called cross peaks. These peaks appear when the system emits a signal that oscillates at a different frequency with respect to the signal used to excite it. The appearance of a cross peak means that two electronic states of the system are coupled, because when the pulses pump one electronic state, the system responds with emission from a different energy level. This coupling can be related to an energy transfer or charge transfer process between molecules. Lineshape of the peaks. Short population times. Thanks to the high spectral resolution, this technique acquires information based on the two-dimensional shape of the peaks. 
When formula_1 is close to zero, the diagonal peaks show an elliptical lineshape. The width along the diagonal line represents the inhomogeneous broadening, which contains information about interactions between the environment and the system. If the system is composed of a large number of identical molecules, each of them interacts with the environment in a slightly different way; this implies that the same electronic state is slightly shifted from molecule to molecule. The value of this linewidth will be close to the one calculated in the linear absorption spectrum. On the other hand, the linewidth along the off-diagonal direction is smaller than the diagonal one. In this case the spectral broadening contains a contribution from local interactions inside the system; for this reason, this width reflects the homogeneous broadening. Long population times. For formula_16 fs, the shape of the peaks becomes circular and the widths along the diagonal and off-diagonal directions are similar. This phenomenon takes place because all the molecules of the system have experienced different local environments and the entire system loses memory of the initial condition. This effect is called spectral diffusion. Temporal lineshape evolution. The temporal evolution of the lineshape can be evaluated with several methods. One method evaluates the linewidths along the diagonal and off-diagonal directions separately. From the two values of the widths it is possible to calculate the flattening as formula_17, where formula_18 is the linewidth along the diagonal line and formula_19 is the linewidth along the off-diagonal line. The flattening curve as a function of formula_1 assumes a value close to 1 at formula_20 fs (formula_21) and then decreases to zero at formula_16 fs (formula_22). Another method is called Central Line Slope (CLS). In this case the positions of the maximum values in the 2D spectra for each detection frequency are considered. These points are then interpolated with a linear function, from which it is possible to extract the slope formula_23 between this function and the detection axis (x-axis). From a theoretical point of view, this value should be 45° when formula_1 is close to zero because the peak is elongated along the diagonal line. When the peak assumes a circular lineshape, the value of the slope goes to zero. The same approach can also be used by considering the positions of the maximum values for each excitation frequency (y-axis); in this case the slope will be 45° at formula_20 fs and 90° when the shape becomes circular. References. 
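The center line slope analysis described above can be illustrated with a short numerical sketch. The code below is not taken from the 2DES literature: the frequency axes, the peak widths and the synthetic Gaussian peaks are arbitrary stand-ins, and the sketch only shows how a CLS value could be extracted from a 2D map stored as an array with the excitation frequency along the first axis and the detection frequency along the second.

```python
import numpy as np

def center_line_slope(spectrum, nu1, nu3):
    """For each detection frequency, locate the excitation frequency of the
    peak maximum, then fit a straight line through these points and return
    its slope with respect to the detection (x) axis."""
    nu1_of_max = nu1[np.argmax(spectrum, axis=0)]    # axis 0 = excitation axis
    return np.polyfit(nu3, nu1_of_max, 1)[0]

# Frequency axes in arbitrary units; the peak parameters are purely synthetic.
nu = np.linspace(-1.0, 1.0, 201)
NU1, NU3 = np.meshgrid(nu, nu, indexing="ij")        # excitation, detection

def diagonal_peak(sigma_diag, sigma_antidiag):
    d = (NU1 + NU3) / np.sqrt(2)                     # coordinate along the diagonal
    a = (NU1 - NU3) / np.sqrt(2)                     # coordinate across the diagonal
    return np.exp(-(d / sigma_diag) ** 2 - (a / sigma_antidiag) ** 2)

early = diagonal_peak(0.5, 0.1)   # short population time: elongated peak
late = diagonal_peak(0.3, 0.3)    # long population time: circular peak

for label, spec in (("short t2", early), ("long t2", late)):
    slope = center_line_slope(spec, nu, nu)
    print(f"{label}: CLS angle = {np.degrees(np.arctan(slope)):.1f} deg")
```

The elongated peak gives an angle close to 45°, while the circular peak gives an angle close to zero, mirroring the behaviour described above.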
[ { "math_id": 0, "text": "t_1" }, { "math_id": 1, "text": "t_2" }, { "math_id": 2, "text": "t_3" }, { "math_id": 3, "text": "S(t_1,t_2,t_3)" }, { "math_id": 4, "text": "\\vec{k_1},\\vec{k_2},\\vec{k_3}" }, { "math_id": 5, "text": "\\vec{k_s}" }, { "math_id": 6, "text": "\\vec{k_s}=\\pm \\vec{k_1}\\pm \\vec{k_2}\\pm \\vec{k_3}" }, { "math_id": 7, "text": "\\vec{k_s}=-\\vec{k_1}+\\vec{k_2}+\\vec{k_3}" }, { "math_id": 8, "text": "\\vec{k_s}=\\vec{k_1}-\\vec{k_2}+\\vec{k_3}" }, { "math_id": 9, "text": "t_3=t_1" }, { "math_id": 10, "text": "\\nu_3" }, { "math_id": 11, "text": "S(t_1,t_2,\\nu_3)" }, { "math_id": 12, "text": "\\nu_1" }, { "math_id": 13, "text": "S(\\nu_1,t_2,\\nu_3)" }, { "math_id": 14, "text": "\\vec{k_1}\\neq\\vec{k_2}\\neq\\vec{k_3}" }, { "math_id": 15, "text": "\\vec{k_1}=\\vec{k_2}\\neq\\vec{k_3}" }, { "math_id": 16, "text": "t_2\\gg 0" }, { "math_id": 17, "text": "f=\\frac{a-b}{a}" }, { "math_id": 18, "text": "a" }, { "math_id": 19, "text": "b" }, { "math_id": 20, "text": "t_2=0" }, { "math_id": 21, "text": "a\\gg b" }, { "math_id": 22, "text": "a\\simeq b" }, { "math_id": 23, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=61186989
61187256
Charge modulation spectroscopy
Charge modulation spectroscopy is an electro-optical spectroscopy technique. It is used to study the charge carrier behavior of organic field-effect transistors. It measures the charge-induced variation of the optical transmission by directly probing the accumulated charge at the buried interface between the semiconductor and the dielectric layer, where the conduction channel forms. Principles. Unlike ultraviolet–visible spectroscopy, which measures absorbance, "charge modulation spectroscopy" measures the charge-induced variation of the optical transmission. In other words, it reveals the new features in the optical transmission introduced by charges. In this setup, there are mainly four components: a lamp, a monochromator, a photodetector and a lock-in amplifier. The lamp and the monochromator are used for generating and selecting the wavelength. The selected wavelength passes through the transistor, and the transmitted light is recorded by the photodiode. When the signal-to-noise ratio is very low, the signal can be modulated and recovered with a lock-in amplifier. In the experiment, a direct current bias plus an alternating current bias are applied to the organic field-effect transistor. Charge carriers accumulate at the interface between the dielectric and the semiconductor (in a layer usually a few nanometers thick). With the appearance of the accumulated charge, the intensity of the transmitted light changes. The variation of the light intensity is then collected through the photodetector and the lock-in amplifier. The charge modulation frequency is given to the lock-in amplifier as the reference. Modulating the charge in the organic field-effect transistor. There are four typical organic field-effect transistor architectures: top-gate, bottom-contact; bottom-gate, top-contact; bottom-gate, bottom-contact; top-gate, top-contact. In order to create the accumulation charge layer, a positive or negative direct current voltage is applied to the gate of the organic field-effect transistor (negative for a p-type transistor, positive for an n-type transistor). In order to modulate the charge, an AC voltage is applied between the gate and the source. It is important to notice that only mobile charge can follow the modulation and that the reference frequency given to the lock-in amplifier has to be synchronous with the modulation. Charge modulation spectra. The charge modulation spectroscopy signal can be defined as the differential transmission formula_0 divided by the total transmission formula_1. By modulating the mobile carriers, both increased-transmission (formula_2) and decreased-transmission (formula_3) features can be observed. The former relates to bleaching and the latter to charge absorption and electrically induced absorption (electro-absorption). The charge modulation spectrum is an overlap of charge-induced and electro-absorption features. In transistors, the electro-absorption is more significant where the voltage drop is high. There are several ways to identify the electro-absorption contribution, such as acquiring the second harmonic of formula_4, or probing the device in the depletion region. Bleaching and charge absorption. When the accumulated charge carriers deplete the ground state of the neutral polymer, there is more transmission at the ground-state absorption. This is called bleaching (formula_5). With excess holes or electrons on the polymer, new transitions appear at low energies; therefore the transmission intensity is reduced (formula_6), which is related to charge absorption. Electro-absorption. 
Electro-absorption is a type of Stark effect in the neutral polymer; it is predominant at the electrode edges, where there is a strong voltage drop. Electro-absorption can be observed in the second-harmonic charge modulation spectra. Charge modulation microscopy. Charge modulation microscopy is a new technology which combines confocal microscopy with charge modulation spectroscopy. Unlike charge modulation spectroscopy, which probes the whole transistor, charge modulation microscopy gives local spectra and maps. Thanks to this technology, the channel spectra and the electrode spectra can be obtained individually. Charge modulation spectra of more local regions (around the submicrometer scale) can be observed without a significant electro-absorption feature. Of course, this depends on the resolution of the optical microscope. The high resolution of charge modulation microscopy allows mapping of the charge carrier distribution in the active channel of the organic field-effect transistor. In other words, a functional carrier morphology can be observed. It is well known that the local carrier density can be related to the polymer microstructure. Based on density functional theory calculations, polarized charge modulation microscopy can selectively map the charge transport associated with a given direction of the transition dipole moment. The local direction can be correlated with the orientational order of polymer domains. More ordered domains correspond to a higher carrier mobility of the organic field-effect transistor device. References. 
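The lock-in detection described in the Principles section can be sketched numerically as a software demodulation. The sampling rate, modulation frequency, noise level and modulation depth below are arbitrary, illustrative values rather than parameters of any real instrument; the sketch only shows how the in-phase demodulated component recovers both the magnitude and the sign of the modulated transmission ΔT/T.

```python
import numpy as np

# Illustrative software lock-in: extract the charge-induced transmission
# modulation dT/T at the gate-modulation frequency.  All quantities are
# synthetic stand-ins for a real measurement.
fs = 1.0e5             # sampling rate of the photodetector signal (Hz)
f_mod = 991.0          # AC gate-modulation frequency (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)

T0 = 1.0               # mean transmitted intensity (arbitrary units)
dT = -5.0e-4           # modulation amplitude (negative: charge absorption)
noise = 2e-3 * np.random.default_rng(0).normal(size=t.size)
detector = T0 + dT * np.sin(2 * np.pi * f_mod * t) + noise

# Demodulate at f_mod with in-phase and quadrature references, then low-pass
# by averaging over the whole record.
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
x = 2.0 * np.mean(detector * ref_i)    # in-phase component  ~ dT
y = 2.0 * np.mean(detector * ref_q)    # quadrature component ~ 0
amplitude = np.hypot(x, y)

print(f"recovered |dT|/T = {amplitude / np.mean(detector):.2e}")
print(f"sign of dT (from the in-phase part): {np.sign(x):+.0f}")
```

A negative in-phase component corresponds to a decreased-transmission feature (charge absorption or electro-absorption), while a positive one corresponds to bleaching, as discussed above.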
[ { "math_id": 0, "text": "\\bigtriangleup T" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "\\bigtriangleup T/T>0" }, { "math_id": 3, "text": "\\bigtriangleup T/T<0" }, { "math_id": 4, "text": "\\bigtriangleup T/T" }, { "math_id": 5, "text": "\\bigtriangleup T > 0" }, { "math_id": 6, "text": "\\bigtriangleup T < 0" } ]
https://en.wikipedia.org/wiki?curid=61187256
61190
Cauchy's integral theorem
Theorem in complex analysis In mathematics, the Cauchy integral theorem (also known as the Cauchy–Goursat theorem) in complex analysis, named after Augustin-Louis Cauchy (and Édouard Goursat), is an important statement about line integrals for holomorphic functions in the complex plane. Essentially, it says that if formula_0 is holomorphic in a simply connected domain Ω, then for any simple closed contour formula_1 in Ω, that contour integral is zero. formula_2 Statement. Fundamental theorem for complex line integrals. If "f"("z") is a holomorphic function on an open region U, and formula_3 is a curve in U from formula_4 to formula_5, then formula_6 Also, when "f"("z") has a single-valued antiderivative in an open region U, then the path integral formula_7 is path independent for all paths in U. Formulation on simply connected regions. Let formula_8 be a simply connected open set, and let formula_9 be a holomorphic function. Let formula_10 be a smooth closed curve. Then: formula_11 General formulation. Let formula_8 be an open set, and let formula_9 be a holomorphic function. Let formula_10 be a smooth closed curve. If formula_3 is homotopic to a constant curve, then: formula_11 (Recall that a curve is homotopic to a constant curve if there exists a smooth homotopy (within formula_12) from the curve to the constant curve. Intuitively, this means that one can shrink the curve into a point without exiting the space.) The first version is a special case of this because on a simply connected set, every closed curve is homotopic to a constant curve. Main example. In both cases, it is important to remember that the curve formula_3 does not surround any "holes" in the domain, or else the theorem does not apply. A famous example is the following curve: formula_13 which traces out the unit circle. Here the following integral: formula_14 is nonzero. The Cauchy integral theorem does not apply here since formula_15 is not defined at formula_16. Intuitively, formula_3 surrounds a "hole" in the domain of formula_17, so formula_3 cannot be shrunk to a point without exiting the space. Thus, the theorem does not apply. Discussion. As Édouard Goursat showed, Cauchy's integral theorem can be proven assuming only that the complex derivative formula_18 exists everywhere in formula_12. This is significant because one can then prove Cauchy's integral formula for these functions, and from that deduce that these functions are infinitely differentiable. The condition that formula_12 be simply connected means that formula_12 has no "holes" or, in homotopy terms, that the fundamental group of formula_12 is trivial; for instance, every open disk formula_19, for formula_20, qualifies. The condition is crucial; consider formula_21 which traces out the unit circle, and then the path integral formula_22 is nonzero; the Cauchy integral theorem does not apply here since formula_15 is not defined (and is certainly not holomorphic) at formula_16. One important consequence of the theorem is that path integrals of holomorphic functions on simply connected domains can be computed in a manner familiar from the fundamental theorem of calculus: let formula_12 be a simply connected open subset of formula_23, let formula_9 be a holomorphic function, and let formula_3 be a piecewise continuously differentiable path in formula_12 with start point formula_24 and end point formula_25. If formula_26 is a complex antiderivative of formula_17, then formula_27 The Cauchy integral theorem is valid with a weaker hypothesis than given above, e.g. 
given formula_12, a simply connected open subset of formula_23, we can weaken the assumptions to formula_17 being holomorphic on formula_12 and continuous on formula_28 and formula_3 a rectifiable simple loop in formula_28. The Cauchy integral theorem leads to Cauchy's integral formula and the residue theorem. Proof. If one assumes that the partial derivatives of a holomorphic function are continuous, the Cauchy integral theorem can be proven as a direct consequence of Green's theorem and the fact that the real and imaginary parts of formula_29 must satisfy the Cauchy–Riemann equations in the region bounded by formula_3, and moreover in the open neighborhood U of this region. Cauchy provided this proof, but it was later proven by Goursat without requiring techniques from vector calculus, or the continuity of partial derivatives. We can break the integrand formula_17, as well as the differential formula_30 into their real and imaginary components: formula_31 formula_32 In this case we have formula_33 By Green's theorem, we may then replace the integrals around the closed contour formula_3 with an area integral throughout the domain formula_34 that is enclosed by formula_3 as follows: formula_35 formula_36 But as the real and imaginary parts of a function holomorphic in the domain formula_34, formula_37 and formula_38 must satisfy the Cauchy–Riemann equations there: formula_39 formula_40 We therefore find that both integrands (and hence their integrals) are zero formula_41 formula_42 This gives the desired result formula_43 References. 
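The main example above can also be checked numerically. The following sketch is only an illustration (it is not part of the standard treatment of the theorem): it discretizes the contour integral over the unit circle, giving a result that is numerically zero for an entire function, in line with the theorem, and approximately 2πi for 1/z, which is not holomorphic at the origin enclosed by the curve.

```python
import numpy as np

def contour_integral(f, n=20000):
    """Approximate the integral of f over the unit circle z = exp(i*t)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / n)    # dz = i e^{it} dt for each step
    return np.sum(f(z) * dz)

# Entire (holomorphic) function: the theorem predicts a vanishing integral.
print(contour_integral(lambda z: z**2 * np.exp(z)))    # ~ 0

# 1/z is not holomorphic at 0, which the curve encloses, so the theorem does
# not apply; the integral evaluates to 2*pi*i instead.
print(contour_integral(lambda z: 1.0 / z))             # ~ 6.2832j
```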
[ { "math_id": 0, "text": "f(z)" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "\\int_C f(z)\\,dz = 0. " }, { "math_id": 3, "text": "\\gamma" }, { "math_id": 4, "text": "z_0" }, { "math_id": 5, "text": "z_1" }, { "math_id": 6, "text": "\\int_{\\gamma}f'(z) \\, dz = f(z_1)-f(z_0)." }, { "math_id": 7, "text": "\\int_{\\gamma}f'(z) \\, dz" }, { "math_id": 8, "text": "U \\subseteq \\Complex" }, { "math_id": 9, "text": "f: U \\to \\Complex" }, { "math_id": 10, "text": "\\gamma: [a,b] \\to U" }, { "math_id": 11, "text": "\\int_\\gamma f(z)\\,dz = 0. " }, { "math_id": 12, "text": "U" }, { "math_id": 13, "text": "\\gamma(t) = e^{it} \\quad t \\in \\left[0, 2\\pi\\right] ," }, { "math_id": 14, "text": "\\int_{\\gamma} \\frac{1}{z}\\,dz = 2\\pi i \\neq 0 , " }, { "math_id": 15, "text": "f(z) = 1/z" }, { "math_id": 16, "text": "z = 0" }, { "math_id": 17, "text": "f" }, { "math_id": 18, "text": "f'(z)" }, { "math_id": 19, "text": "U_{z_0} = \\{ z : \\left|z-z_{0}\\right| < r\\}" }, { "math_id": 20, "text": "z_0 \\in \\Complex" }, { "math_id": 21, "text": "\\gamma(t) = e^{it} \\quad t \\in \\left[0, 2\\pi\\right]" }, { "math_id": 22, "text": "\\oint_\\gamma \\frac{1}{z}\\,dz = \\int_0^{2\\pi} \\frac{1}{e^{it}}(ie^{it} \\,dt) = \\int_0^{2\\pi}i\\,dt = 2\\pi i " }, { "math_id": 23, "text": "\\Complex" }, { "math_id": 24, "text": "a" }, { "math_id": 25, "text": "b" }, { "math_id": 26, "text": "F" }, { "math_id": 27, "text": "\\int_\\gamma f(z)\\,dz=F(b)-F(a)." }, { "math_id": 28, "text": "\\overline{U}" }, { "math_id": 29, "text": "f=u+iv" }, { "math_id": 30, "text": "dz" }, { "math_id": 31, "text": " f=u+iv " }, { "math_id": 32, "text": " dz=dx+i\\,dy " }, { "math_id": 33, "text": "\\oint_\\gamma f(z)\\,dz = \\oint_\\gamma (u+iv)(dx+i\\,dy) = \\oint_\\gamma (u\\,dx-v\\,dy) +i\\oint_\\gamma (v\\,dx+u\\,dy)" }, { "math_id": 34, "text": "D" }, { "math_id": 35, "text": "\\oint_\\gamma (u\\,dx-v\\,dy) = \\iint_D \\left( -\\frac{\\partial v}{\\partial x} -\\frac{\\partial u}{\\partial y} \\right) \\,dx\\,dy " }, { "math_id": 36, "text": "\\oint_\\gamma (v\\,dx+u\\,dy) = \\iint_D \\left( \\frac{\\partial u}{\\partial x} -\\frac{\\partial v}{\\partial y} \\right) \\,dx\\,dy " }, { "math_id": 37, "text": "u" }, { "math_id": 38, "text": "v" }, { "math_id": 39, "text": "\\frac{ \\partial u }{ \\partial x } = \\frac{ \\partial v }{ \\partial y } " }, { "math_id": 40, "text": "\\frac{ \\partial u }{ \\partial y } = -\\frac{ \\partial v }{ \\partial x } " }, { "math_id": 41, "text": "\\iint_D \\left( -\\frac{\\partial v}{\\partial x} -\\frac{\\partial u}{\\partial y} \\right )\\,dx\\,dy = \\iint_D \\left( \\frac{\\partial u}{\\partial y} - \\frac{\\partial u}{\\partial y} \\right ) \\, dx \\, dy =0" }, { "math_id": 42, "text": "\\iint_D \\left( \\frac{\\partial u}{\\partial x}-\\frac{\\partial v}{\\partial y} \\right )\\,dx\\,dy = \\iint_D \\left( \\frac{\\partial u}{\\partial x} - \\frac{\\partial u}{\\partial x} \\right ) \\, dx \\, dy = 0" }, { "math_id": 43, "text": "\\oint_\\gamma f(z)\\,dz = 0" } ]
https://en.wikipedia.org/wiki?curid=61190
61194766
Time-domain diffuse optics
Time-domain diffuse optics or time-resolved functional near-infrared spectroscopy is a branch of functional near-infrared spectroscopy which deals with light propagation in diffusive media. There are three main approaches to diffuse optics, namely continuous wave (CW), frequency domain (FD) and time domain (TD). Biological tissue is relatively transparent to light in the red to near-infrared wavelength range, so such light can be used to probe deep layers of the tissue, thus enabling various in vivo applications and clinical trials. Physical concepts. In this approach, a narrow pulse of light (< 100 picoseconds) is injected into the medium. The injected photons undergo multiple scattering and absorption events; the scattered photons are then collected at a certain distance from the source and their arrival times are recorded. The photon arrival times are converted into the histogram of the distribution of time-of-flight (DTOF) of photons, or temporal point spread function. This DTOF is delayed, attenuated and broadened with respect to the injected pulse. The two main phenomena affecting photon migration in diffusive media are absorption and scattering. Scattering is caused by microscopic refractive index changes due to the structure of the medium. Absorption, on the other hand, is caused by a radiative or non-radiative transfer of light energy on interaction with absorption centers such as chromophores. Absorption and scattering are described by the coefficients formula_0 and formula_1, respectively. Multiple scattering events broaden the DTOF, and the attenuation is a result of both absorption and scattering, as they divert photons from the direction of the detector. Higher scattering leads to a more delayed and a broader DTOF, while higher absorption reduces the amplitude and changes the slope of the tail of the DTOF. Since absorption and scattering have different effects on the DTOF, they can be extracted independently while using a single source-detector separation. Moreover, the penetration depth in TD depends solely on the photon arrival times and is independent of the source-detector separation, unlike in the CW approach. The theory of light propagation in diffusive media is usually dealt with using the framework of radiative transfer theory under the multiple scattering regime. It has been demonstrated that the radiative transfer equation under the diffusion approximation yields sufficiently accurate solutions for practical applications. For example, it can be applied to the semi-infinite geometry or the infinite slab geometry, using proper boundary conditions. The system is considered as a homogeneous background and an inclusion is considered as an absorption or scattering perturbation. The time-resolved reflectance curve at a distance formula_2 from the source for a semi-infinite geometry is given by formula_3 where formula_4 is the diffusion coefficient, formula_5 is the reduced scattering coefficient, formula_6 is the asymmetry factor, formula_7 is the photon velocity in the medium, formula_8 takes into account the boundary conditions and formula_9 is a constant. The final DTOF is a convolution of the instrument response function (IRF) of the system with the theoretical reflectance curve. When applied to biological tissues, estimation of formula_10 and formula_11 then allows us to estimate the concentrations of the various tissue constituents, as well as providing information about blood oxygenation (oxy- and deoxy-hemoglobin), saturation and total blood volume. 
These can then be used as biomarkers for detecting various pathologies. Instrumentation. Instrumentation in time-domain diffuse optics consists of three fundamental components, namely a pulsed laser source, a single-photon detector and timing electronics. Sources. Time-domain diffuse optical sources must have the following characteristics: an emission wavelength in the optical window, i.e. between 650 and 1350 nanometres (nm); a narrow full width at half maximum (FWHM), ideally a delta function; a high repetition rate (>20 MHz); and finally, sufficient laser power (>1 mW) to achieve a good signal-to-noise ratio. In the past, bulky tunable Ti:sapphire lasers were used. They provided a wide tuning range of 400 nm, a narrow FWHM (< 1 ps), a high average power (up to 1 W) and a high repetition frequency (up to 100 MHz). However, they are bulky, expensive and take a long time to swap wavelengths. In recent years, pulsed fiber lasers based on supercontinuum generation have emerged. They provide a wide spectral range (400 to 2000 nm), a typical average power of 5 to 10 W, a FWHM of < 10 ps and a repetition frequency of tens of MHz. However, they are generally quite expensive and lack stability in the supercontinuum generation and hence have been limited in their use. The most widespread sources are pulsed diode lasers. They have a FWHM of around 100 ps, a repetition frequency of up to 100 MHz and an average power of a few milliwatts. Even though they lack tunability, their low cost and compactness allow multiple modules to be used in a single system. Detectors. Single-photon detectors used in time-domain diffuse optics require not only a high photon detection efficiency in the wavelength range of the optical window, but also a large active area as well as a large numerical aperture (N.A.) to maximize the overall light collection efficiency. They also require a narrow timing response and a low noise background. Traditionally, fiber-coupled photomultiplier tubes (PMTs) have been the detectors of choice for diffuse optical measurements, thanks mainly to their large active area, low dark count and excellent timing resolution. However, they are intrinsically bulky, prone to electromagnetic disturbances and have a quite limited spectral sensitivity. Moreover, they require a high biasing voltage and are quite expensive. Single-photon avalanche diodes (SPADs) have emerged as an alternative to PMTs. They are low-cost, compact and can be placed in contact with the sample, while needing a much lower biasing voltage. Also, they offer a wider spectral sensitivity and are more robust to bursts of light. However, they have a much smaller active area and hence a lower photon collection efficiency and a larger dark count. Silicon photomultipliers (SiPMs) are arrays of SPADs with a global anode and a global cathode, and hence have a larger active area while maintaining all the advantages offered by SPADs. However, they suffer from a larger dark count and a broader timing response. Timing electronics. The timing electronics is needed to losslessly reconstruct the histogram of the distribution of times of flight of photons. This is done by using the technique of time-correlated single photon counting (TCSPC), where the individual photon arrival times are marked with respect to a start/stop signal provided by the periodic laser cycle. These time-stamps can then be used to build up histograms of photon arrival times. 
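The histogramming step of TCSPC can be illustrated with a minimal sketch; the laser period, the synthetic photon delays and the channel width below are arbitrary stand-ins for real time stamps and hardware settings.

```python
import numpy as np

# Illustrative TCSPC histogramming: each detected photon is tagged with its
# delay from the laser sync pulse, and the delays are binned into the
# distribution of times of flight (DTOF).  All numbers are synthetic.
rng = np.random.default_rng(1)

laser_period = 12.5e-9          # 80 MHz repetition rate
n_photons = 200_000
# Synthetic arrival-time model: a delayed, broadened pulse plus dark counts.
signal = rng.gamma(shape=3.0, scale=0.3e-9, size=int(0.95 * n_photons)) + 0.5e-9
dark = rng.uniform(0.0, laser_period, size=int(0.05 * n_photons))
delays = np.concatenate([signal, dark]) % laser_period

# Histogram with ~25 ps wide time bins (an assumed TCSPC channel width).
bin_width = 25e-12
edges = np.arange(0.0, laser_period + bin_width, bin_width)
dtof, _ = np.histogram(delays, bins=edges)

peak_bin = np.argmax(dtof)
print(f"{dtof.sum()} photons in {dtof.size} channels, "
      f"DTOF peak at ~{edges[peak_bin] * 1e9:.2f} ns")
```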
The two main types of timing electronics are based on either a combination of a time-to-analog converter (TAC) and an analog-to-digital converter (ADC), or on a time-to-digital converter (TDC). In the first case, the time difference between the start and the stop signal is converted into an analog voltage signal, which is then processed by the ADC. In the second method, the delay is directly converted into a digital signal. Systems based on TACs and ADCs generally have a better timing resolution and linearity, while being expensive and lacking the capability of being integrated. TDCs, on the other hand, can be integrated into a single chip and hence are better suited to multi-channel systems. However, they have a worse timing performance and can handle much lower sustained count rates. Applications. The usefulness of TD diffuse optics lies in its ability to continually and noninvasively monitor the optical properties of tissue, making it a powerful diagnostic tool for long-term bedside monitoring in infants and adults alike. It has already been demonstrated that TD diffuse optics can be successfully applied to various biomedical applications such as cerebral monitoring, optical mammography, muscle monitoring, etc.
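As an illustration of the model introduced in the Physical concepts section, the following sketch evaluates the semi-infinite time-resolved reflectance curve with the boundary factor formula_8 and the constant formula_9 set to one (an arbitrary simplification, so only the shape of the curve is meaningful) and with assumed, typical values for the optical properties and the source-detector separation. It then checks numerically that the slope of the tail of the logarithm of the curve is governed mainly by the absorption term, which is the basis for separating absorption and scattering from a single DTOF.

```python
import numpy as np

# Minimal sketch of the semi-infinite time-resolved reflectance model quoted
# above, with S(D, s0, t) and k set to 1.  Units: mm for lengths, ps for times.
def reflectance(t_ps, rho_mm, mua, mus_prime, n_medium=1.4):
    v = 0.2998 / n_medium                   # photon speed in the medium, mm/ps
    D = 1.0 / (3.0 * mus_prime)             # diffusion coefficient, mm
    return (t_ps ** -2.5
            * np.exp(-mua * v * t_ps)
            * np.exp(-rho_mm ** 2 / (4.0 * D * v * t_ps)))

t = np.linspace(100.0, 4000.0, 400)         # time axis, ps
rho = 30.0                                  # source-detector separation, mm
mua, mus_prime = 0.01, 1.0                  # assumed optical properties, mm^-1
curve = reflectance(t, rho, mua, mus_prime)

# The late-time decay of log(R) is dominated by the absorption term,
# whose contribution to the slope is -mua * v.
tail = slice(300, 400)
slope = np.polyfit(t[tail], np.log(curve[tail]), 1)[0]
print(f"fitted tail slope = {slope:.5f} /ps, -mua*v = {-mua * 0.2998 / 1.4:.5f} /ps")
```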
[ { "math_id": 0, "text": "\\mu_a" }, { "math_id": 1, "text": "\\mu_s" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "R(\\rho, t)=k t^{-5 / 2} \\exp \\left(-\\mu_{a} \\nu t\\right) \\exp \\left(-\\frac{\\rho^{2}}{4 D \\nu t}\\right) S\\left(D, s_{0}, t\\right)" }, { "math_id": 4, "text": "D=\\frac{1}{3 \\mu_{s}^{\\prime}}" }, { "math_id": 5, "text": "\\mu_{s}^{\\prime}=\\mu_{s}(1-g)" }, { "math_id": 6, "text": "g" }, { "math_id": 7, "text": "\\nu" }, { "math_id": 8, "text": "S(D,s_0,t)" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "\\mu_a\n" }, { "math_id": 11, "text": "\\mu'_s" } ]
https://en.wikipedia.org/wiki?curid=61194766
611964
Sobolev space
Vector space of functions in mathematics In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of "Lp"-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function. Sobolev spaces are named after the Russian mathematician Sergei Sobolev. Their importance comes from the fact that weak solutions of some important partial differential equations exist in appropriate Sobolev spaces, even when there are no strong solutions in spaces of continuous functions with the derivatives understood in the classical sense. Motivation. In this section and throughout the article formula_0 is an open subset of formula_1 There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A stronger notion of smoothness is that of differentiability (because functions that are differentiable are also continuous) and a yet stronger notion of smoothness is that the derivative also be continuous (these functions are said to be of class formula_2 — see Differentiability classes). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space formula_2 (or formula_3, etc.) was not exactly the right space to study solutions of differential equations. The Sobolev spaces are the modern replacement for these spaces in which to look for solutions of partial differential equations. Quantities or properties of the underlying model of the differential equation are usually expressed in terms of integral norms. A typical example is measuring the energy of a temperature or velocity distribution by an formula_4-norm. It is therefore important to develop a tool for differentiating Lebesgue space functions. The integration by parts formula yields that for every formula_5, where formula_6 is a natural number, and for all infinitely differentiable functions with compact support formula_7 formula_8 where formula_9 is a multi-index of order formula_10 and we are using the notation: formula_11 The left-hand side of this equation still makes sense if we only assume formula_12 to be locally integrable. If there exists a locally integrable function formula_13, such that formula_14 then we call formula_13 the weak formula_9-th partial derivative of formula_12. If there exists a weak formula_9-th partial derivative of formula_12, then it is uniquely defined almost everywhere, and thus it is uniquely determined as an element of a Lebesgue space. On the other hand, if formula_5, then the classical and the weak derivative coincide. Thus, if formula_13 is a weak formula_9-th partial derivative of formula_12, we may denote it by formula_15. For example, the function formula_16 is not continuous at zero, and not differentiable at −1, 0, or 1. Yet the function formula_17 satisfies the definition for being the weak derivative of formula_18 which then qualifies as being in the Sobolev space formula_19 (for any allowed formula_20, see definition below). The Sobolev spaces formula_21 combine the concepts of weak differentiability and Lebesgue norms. Sobolev spaces with integer "k". 
One-dimensional case. In the one-dimensional case the Sobolev space formula_22 for formula_23 is defined as the subset of functions formula_24 in formula_25 such that formula_24 and its weak derivatives up to order formula_6 have a finite "Lp" norm. As mentioned above, some care must be taken to define derivatives in the proper sense. In the one-dimensional problem it is enough to assume that the formula_26-th derivative formula_27 is differentiable almost everywhere and is equal almost everywhere to the Lebesgue integral of its derivative (this excludes irrelevant examples such as Cantor's function). With this definition, the Sobolev spaces admit a natural norm, formula_28 One can extend this to the case formula_29, with the norm then defined using the essential supremum by formula_30 Equipped with the norm formula_31 becomes a Banach space. It turns out that it is enough to take only the first and last in the sequence, i.e., the norm defined by formula_32 is equivalent to the norm above (i.e. the induced topologies of the norms are the same). The case "p" = 2. Sobolev spaces with "p" = 2 are especially important because of their connection with Fourier series and because they form a Hilbert space. A special notation has arisen to cover this case, since the space is a Hilbert space: formula_33 The space formula_34 can be defined naturally in terms of Fourier series whose coefficients decay sufficiently rapidly, namely, formula_35 where formula_36 is the Fourier series of formula_37 and formula_38 denotes the 1-torus. As above, one can use the equivalent norm formula_39 Both representations follow easily from Parseval's theorem and the fact that differentiation is equivalent to multiplying the Fourier coefficient by formula_40. Furthermore, the space formula_34 admits an inner product, like the space formula_41 In fact, the formula_34 inner product is defined in terms of the formula_4 inner product: formula_42 The space formula_34 becomes a Hilbert space with this inner product. Other examples. In one dimension, some other Sobolev spaces permit a simpler description. For example, formula_43 is the space of absolutely continuous functions on (0, 1) (or rather, equivalence classes of functions that are equal almost everywhere to such), while formula_44 is the space of bounded Lipschitz functions on I, for every interval I. However, these properties are lost or not as simple for functions of more than one variable. All spaces formula_45 are (normed) algebras, i.e. the product of two elements is once again a function of this Sobolev space, which is not the case for formula_46 (E.g., functions behaving like |"x"|−1/3 at the origin are in formula_47 but the product of two such functions is not in formula_4). Multidimensional case. The transition to multiple dimensions brings more difficulties, starting from the very definition. The requirement that formula_27 be the integral of formula_48 does not generalize, and the simplest solution is to consider derivatives in the sense of distribution theory. A formal definition now follows. Let formula_49 The Sobolev space formula_21 is defined to be the set of all functions formula_24 on formula_0 such that for every multi-index formula_9 with formula_50 the mixed partial derivative formula_51 exists in the weak sense and is in formula_52 i.e. 
formula_53 That is, the Sobolev space formula_21 is defined as formula_54 The natural number formula_6 is called the order of the Sobolev space formula_55 There are several choices for a norm for formula_55 The following two are common and are equivalent in the sense of equivalence of norms: formula_56 and formula_57 With respect to either of these norms, formula_21 is a Banach space. For formula_58 is also a separable space. It is conventional to denote formula_59 by formula_60 for it is a Hilbert space with the norm formula_61. Approximation by smooth functions. It is rather hard to work with Sobolev spaces relying only on their definition. It is therefore interesting to know that by the Meyers–Serrin theorem a function formula_62 can be approximated by smooth functions. This fact often allows us to translate properties of smooth functions to Sobolev functions. If formula_20 is finite and formula_0 is open, then there exists for any formula_62 an approximating sequence of functions formula_63 such that: formula_64 If formula_0 has Lipschitz boundary, we may even assume that the formula_65 are the restriction of smooth functions with compact support on all of formula_1 Examples. In higher dimensions, it is no longer true that, for example, formula_66 contains only continuous functions. For example, formula_67 where formula_68 is the unit ball in three dimensions. For formula_69, the space formula_21 will contain only continuous functions, but for which formula_6 this is already true depends both on formula_20 and on the dimension. For example, as can be easily checked using spherical polar coordinates for the function formula_70 defined on the "n"-dimensional ball we have: formula_71 Intuitively, the blow-up of "f" at 0 "counts for less" when "n" is large since the unit ball has "more outside and less inside" in higher dimensions. Absolutely continuous on lines (ACL) characterization of Sobolev functions. Let formula_72 If a function is in formula_73 then, possibly after modifying the function on a set of measure zero, the restriction to almost every line parallel to the coordinate directions in formula_74 is absolutely continuous; what's more, the classical derivative along the lines that are parallel to the coordinate directions are in formula_75 Conversely, if the restriction of formula_24 to almost every line parallel to the coordinate directions is absolutely continuous, then the pointwise gradient formula_76 exists almost everywhere, and formula_24 is in formula_77 provided formula_78 In particular, in this case the weak partial derivatives of formula_24 and pointwise partial derivatives of formula_24 agree almost everywhere. The ACL characterization of the Sobolev spaces was established by Otto M. Nikodym (1933); see . A stronger result holds when formula_79 A function in formula_77 is, after modifying on a set of measure zero, Hölder continuous of exponent formula_80 by Morrey's inequality. In particular, if formula_81 and formula_0 has Lipschitz boundary, then the function is Lipschitz continuous. Functions vanishing at the boundary. 
The Sobolev space formula_82 is also denoted by formula_83 It is a Hilbert space, with an important subspace formula_84 defined to be the closure of the infinitely differentiable functions compactly supported in formula_0 in formula_83 The Sobolev norm defined above reduces here to formula_85 When formula_0 has a regular boundary, formula_84 can be described as the space of functions in formula_86 that vanish at the boundary, in the sense of traces (see below). When formula_87 if formula_88 is a bounded interval, then formula_89 consists of continuous functions on formula_90 of the form formula_91 where the generalized derivative formula_92 is in formula_93 and has 0 integral, so that formula_94 When formula_0 is bounded, the Poincaré inequality states that there is a constant formula_95 such that: formula_96 When formula_0 is bounded, the injection from formula_84 to formula_97 is compact. This fact plays a role in the study of the Dirichlet problem, and in the fact that there exists an orthonormal basis of formula_98 consisting of eigenvectors of the Laplace operator (with Dirichlet boundary condition). Traces. Sobolev spaces are often considered when investigating partial differential equations. It is essential to consider boundary values of Sobolev functions. If formula_99, those boundary values are described by the restriction formula_100 However, it is not clear how to describe values at the boundary for formula_101 as the "n"-dimensional measure of the boundary is zero. The following theorem resolves the problem: Trace theorem — Assume Ω is bounded with Lipschitz boundary. Then there exists a bounded linear operator formula_102 such that formula_103 "Tu" is called the trace of "u". Roughly speaking, this theorem extends the restriction operator to the Sobolev space formula_77 for well-behaved Ω. Note that the trace operator "T" is in general not surjective, but for 1 < "p" < ∞ it maps continuously onto the Sobolev–Slobodeckij space formula_104 Intuitively, taking the trace costs 1/"p" of a derivative. The functions "u" in "W"1,p(Ω) with zero trace, i.e. "Tu" = 0, can be characterized by the equality formula_105 where formula_106 In other words, for Ω bounded with Lipschitz boundary, trace-zero functions in formula_77 can be approximated by smooth functions with compact support. Sobolev spaces with non-integer "k". Bessel potential spaces. For a natural number "k" and 1 < "p" < ∞ one can show (by using Fourier multipliers) that the space formula_107 can equivalently be defined as formula_108 with the norm formula_109 This motivates Sobolev spaces with non-integer order since in the above definition we can replace "k" by any real number "s". The resulting spaces formula_110 are called Bessel potential spaces (named after Friedrich Bessel). They are Banach spaces in general and Hilbert spaces in the special case "p" = 2. For formula_111 is the set of restrictions of functions from formula_112 to Ω equipped with the norm formula_113 Again, "Hs,p"(Ω) is a Banach space and in the case "p" = 2 a Hilbert space. Using extension theorems for Sobolev spaces, it can be shown that also "Wk,p"(Ω) = "Hk,p"(Ω) holds in the sense of equivalent norms, if Ω is a domain with uniform "Ck"-boundary, "k" a natural number and 1 < "p" < ∞. 
By the embeddings formula_114 the Bessel potential spaces formula_112 form a continuous scale between the Sobolev spaces formula_115 From an abstract point of view, the Bessel potential spaces occur as complex interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms it holds that formula_116 where: formula_117 Sobolev–Slobodeckij spaces. Another approach to define fractional order Sobolev spaces arises from the idea of generalizing the Hölder condition to the "Lp"-setting. For formula_118 and formula_119 the Slobodeckij seminorm (roughly analogous to the Hölder seminorm) is defined by formula_120 Let "s" > 0 be not an integer and set formula_121. Using the same idea as for the Hölder spaces, the Sobolev–Slobodeckij space formula_122 is defined as formula_123 It is a Banach space for the norm formula_124 If formula_0 is suitably regular in the sense that there exist certain extension operators, then also the Sobolev–Slobodeckij spaces form a scale of Banach spaces, i.e. one has the continuous injections or embeddings formula_125 There are examples of irregular Ω such that formula_77 is not even a vector subspace of formula_122 for 0 < "s" < 1 (see Example 9.1 of ). From an abstract point of view, the spaces formula_122 coincide with the real interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms the following holds: formula_126 Sobolev–Slobodeckij spaces play an important role in the study of traces of Sobolev functions. They are special cases of Besov spaces. The constant arising in the characterization of the fractional Sobolev space formula_122 can be characterized through the Bourgain-Brezis-Mironescu formula: formula_127 and the condition formula_128 characterizes those functions of formula_129 that are in the first-order Sobolev space formula_130 Extension operators. If formula_0 is a domain whose boundary is not too poorly behaved (e.g., if its boundary is a manifold, or satisfies the more permissive "cone condition") then there is an operator "A" mapping functions of formula_0 to functions of formula_74 such that "Au"("x") = "u"("x") for almost every "x" in formula_0 and formula_131 is continuous for every formula_23 and every integer formula_6. We will call such an operator "A" an extension operator for formula_132 Case of "p" = 2. Extension operators are the most natural way to define formula_133 for non-integer "s" (we cannot work directly on formula_0 since taking Fourier transform is a global operation). We define formula_133 by saying that formula_134 if and only if formula_135 Equivalently, complex interpolation yields the same formula_133 spaces so long as formula_0 has an extension operator. If formula_0 does not have an extension operator, complex interpolation is the only way to obtain the formula_133 spaces. As a result, the interpolation inequality still holds. Extension by zero. Like above, we define formula_136 to be the closure in formula_133 of the space formula_137 of infinitely differentiable compactly supported functions. Given the definition of a trace, above, we may state the following Theorem — Let formula_0 be uniformly "Cm" regular, "m" ≥ "s" and let "P" be the linear map sending "u" in formula_133 to formula_138 where "d/dn" is the derivative normal to "G", and "k" is the largest integer less than "s". Then formula_139 is precisely the kernel of "P". 
If formula_140 we may define its extension by zero formula_141 in the natural way, namely formula_142 Theorem — Let formula_143 The map formula_144 is continuous into formula_145 if and only if "s" is not of the form formula_146 for "n" an integer. For "f" ∈ "Lp"(Ω) its extension by zero, formula_147 is an element of formula_148 Furthermore, formula_149 In the case of the Sobolev space "W"1,p(Ω) for 1 ≤ p ≤ ∞, extending a function "u" by zero will not necessarily yield an element of formula_150 But if Ω is bounded with Lipschitz boundary (e.g. ∂Ω is "C"1), then for any bounded open set O such that Ω⊂⊂O (i.e. Ω is compactly contained in O), there exists a bounded linear operator formula_151 such that for each formula_152 a.e. on Ω, "Eu" has compact support within O, and there exists a constant "C" depending only on "p", Ω, O and the dimension "n", such that formula_153 We call formula_154 an extension of formula_12 to formula_1 Sobolev embeddings. It is a natural question to ask if a Sobolev function is continuous or even continuously differentiable. Roughly speaking, sufficiently many weak derivatives (i.e. large "k") result in a classical derivative. This idea is generalized and made precise in the Sobolev embedding theorem. Write formula_155 for the Sobolev space of some compact Riemannian manifold of dimension "n". Here "k" can be any real number, and 1 ≤ "p" ≤ ∞. (For "p" = ∞ the Sobolev space formula_45 is defined to be the Hölder space "C""n",α where "k" = "n" + α and 0 < α ≤ 1.) The Sobolev embedding theorem states that if formula_156 and formula_157 then formula_158 and the embedding is continuous. Moreover, if formula_159 and formula_160 then the embedding is completely continuous (this is sometimes called Kondrachov's theorem or the Rellich–Kondrachov theorem). Functions in formula_161 have all derivatives of order less than "m" continuous, so in particular this gives conditions on Sobolev spaces for various derivatives to be continuous. Informally these embeddings say that to convert an "Lp" estimate to a boundedness estimate costs 1/"p" derivatives per dimension. There are similar variations of the embedding theorem for non-compact manifolds such as formula_74. Sobolev embeddings on formula_74 that are not compact often have a related, but weaker, property of cocompactness. Notes. 
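The Fourier-series characterization of formula_34 given above can be made concrete with a short numerical sketch. The example below uses the triangle wave f(x) = |x| on the torus, whose Fourier coefficients are π/2 at n = 0 and −2/(π n^2) for odd n; the truncated sums of (1 + n^2)^k times the squared coefficients then stabilize for k = 1 but keep growing for k = 2, illustrating that this function lies in H^1 but not in H^2. The truncation levels are arbitrary choices for the illustration.

```python
import numpy as np

# Truncated Sobolev norms on the 1-torus from Fourier coefficients:
# ||f||_{H^k}^2 ~ sum_n (1 + n^2)^k |f_hat(n)|^2.
# For f(x) = |x| on [-pi, pi]: f_hat(0) = pi/2, f_hat(n) = -2/(pi*n^2) for odd n.
def truncated_hk_norm_sq(k, n_max):
    n = np.arange(1, n_max + 1, 2, dtype=float)      # odd frequencies only
    coeff_sq = 4.0 / (np.pi ** 2 * n ** 4)           # |f_hat(n)|^2 for odd n
    # the factor 2 accounts for the coefficients at +n and -n
    return (np.pi / 2) ** 2 + 2.0 * np.sum((1.0 + n ** 2) ** k * coeff_sq)

for n_max in (10**3, 10**4, 10**5):
    print(f"N = {n_max:>6}:  H^1 partial sum = {truncated_hk_norm_sq(1, n_max):.4f}   "
          f"H^2 partial sum = {truncated_hk_norm_sq(2, n_max):.1f}")
# The H^1 partial sums converge, while the H^2 partial sums keep growing,
# so f is in H^1 but not in H^2 (its second weak derivative is not in L^2).
```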
[ { "math_id": 0, "text": "\\Omega" }, { "math_id": 1, "text": "\\R^n." }, { "math_id": 2, "text": "C^1" }, { "math_id": 3, "text": "C^2" }, { "math_id": 4, "text": "L^2" }, { "math_id": 5, "text": "u\\in C^k(\\Omega)" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\varphi \\in C_c^{\\infty}(\\Omega)," }, { "math_id": 8, "text": " \\int_\\Omega u\\,D^{\\alpha\\!}\\varphi\\,dx=(-1)^{|\\alpha|}\\int_\\Omega \\varphi\\, D^{\\alpha\\!} u\\,dx," }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "|\\alpha|=k" }, { "math_id": 11, "text": "D^{\\alpha\\!}f = \\frac{\\partial^{| \\alpha |}\\! f}{\\partial x_{1}^{\\alpha_{1}} \\dots \\partial x_{n}^{\\alpha_{n}}}." }, { "math_id": 12, "text": "u" }, { "math_id": 13, "text": "v" }, { "math_id": 14, "text": " \\int_\\Omega u\\,D^{\\alpha\\!}\\varphi\\;dx=(-1)^{|\\alpha|}\\int_\\Omega \\varphi \\,v \\;dx \\qquad\\text{for all }\\varphi\\in C_c^\\infty(\\Omega)," }, { "math_id": 15, "text": "D^\\alpha u := v" }, { "math_id": 16, "text": "u(x)=\\begin{cases}\n1+x & -1<x<0 \\\\\n10 & x=0\\\\\n1-x & 0<x<1\\\\\n0 & \\text{else}\n\\end{cases}" }, { "math_id": 17, "text": "v(x)=\\begin{cases}\n1 & -1<x<0 \\\\\n-1 & 0<x<1\\\\\n0 & \\text{else}\n\\end{cases}" }, { "math_id": 18, "text": "u(x)," }, { "math_id": 19, "text": "W^{1,p}" }, { "math_id": 20, "text": "p" }, { "math_id": 21, "text": "W^{k,p}(\\Omega)" }, { "math_id": 22, "text": "W^{k,p}(\\R)" }, { "math_id": 23, "text": "1 \\le p \\le \\infty" }, { "math_id": 24, "text": "f" }, { "math_id": 25, "text": "L^p(\\R)" }, { "math_id": 26, "text": "(k{-}1)" }, { "math_id": 27, "text": "f^{(k-1)}" }, { "math_id": 28, "text": "\\|f\\|_{k,p} = \\left (\\sum_{i=0}^k \\left \\|f^{(i)} \\right \\|_p^p \\right)^{\\frac{1}{p}} = \\left (\\sum_{i=0}^k \\int \\left |f^{(i)}(t) \\right |^p\\,dt \\right )^{\\frac{1}{p}}." }, { "math_id": 29, "text": " p = \\infty " }, { "math_id": 30, "text": "\\|f\\|_{k,\\infty} = \\max_{i=0,\\ldots,k} \\left \\|f^{(i)} \\right \\|_\\infty = \\max_{i=0,\\ldots,k} \\left(\\text{ess}\\, \\sup_t \\left |f^{(i)}(t) \\right |\\right)." }, { "math_id": 31, "text": "\\|\\cdot\\|_{k,p}, W^{k,p}" }, { "math_id": 32, "text": "\\left \\|f^{(k)} \\right \\|_p + \\|f\\|_p" }, { "math_id": 33, "text": "H^k = W^{k,2}." }, { "math_id": 34, "text": "H^k" }, { "math_id": 35, "text": "H^k(\\mathbb{T}) = \\Big \\{ f\\in L^2(\\mathbb{T}) : \\sum_{n=-\\infty}^\\infty \\left (1+n^2 + n^4 + \\dots + n^{2k} \\right ) \\left |\\widehat{f}(n) \\right |^2 < \\infty \\Big \\}," }, { "math_id": 36, "text": "\\widehat{f}" }, { "math_id": 37, "text": "f," }, { "math_id": 38, "text": "\\mathbb{T}" }, { "math_id": 39, "text": "\\|f\\|^2_{k,2}=\\sum_{n=-\\infty}^\\infty \\left (1 + |n|^{2} \\right )^k \\left |\\widehat{f}(n) \\right |^2." }, { "math_id": 40, "text": "in" }, { "math_id": 41, "text": "H^0 = L^2." }, { "math_id": 42, "text": "\\langle u,v\\rangle_{H^k} = \\sum_{i=0}^k \\left \\langle D^i u,D^i v \\right \\rangle_{L^2}." }, { "math_id": 43, "text": "W^{1,1}(0,1)" }, { "math_id": 44, "text": "W^{1,\\infty}(I)" }, { "math_id": 45, "text": "W^{k,\\infty}" }, { "math_id": 46, "text": "p<\\infty." }, { "math_id": 47, "text": "L^2," }, { "math_id": 48, "text": "f^{(k)}" }, { "math_id": 49, "text": "k \\in \\N, 1 \\leqslant p \\leqslant \\infty." 
}, { "math_id": 50, "text": "|\\alpha|\\leqslant k," }, { "math_id": 51, "text": "f^{(\\alpha)} = \\frac{\\partial^{| \\alpha |\\!} f}{\\partial x_{1}^{\\alpha_{1}} \\dots \\partial x_{n}^{\\alpha_{n}}}" }, { "math_id": 52, "text": "L^p(\\Omega)," }, { "math_id": 53, "text": "\\left \\|f^{(\\alpha)} \\right \\|_{L^{p}} < \\infty." }, { "math_id": 54, "text": "W^{k,p}(\\Omega) = \\left \\{ u \\in L^p(\\Omega) : D^{\\alpha}u \\in L^p(\\Omega) \\,\\, \\forall |\\alpha| \\leqslant k \\right \\}. " }, { "math_id": 55, "text": "W^{k,p}(\\Omega)." }, { "math_id": 56, "text": "\\| u \\|_{W^{k, p}(\\Omega)} := \\begin{cases} \n\\left( \\sum_{|\\alpha | \\leqslant k} \\left \\| D^{\\alpha}u \\right \\|_{L^p(\\Omega)}^p \\right)^{\\frac{1}{p}} & 1 \\leqslant p < \\infty; \\\\ \n\\max_{| \\alpha | \\leqslant k} \\left \\| D^{\\alpha}u \\right \\|_{L^{\\infty}(\\Omega)} & p = \\infty; \n\\end{cases}" }, { "math_id": 57, "text": "\\| u \\|'_{W^{k, p}(\\Omega)} := \\begin{cases} \n\\sum_{| \\alpha | \\leqslant k} \\left \\| D^{\\alpha}u \\right \\|_{L^{p}(\\Omega)} & 1 \\leqslant p < \\infty; \\\\ \n\\sum_{| \\alpha | \\leqslant k} \\left \\| D^{\\alpha}u \\right \\|_{L^{\\infty}(\\Omega)} & p = \\infty. \n\\end{cases}" }, { "math_id": 58, "text": "p<\\infty, W^{k,p}(\\Omega)" }, { "math_id": 59, "text": "W^{k,2}(\\Omega)" }, { "math_id": 60, "text": "H^k(\\Omega)" }, { "math_id": 61, "text": "\\| \\cdot \\|_{W^{k, 2}(\\Omega)}" }, { "math_id": 62, "text": "u \\in W^{k,p}(\\Omega)" }, { "math_id": 63, "text": "u_m \\in C^{\\infty}(\\Omega)" }, { "math_id": 64, "text": " \\left \\| u_m - u \\right \\|_{W^{k,p}(\\Omega)} \\to 0." }, { "math_id": 65, "text": "u_m" }, { "math_id": 66, "text": "W^{1,1}" }, { "math_id": 67, "text": "|x|^{-1} \\in W^{1,1}(\\mathbb{B}^3)" }, { "math_id": 68, "text": "\\mathbb{B}^3" }, { "math_id": 69, "text": "k > n/p" }, { "math_id": 70, "text": "f : \\mathbb{B}^n \\to \\R \\cup \\{\\infty \\}" }, { "math_id": 71, "text": "f(x) = | x |^{-\\alpha} \\in W^{k,p}(\\mathbb{B}^n) \\Longleftrightarrow \\alpha < \\tfrac{n}{p} - k." }, { "math_id": 72, "text": "1\\leqslant p \\leqslant \\infty." }, { "math_id": 73, "text": "W^{1,p}(\\Omega)," }, { "math_id": 74, "text": "\\R^n" }, { "math_id": 75, "text": "L^p(\\Omega)." }, { "math_id": 76, "text": "\\nabla f" }, { "math_id": 77, "text": "W^{1,p}(\\Omega)" }, { "math_id": 78, "text": "f, |\\nabla f| \\in L^p(\\Omega)." }, { "math_id": 79, "text": "p>n." }, { "math_id": 80, "text": "\\gamma = 1 - \\tfrac{n}{p}," }, { "math_id": 81, "text": "p=\\infty" }, { "math_id": 82, "text": "W^{1,2}(\\Omega)" }, { "math_id": 83, "text": "H^1\\!(\\Omega)." }, { "math_id": 84, "text": "H^1_0\\!(\\Omega)" }, { "math_id": 85, "text": "\\|f\\|_{H^1} = \\left ( \\int_\\Omega \\! |f|^2 \\!+\\! |\\nabla\\! f|^2 \\right)^{\\!\\frac12}." }, { "math_id": 86, "text": "H^1\\!(\\Omega)" }, { "math_id": 87, "text": "n=1," }, { "math_id": 88, "text": "\\Omega = (a,b)" }, { "math_id": 89, "text": "H^1_0(a,b)" }, { "math_id": 90, "text": "[a,b]" }, { "math_id": 91, "text": "f(x) = \\int_a^x f'(t) \\, \\mathrm{d}t, \\qquad x \\in [a, b]" }, { "math_id": 92, "text": "f'" }, { "math_id": 93, "text": "L^2(a,b)" }, { "math_id": 94, "text": "f(b) = f(a) = 0." }, { "math_id": 95, "text": "C= C(\\Omega)" }, { "math_id": 96, "text": "\\int_\\Omega | f|^2 \\leqslant C^2 \\int_\\Omega |\\nabla f|^2, \\qquad f \\in H^1_0(\\Omega)." 
}, { "math_id": 97, "text": "L^2\\!(\\Omega)," }, { "math_id": 98, "text": "L^2(\\Omega)" }, { "math_id": 99, "text": "u\\in C(\\Omega)" }, { "math_id": 100, "text": "u|_{\\partial\\Omega}." }, { "math_id": 101, "text": "u\\in W^{k,p}(\\Omega)," }, { "math_id": 102, "text": "T: W^{1,p}(\\Omega)\\to L^p(\\partial\\Omega)" }, { "math_id": 103, "text": "\\begin{align}\nTu &= u|_{\\partial\\Omega} && u\\in W^{1,p}(\\Omega)\\cap C(\\overline{\\Omega}) \\\\\n\\|Tu\\|_{L^p(\\partial\\Omega)}&\\leqslant c(p,\\Omega)\\|u\\|_{W^{1,p}(\\Omega)} && u\\in W^{1,p}(\\Omega).\n\\end{align}" }, { "math_id": 104, "text": "W^{1-\\frac{1}{p},p}(\\partial\\Omega)." }, { "math_id": 105, "text": " W_0^{1,p}(\\Omega)= \\left \\{u\\in W^{1,p}(\\Omega): Tu=0 \\right \\}," }, { "math_id": 106, "text": " W_0^{1,p}(\\Omega):= \\left \\{u\\in W^{1,p}(\\Omega): \\exists \\{u_m\\}_{m=1}^\\infty\\subset C_c^\\infty(\\Omega), \\ \\text{such that} \\ u_m\\to u \\ \\textrm{in} \\ W^{1,p}(\\Omega) \\right \\}." }, { "math_id": 107, "text": "W^{k,p}(\\R^n)" }, { "math_id": 108, "text": " W^{k,p}(\\R^n) = H^{k,p}(\\R^n) := \\Big \\{f \\in L^p(\\R^n) : \\mathcal{F}^{-1} \\Big[\\big(1 + |\\xi|^2\\big)^{\\frac{k}{2}}\\mathcal{F}f \\Big] \\in L^p(\\R^n) \\Big \\}," }, { "math_id": 109, "text": "\\|f\\|_{H^{k,p}(\\R^n)} := \\left\\| \\mathcal{F}^{-1} \\Big[ \\big(1 + |\\xi|^2\\big)^{\\frac{k}{2}} \\mathcal{F}f \\Big] \\right\\|_{L^p(\\R^n)}." }, { "math_id": 110, "text": "H^{s,p}(\\R^n) := \\left \\{f \\in \\mathcal S'(\\R^n) : \\mathcal{F}^{-1} \\left [\\big(1 + |\\xi|^2 \\big)^{\\frac{s}{2}}\\mathcal{F}f \\right ] \\in L^p(\\R^n) \\right \\} " }, { "math_id": 111, "text": " s \\geq 0, H^{s,p}(\\Omega)" }, { "math_id": 112, "text": "H^{s,p}(\\R^n)" }, { "math_id": 113, "text": "\\|f\\|_{H^{s,p}(\\Omega)} := \\inf \\left \\{\\|g\\|_{H^{s,p}(\\R^n)} : g \\in H^{s,p}(\\R^n), g|_{\\Omega} = f \\right \\} ." }, { "math_id": 114, "text": " H^{k+1,p}(\\R^n) \\hookrightarrow H^{s',p}(\\R^n) \\hookrightarrow H^{s,p}(\\R^n) \\hookrightarrow H^{k,p}(\\R^n), \\quad k \\leqslant s \\leqslant s' \\leqslant k+1 " }, { "math_id": 115, "text": "W^{k,p}(\\R^n)." }, { "math_id": 116, "text": " \\left [ W^{k,p}(\\R^n), W^{k+1,p}(\\R^n) \\right ]_\\theta = H^{s,p}(\\R^n)," }, { "math_id": 117, "text": "1 \\leqslant p \\leqslant \\infty, \\ 0 < \\theta < 1, \\ s= (1-\\theta)k + \\theta (k+1)= k+\\theta. " }, { "math_id": 118, "text": "1 \\leqslant p < \\infty, \\theta \\in (0, 1)" }, { "math_id": 119, "text": "f \\in L^p(\\Omega)," }, { "math_id": 120, "text": "[f]_{\\theta, p, \\Omega} :=\\left(\\int_{\\Omega} \\int_{\\Omega} \\frac{|f(x)-f(y)|^p}{|x-y|^{\\theta p + n}} \\; dx \\; dy \\right )^{\\frac{1}{p}}." }, { "math_id": 121, "text": "\\theta = s - \\lfloor s \\rfloor \\in (0,1)" }, { "math_id": 122, "text": "W^{s,p}(\\Omega)" }, { "math_id": 123, "text": "W^{s,p}(\\Omega) := \\left\\{f \\in W^{\\lfloor s \\rfloor, p}(\\Omega) : \\sup_{|\\alpha| = \\lfloor s \\rfloor} [D^\\alpha f]_{\\theta, p, \\Omega} < \\infty \\right\\}." }, { "math_id": 124, "text": "\\|f \\| _{W^{s, p}(\\Omega)} := \\|f\\|_{W^{\\lfloor s \\rfloor,p}(\\Omega)} + \\sup_{|\\alpha| = \\lfloor s \\rfloor} [D^\\alpha f]_{\\theta, p, \\Omega}." }, { "math_id": 125, "text": " W^{k+1,p}(\\Omega) \\hookrightarrow W^{s',p}(\\Omega) \\hookrightarrow W^{s,p}(\\Omega) \\hookrightarrow W^{k, p}(\\Omega), \\quad k \\leqslant s \\leqslant s' \\leqslant k+1." 
}, { "math_id": 126, "text": " W^{s,p}(\\Omega) = \\left (W^{k,p}(\\Omega), W^{k+1,p}(\\Omega) \\right)_{\\theta, p} , \\quad k \\in \\N, s \\in (k, k+1), \\theta = s - \\lfloor s \\rfloor ." }, { "math_id": 127, "text": "\n\\lim_{s \\nearrow 1} \\; (1 - s)\n\\int_{\\Omega} \\int_{\\Omega} \\frac{|f(x)-f(y)|^p}{|x-y|^{s p + n}} \\; dx \\; dy \n= \n\\frac{\n2 \\pi^{\\frac{n - 1}{2}}\n\\Gamma (\\frac{p + 1}{2})}\n{p \\Gamma (\\frac{p + n}{2})}\n\\int_{\\Omega} \\vert \\nabla f\\vert^p;\n" }, { "math_id": 128, "text": "\n\\limsup_{s \\nearrow 1}\\; (1 - s)\n\\int_{\\Omega} \\int_{\\Omega} \\frac{|f(x)-f(y)|^p}{|x-y|^{s p + n}} \\; dx \\; dy < \\infty\n" }, { "math_id": 129, "text": "L^p (\\Omega)" }, { "math_id": 130, "text": "W^{1,p} (\\Omega)" }, { "math_id": 131, "text": "A : W^{k,p}(\\Omega) \\to W^{k,p}(\\R^n)" }, { "math_id": 132, "text": "\\Omega." }, { "math_id": 133, "text": "H^s(\\Omega)" }, { "math_id": 134, "text": " u \\in H^s(\\Omega)" }, { "math_id": 135, "text": "Au \\in H^s(\\R^n)." }, { "math_id": 136, "text": "H^s_0(\\Omega)" }, { "math_id": 137, "text": "C^\\infty_c(\\Omega)" }, { "math_id": 138, "text": "\\left.\\left(u,\\frac{du}{dn}, \\dots, \\frac{d^k u}{dn^k}\\right)\\right|_G" }, { "math_id": 139, "text": "H^s_0" }, { "math_id": 140, "text": "u\\in H^s_0(\\Omega)" }, { "math_id": 141, "text": "\\tilde u \\in L^2(\\R^n)" }, { "math_id": 142, "text": "\\tilde u(x)= \\begin{cases} u(x) & x \\in \\Omega \\\\ 0 & \\text{else} \\end{cases}" }, { "math_id": 143, "text": "s > \\tfrac{1}{2}." }, { "math_id": 144, "text": "u \\mapsto \\tilde u" }, { "math_id": 145, "text": "H^s(\\R^n)" }, { "math_id": 146, "text": "n + \\tfrac{1}{2}" }, { "math_id": 147, "text": "Ef := \\begin{cases} f & \\textrm{on} \\ \\Omega, \\\\ 0 & \\textrm{otherwise} \\end{cases}" }, { "math_id": 148, "text": "L^p(\\R^n)." }, { "math_id": 149, "text": " \\| Ef \\|_{L^p(\\R^n)}= \\| f \\|_{L^p(\\Omega)}." }, { "math_id": 150, "text": "W^{1,p}(\\R^n)." }, { "math_id": 151, "text": " E: W^{1,p}(\\Omega)\\to W^{1,p}(\\R^n)," }, { "math_id": 152, "text": "u\\in W^{1,p}(\\Omega): Eu = u" }, { "math_id": 153, "text": "\\| Eu \\|_{W^{1,p}(\\R^n)}\\leqslant C \\|u\\|_{W^{1,p}(\\Omega)}." }, { "math_id": 154, "text": "Eu" }, { "math_id": 155, "text": "W^{k,p}" }, { "math_id": 156, "text": "k \\geqslant m" }, { "math_id": 157, "text": "k - \\tfrac{n}{p} \\geqslant m - \\tfrac{n}{q}" }, { "math_id": 158, "text": "W^{k,p}\\subseteq W^{m,q}" }, { "math_id": 159, "text": "k > m" }, { "math_id": 160, "text": "k - \\tfrac{n}{p} > m - \\tfrac{n}{q}" }, { "math_id": 161, "text": "W^{m,\\infty}" } ]
https://en.wikipedia.org/wiki?curid=611964
61197620
Textual variants in the Gospel of Mark
Differences in New Testament manuscripts Textual variants in the Gospel of Mark are the subject of the study called textual criticism of the New Testament. Textual variants in manuscripts arise when a copyist makes deliberate or inadvertent alterations to a text that is being reproduced. An abbreviated list of textual variants in this particular book is given in this article below. Origen, writing in the 3rd century, was one of the first who made remarks about differences between manuscripts of texts that were eventually collected as the New Testament. He declared his preferences among variant readings. For example, in , he favored "Barabbas" against "Jesus Barabbas" ("In Matt. Comm. ser." 121). In , he preferred "Bethabara" over "Bethany" as the location where John was baptizing ("Commentary on John" VI.40 (24)). "Gergeza" was preferred over "Geraza" or "Gadara" ("Commentary on John" VI.40 (24) – see ). Most of the variations are not significant and some common alterations include the deletion, rearrangement, repetition, or replacement of one or more words when the copyist's eye returns to a similar word in the wrong location of the original text. If their eye skips to an earlier word, they may create a repetition (error of dittography). If their eye skips to a later word, they may create an omission. They may resort to performing a rearranging of words to retain the overall meaning without compromising the context. In other instances, the copyist may add text from memory from a similar or parallel text in another location. Otherwise, they may also replace some text of the original with an alternative reading. Spellings occasionally change. Synonyms may be substituted. A pronoun may be changed into a proper noun (such as "he said" becoming "Jesus said"). John Mill's 1707 Greek New Testament was estimated to contain some 30,000 variants in its accompanying textual apparatus which was based on "nearly 100 [Greek] manuscripts." Peter J. Gurry puts the number of non-spelling variants among New Testament manuscripts around 500,000, though he acknowledges his estimate is higher than all previous ones. Legend. A guide to the sigla (symbols and abbreviations) most frequently used in the body of this article. Textual variants. For a list of many variants not noted here, see the ECM of Mark. 
&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Textual variants in the Gospel of Mark &lt;onlyinclude&gt;&lt;br&gt; &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;17 Textual variants in Mark 1 Ἰησοῦ Χριστοῦ ("of Jesus Christ") – ‭א* Θ 28c 530 582* 820* 1021 1436 1555* 1692 2430 2533 l2211 copsa(ms) arm geo1 Origengr Origenlat Victorinus-Pettau Asterius Serapion Titus-Bostra Basil Cyril-Jerusalem Severian Jerome3/6 Hesychius WHtext Rivmg NM Ἰησοῦ Χριστοῦ υἱοῦ θεοῦ ("of Jesus Christ son of God") – ‭א1 B D L W 732 1602 2427 Diatessaronp WHmg (NA [υἱοῦ θεοῦ]) Ἰησοῦ Χριστοῦ υἱοῦ τοῦ θεοῦ ("of Jesus Christ son of the God") – A E F Gsupp H K Δ Π Σ "ƒ"1 "ƒ"13 33 180 205 565 579 597 700 892 1006 1009 1010 1071 1079 1195 1216 1230 1242 1243 1253 1292 1342 1344 1365 1424 1505 1546 1646 2148 2174 Byz Lect eth geo2 slav ς Ἰησοῦ Χριστοῦ τοῦ θεοῦ ("of Jesus Christ of the God") – 055 pc τοῦ κυρίου Ἰησοῦ Χριστοῦ ("of the lord Jesus Christ") – syrpal Ἰησοῦ Χριστοῦ υἱοῦ τοῦ κυρίου ("of Jesus Christ son of the lord") – 1241 Ἰησοῦ ("of Jesus") – 28* καθὼς γέγραπται ("Just as it is written") – Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants] 1864–94, Tischendorf 8th Edition, Nestle 1904 ὡς γέγραπται ("As it is written") – ς Byz: Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church ἐν τοῖς προφήταις ("in the prophets") – A E F G H K P W Π Σ "ƒ"13 28 180 579 597 1006 1009 1010 1079 1195 1216 1230 1242 1253 1292 1342 1344 1365 1424 1505 1546 1646 Byz Lect vgms syrh copbo(ms)(mg) arm eth slav Irenaeuslat2/3 Asterius Photius Theophylact ς ND Dio. Byz: Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church ἐν τῷ Ἠσαἲᾳ τῷ προφήτῃ ("in the Isaiah the prophet") – א B L D 22 33 565 892 1241 2427 Origen1/4. Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants] 1864–94, Tischendorf 8th Edition, Nestle 1904 ἐν Ἠσαἲᾳ τῷ προφήτῃ ("in Isaiah the prophet") – D Θ "ƒ"1 205 372 700 1071 1243 2174 2737 pc l253 arm geo Irenaeusgr Origen3/4 Severian (Jerome) Augustine Hesychius Victor-Antioch ἐν τῷ Ἠσαΐᾳ τῷ προφήτῃ or ἐν Ἠσαΐᾳ τῷ προφήτῃ ("in (the) Isaiah the prophet") – ita itaur itb itc itd itf itff2 itl itq vg syrp syrh(mg) syrpal copsa copbo goth Irenaeuslat1/3 Irenaeuslat NR CEI Riv TILC Nv NM ἐν Ἠσαΐᾳ ("in Isaiah") – Victorinus-Pettau Ambrosiaster Serapion Titus-Bostra Basil Epiphanius Chromatius ἐν Ἠσαΐᾳ καὶ ἐν τοῖς προφήταις ("in Isaiah and in the prophets") – itr1(vid) Ἰδοὺ ("Behold...") – B D Θ 28* 565 pc it vg cop Irenaeuslat. Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants] 1864–94, Nestle 1904 Ἰδού, ἐγὼ ("Behold, I...") – ‭א A L W "ƒ"1 "ƒ"13 Byz vgst vgcl syrh copsa(ms) copbo(ms) Origen Eusebius ς. Tischendorf 8th Edition. Byz: Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church τὴν ὁδόν σου· ("the way of you:") – ‭א B D K L P W Θ Π Φ 700* 2427 2766 al it vg syrp coppt Irenaeuslat WH NR CEI Riv TILC Nv NM. Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants] 1864–94, Tischendorf 8th Edition, Nestle 1904 τὴν ὁδόν σου ἔμπροσθέν σου. ("the way of you before you.") – A Δ "ƒ"1 "ƒ"13 33 565 1342 Byz itf itff2 itl vgcl syrh copsa(mss) copbo(pt) goth Origen Eusebius ς ND Dio. 
Byz: Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church Mark 1:4 ὁ βαπτίζων ἐν τῇ ἐρήμῳ καὶ ("the Baptist in the wilderness and") – ‭א L Δ 205 1342 copbo geo1 slavms (NA [ὁ]) TILC ὁ βαπτίζων ἐν τῇ ἐρήμῳ ("the Baptist in the wilderness") – B 33 2427 pc copbo(mss) WH NR Riv Nv NM βαπτίζων ἐν τῇ ἐρήμῳ καὶ ("baptising in the wilderness and") – A E F G H K Pvid W Π Σ "ƒ"1 "ƒ"13 180 565 579 1006 1009 1010 1071 1079 1195 1216 1230 1241 1242 1243 1253 1292 1344 1365 1424 1505 1546 1646 2148 2174 Byz Lect (l751) (l1074) itf syrh syrpal (copsa omitted καὶ) goth arm eth slavmss ς (CEI) (Dio) βαπτίζων ἐν τῇ ἐρήμῳ ("baptising in the wilderness") – 892 ἐν τῇ ἐρήμῳ βαπτίζων καὶ ("in the wilderness baptising and") – D Θ 28 700 l2211 ita itaur itb itc itff1 itl itq itr1 itt vg syrp (Eusebius Cyril-Jerusalem omitted καὶ) Jerome Augustine ND ἐν τῇ ἐρήμῳ καὶ ("in the wilderness and") – geo2 Mark 1:5 πάντες, καὶ ἐβαπτίζοντο ὑπ' αὐτοῦ ἐν τῷ Ἰορδάνῃ ποταμῷ ("all, and [they] were baptised by him in the Jordan river") – B D L 28 33 892 1241 pc it vg cop? Origen WH NR CEI Riv (TILC) (Nv) NM πάντες, ἐβαπτίζοντο ὑπ' αὐτοῦ ἐν τῷ Ἰορδάνῃ ποταμῷ ("all, [they] were baptised by him in the Jordan river") – ‭א* pc καὶ ἐβαπτίζοντο πάντες ἐν τῷ Ἰορδάνῃ ποταμῷ ὑπ' αὐτοῦ ("and [they] were baptised all in the Jordan river by him") – A W "ƒ"1 700 Byz syrh ς ND Dio καὶ πάντες ἐβαπτίζοντο ἐν τῷ Ἰορδάνῃ ποταμῷ ὑπ' αὐτοῦ ("and all were baptised in the Jordan river by him") – "ƒ"13 565 pc καὶ ἐβαπτίζοντο ἐν τῷ Ἰορδάνῃ ποταμῷ ὑπ' αὐτοῦ ("and [they] were baptised in the Jordan river by him") – Θ pc Mark 1:5 ποταμῷ ("[in the] river") – Byz ς WH "omitted" – D W Θ 28 565 799 ita Eusebius Mark 1:6 καὶ ζώνην δερματίνην περὶ τὴν ὀσφὺν αὐτοῦ ("and a belt of leather around the waist of him") – Byz itaur itc itf itl itq vg ς WH "omitted" – D ita itb itd itff2 itr1 itt vgms Mark 1:7 ὀπίσω μου ("after me") – Byz ς [WH] ὀπίσω ("after") – B Origen "omitted" – Δ 1424 itt itff2 Mark 1:7 κύψας ("having stooped down") – Byz ς WH "omitted" – D Θ "ƒ"13 28* 565 pc it Mark 1:8 ἐγὼ ("I") – WH. Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants] 1864–94, Tischendorf 8th Edition, Nestle 1904 ἐγὼ μέν ("I indeed") – Byz ς. Byz: Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church Mark 1:8 ὕδατι ("[with] water") – ‭א B H Δ 33 892* 1006 1216 1243 1342 2427 vg arm geo Origen Jerome Augustine WH. Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants] 1864–94, Tischendorf 8th Edition, Nestle 1904 ἐν ὕδατι ("in water") – A E F G K L P W (Θ μέν "before" ἐν) Π Σ "ƒ"1 "ƒ"13 28 157 180 205 565 579 700 892c 1009 1010 1071 1079 1195 1230 1241 1242 1253 1292 1344 1365 1424 1505 1546 1646 2148 2174 Byz Lect itaur itb itc itf itl itq itt vgmss copsa copbo goth eth Hippolytus ς. Byz: Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church. Compare Matthew 3:11; John 1:26. ἐν ὕδατι ("in water") inserted after λέγων in Mark 1:7 – D ita itd itff2 itr1 Mark 1:8 π̣ν̣ι αγ̣[ιω] ("the Holy Spirit") – 𝔓137. π̣ν̣ι is a "nomen sacrum" abbreviation of πν(ευματ)ι, see Papyrus 137 § Particular readings. πνεύματι ἁγίῳ ("the Holy Spirit") – B L itaur itb itt vg syrp? syrh? syrpal? 
arm geo Augustine WH ἐν πνεύματι ἁγίῳ ("with the Holy Spirit") – א A D K W Δ Θ Π 0133 "ƒ"1 "ƒ"13 8 33 565 700 892 1009 1010 1071 1079 1216 1230 1242 1253 1344 1365 1546 1646 2148 2174 Byz Lectm (ita) itc itd itf itff2 itl itq (itr1) copsa copbo syrp? syrh? syrpal? goth eth Hippolytus Origen ς. Compare Matthew 3:12; Luke 3:16. ἐν πνεύματι ἁγίῳ καὶ πυρί ("with the Holy Spirit and fire") – P 1195 1241 ℓ "44m" syrh*. Compare Matthew 3:12; Luke 3:16. Mark 1:13 καὶ ἦν ἐν τῇ ερημω ("he was in the wilderness") – א A B D L Θ 33. 579. 892. 1342. καὶ ἦν ἐκει ἐν τῇ ερημω ("he was there in the wilderness") – W Δ 157. 1241. "Byz" καὶ ἦν ἐκει ("he was there") – 28. 517. 565. 700. "ƒ"1 Family Π syrs Omit – "ƒ"13 Hiatus – C Ψ syrc Mark 1:14 εὐαγγέλιον – א B L Θ "ƒ"1 "ƒ"13 28. 33. 565. 892 εὐαγγέλιον τῆς βασιλείας – Α, D Κ, W Δ Π 074 0133 0135 28mg, 700. 1009. 1010. 1071. 1079. 1195. 1216. 1230. 1241. 1242. 1253. 1344. 1365. 1546. 1646. 2148. 2174. "Byz", Lect, lat, syrp, copbo σπλαγχνισθεις ("filled with compassion") – All manuscripts except those listed below οργισθεις ("irritated; angry") – D a ff2 r1 &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;2 Textual variants in Mark 2 Mark 2:16 ἐσθίει ("eating") - B D W ita, b, d, e, ff2, r1, u[w] ἐσθίει καὶ πίνει ("eating and drinking") - 𝔓88 A "ƒ"1 "ƒ"13 2. 28. 33. 157. 180. 597. 892. 1006. 1010. 1292. 1505. formula_0 / "Byz" E F H "Lect" itq, vgms, syrp,h, copsams, [w]τ ἐσθίεται (=ἐσθίετε?) ("eating") - Θ ἐσθίει ό διδάσκαλος ύμων ("your teacher eating") - (see Mt 9:11) א 1342. itaur, vgms (Origenlat), DHH ἐσθίει καὶ πίνει ό διδάσκαλος ύμων ("eating and drinking, your teacher") - L Δ "ƒ"13 1071. 1243. 1346. it(c),f, vg, copbo, Augustine ἐσθίετε καὶ πίνετε ("[are you] eating and drinking") - (see Lk 5:30) Σ 124. 565. 700. 1241. 1424. ℓ "547"ℓ "866" srypal, arm, geo, Diatessaron ἐσθίειτε καὶ πίνειτε ("[are you] eating and drinking") - G ό διδάσκαλος ύμων ἐσθίει καὶ πίνει ("your teacher eating and drinking") - C 579 ℓ "890" it1, copsamss, eth ἐσθίετε ("[are you] eating") - 1424. Mark 2:26 ἐπὶ Ἀβιαθαρ ἀρχιερέως ("when Abiatar was high priest") – א A B K L 892. 1010. 1195. 1216. 1230. 1242. 1344. 1365. 1646. 2174. "Byz", ℓ "69" ℓ "70" ℓ "76" ℓ "80" ℓ "150" ℓ "299" ℓ "1127" ℓ "1634" ℓ "1761" arm ἐπὶ Ἀβιαθαρ τοῦ ἀρχιερέως ("when Abiatar was high priest") – A C Θ Π 074 ἐπὶ Ἀβιαθαρ τοῦ ἱερέως ("when Abiatar was priest") – Δ itf phrase is omitted by manuscripts D W 1009. 1546. ita, b, d, e, ff2, i, r1, t, syrs &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;2 Textual variants in Mark 3 Mark 3:7 ἀπὸ τῆς Γαλιλαίας ἠκολούθησεν, καὶ ἀπὸ τῆς Ἰουδαίας - B L 565. 728.vid ἀπὸ τῆς Γαλιλαίας ἠκολούθησεν αὐτῷ, καὶ ἀπὸ τῆς Ἰουδαίας - 61.c 427. 555.c 732. 892. ℓ "950" Byz ἀπὸ τῆς Γαλιλαίας ἠκολούθησαν αὐτῷ, καὶ ἀπὸ τῆς Ἰουδαίας - Φvid 0211 "ƒ"13:(13. 346. 543. 826.) 4. 23. 154. 179. 273. 349. 351. 372. 382. 513. 517. 544. 695. 716. 733. 752. 766. 780. 792. 803. 873. 954. 979. 1009. 1047. 1084. 1241. 1326. 1337. 1396. 1424. 1506. 1515. 1546. 1645. 1654. 1675. 2538. 2737. 2766. ℓ "211" ℓ "387" ℓ "770" ℓ "773" ℓ "2211" syh ἀπὸ τῆς Γαλιλαίας καὶ ἀπὸ τῆς Ἰουδαίας ἠκολούθησαν - א C ἀπὸ τῆς Γαλιλαίας καὶ ἀπὸ τῆς Ἰουδαίας ἠκολούθησαν αὐτῷ - Δ 377. 1071. 1342. ἀπὸ τῆς Γαλιλαίας ἠκολούθησεν αὐτῷ, καὶ ἀπὸ Ἱεροσολύμων καὶ ἀπὸ τῆς Ἰουδαίας καὶ πέραν τοῦ Ἰορδάνου - "ƒ"1:(1. 131. 205.) 1253. (ἐκ τῆς Γαλιλαίας) 2193.* 2886. καὶ ἀπὸ τῆς Ἰουδαίας καὶ πέραν τοῦ Ἰορδάνου ἀπὸ τῆς Γαλιλαίας ἠκολούθησεν αὐτῷ, καὶ ἀπὸ Ἱεροσολύμων - 118. 209. 1582. 
Mark 3:14 δώδεκα, ἵνα ὦσιν μετʼ αὐτοῦ - Cc2 L "ƒ"1:(1. 205. 209. 1582.) 33. 382. 427. 544. 565. 579. 732. 740. 792. 892. 1342. 1424. 2193. 2542. 2766. 2886. ℓ "950" Byz ἵνα ὦσιν δώδεκα μετʼ αὐτοῦ - D 79. δώδεκα, ἵνα ὦσιν περὶ αὐτὸν - 700. δώδεκαμαθητας ἵνα ὦσιν μετʼ αὐτοῦ οὓς καὶ ἀποστόλους ὠνόμασεν - W δώδεκα οὓς καὶ ἀποστόλους ὠνόμασεν ἵνα ὦσιν μετʼ αὐτοῦ - א B (ὁνόμασεν - Θ "ƒ"13:(13. 124. 346. 543. 788. 826. 828. 1689.) 69. 238. 377. 807. 983. 1160. syh(ms) ἵνα ὦσιν μετʼ αὐτοῦ δώδεκα οὓς καὶ ἀποστόλους ὠνόμασεν - Δ δώδεκα, ἵνα ὦσιν μετʼ αὐτοῦ και ἵνα ἀποστέλλει αὐτοὺς οὓς καὶ ἀποστόλους ὠνόμασεν - Φvid &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;3 Textual variants in Mark 4 Mark 4:19 η αγαπη του πλουτου ("the love of wealth") – Δ η απατη του πλουτου ("the illusion of wealth") – א A B C E "Byz" απαται του πλουτου ("the illusions of wealth") – W απαται του κοσμου ("the illusions of world") – D (Θ 565.) Mark 4:19 και αι περι τα λοιπα επιθυμιαι ("and the desire for other things") – rest of mss omit – D (Θ) W "ƒ"1 28. (565. 700.) it Mark 4:24 καὶ προστεθήσεται ὑμῖν – א B C L Δ 700. 892. καὶ προστεθήσεται ὑμῖν τοῖς ἀκούουσιν – A K Π 0107 "Byz" omit – codices D W 565. &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;3 Textual variants in Mark 5 Mark 5:9 απεκριθη λεγων – E 565. 700. 1010. απεκριθη – D λεγει αυτω – rest of mss. Mark 5:9 λεγιων ονομα μοι – א B C L Δ λεγεων – A W Θ "ƒ"1 "ƒ"13 "Byz" Mark 5:37 ουδενα μετ' αυτου συνακολουθεσαι – א B C L Δ 892. ουδενα αυτω συνακολουθεσαι – A Θ 0132 0133c "ƒ"13 "Byz" ουδενα αυτω παρακολουθεσαι – D W 0133* "ƒ"1 28. 565. 700. "pc" ουδενα αυτω ακολουθεσαι – A K 33. 1241. "al" &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;3 Textual variants in Mark 6 Mark 6:3 ο αδελφος Ἰακώβου - 565. 700. 892.c (Θ 2542.) lat και ἀδελφὸς Ἰακώβου – B C Δ 579. 1241. 1424. και ο αδελφος Ἰακώβου - א D L 892.* sams bopt ἀδελφὸς δὲ Ἰακώβου – A K N W "ƒ"1 "ƒ"13 28. "Byz" q syh sams Mark 6:33 ἐκεῖ καὶ προῆλθον αὐτούς – א B 0187 (omit εκει), 892. ℓ "49" ℓ "69" ℓ "70" ℓ "299" ℓ "303" ℓ "333" ℓ "1579" (ℓ "950" αυτους), itaur, vg, (copsa,bo) ἐκει καὶ προσηλθον αὐτοῖς – L 1241 (Δ Θ ℓ "10" αὐτοῖς) ℓ "12" ℓ "80" ℓ "184" ℓ "211" ℓ "1127" arm, geo ἐκεῖ καὶ συνῆλθον αὐτῷ – Dgr itb ἐκεῖ καὶ συνῆλθον αὐτοῦ – 28. 700. ἐκεῖ καὶ ἢλθον αὐτοῦ – 565. it(a),d,ff,i,r, Diatessaron καὶ ἢλθον ἐκεῖ – "ƒ"1 προηλθον αὐτὸν ἐκεῖ – Peshitta πρὸς αὐτούς καὶ συνῆλθον πρὸς αὐτον – 33. ἐκεῖ καὶ προῆλθον αὐτοῖς καὶ συνῆλθον πρὸς αὐτον – K Π ("ƒ"13 συνεισηλθον προς αὐτούς) 1009. 1010. 1071. 1079. 1195. 1216. 1230. 1242. 1365. 1546. 1646. 2148. 2174. "Byz" ἐκεῖ καὶ προῆλθον αὐτοῖς καὶ συνέδραμον πρὸς αὐτον – A ἐκει – W ℓ "150" itc Mark 6:51 ἐξίσταντο – א B L Δ 28. 892. itc, ff2, i, l vg syrs copsa, bo, geo ἐξεπλήσσοντο – "ƒ"1 ἐξίσταντο καὶ ἐθαύμαζον – A D K W X Θ Π "ƒ"13 33. 565. 700. 1009. 1010. 1079. 1195. 1216. 1230. 1241. 1242. 1253. 1344. 1365. 1546. 1646. 2148. 2174. "Byz", Lect ἐθαύμαζον καὶ ἐξίσταντο – 517. 1424. &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;2 Textual variants in Mark 7 Mark 7:2 πυγμη – A B D K L X Θ Π πυκνα – א W vg omit – Δ syrs sa Mark 7:16 verse is omitted by א Β L Δ 28. &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;1 Textual variants in Mark 8 Mark 8:10 τὰ μέρη Δαλμανουθά – א A Β C K L X Δ Π 0131 33. 700. 892. 1009. 1010. 1195. 1216. 1230. 1242. 1253. 1344. 1365. 1546. 1646. 2148. 2174. 
"Byz", Lect, it, vg, syr, cop τὰ ὂρη Δαλμανουθά – 1071 τὸ ὂρος Δαλμανοῦναι – W τὰ ὅρια Δαλμανουθά – 1241 τὸ ὂρος Μαγεδά – 28 τὰ ὅρια Μελεγαδά – Dgr τὰ μέρη Μαγδαλά – Θ "ƒ"1 "ƒ"13 ℓ "80" τὰ μέρη Μαγεδά – 565. &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;1 Textual variants in Mark 9 Mark 9:49 πας γαρ πυρι αλισθησεται – (א εν πυρι) B L W Δ "ƒ"1 "ƒ"13 28 565. 700. ℓ"260" syrs sa πασα γαρ θυσια αλι αλισθησεται – D it πας γαρ πυρι αλισθησεται και πασα θυσια αλι αλισθησεται – A (C εν πυρι) K (X πυρι αλι αλισθησεται) Π (Ψ θυσια αναλωθησεται) πας γαρ πυρι αναλωθησεται και πασα θυσια αλι αλισθησεται – Θ &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;3 Textual variants in Mark 10 Mark 10:1 εἰς τὰ ὅρια τῆς Ἰουδαίας καὶ πέραν τοῦ Ἰορδάνου, ("to the region/border of Judea, and/also/even/namely beyond the Jordan,") – Alexandrian text-type: Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants], Tischendorf 8th Edition 1864–94, Nestle 1904 εἰς τὰ ὅρια τῆς Ἰουδαίας διὰ τοῦ πέραν τοῦ Ἰορδάνου· ("to the region/border of Judea by/through the [land] beyond the Jordan.") – "Byz": Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church Mark 10:2 προσελθόντες Φαρισαῖοι ("the Pharisees came") – A B K L Γ Δ Ψ "ƒ"13 28. 700. 892. 1010. 1079. 1546. 1646. "Byz" copbo goth προσελθόντες οἱ Φαρισαῖοι (word order varies) – א C X verse omitted by D a, b, d, k, r1, syrsin (syrcur) Mark 10:47 Ναζαρηνός – B L W Δ Θ Ψ Ναζορηνός – D Ναζωρινός – 28 Ναζωραιός – א A C &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;1 Textual variants in Mark 11 Mark 11:26 Verse omitted by א B L W Δ Ψ 565. 700. 892. 1216. k, l, syrs,pal, cop Verse included by K X Θ Π 28. "Byz" &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;1 Textual variants in Mark 12 Mark 12:19 τέκνον - אc2a B L W Δ Θ Codex Athous Lavrensis "ƒ"1:(1. 118. 205. 209. 872. 1582.) 579. 700. 892. 1093. 1342. 1654. 2193. 2542. 2786. 2886. it τέκνα - 𝔓45 D "ƒ"13:(13. 69. 124. 346. 543. 788. 826. 828. 983. 1689.) 28. 427. 565. 732. 740. 792. 1424. 1542.s 1593. ℓ "950" Byz lat sys,h,p co go eth &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;1 Textual variants in Mark 13 Mark 13:32 οὐδὲ ὁ υἱός - All manuscripts except those cited below omit - X 389. 983. 1273. 1689. 
vgms &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;5 Textual variants in Mark 14 Mark 14:30 πρὶν ἢ δὶς ἀλέκτορα φωνῆσαι ("before that the rooster has crowed twice") – A "Byz" πρὶν ἀλέκτορα φωνῆσαι ("before the rooster has crowed") – א C* aeth, arm, Western text-type: D cu2 lat.afr-eur Mark 14:39 τὸν αὐτὸν λόγον εἰπών ("spoke the same words") – omitted by D a, b, c, d, ff2, k, (syrcur) Mark 14:68 καὶ άλέκτωρ ἐφώνησεν ("and the rooster crowed") – inserted by Western and Byzantine text-types after προαύλιον; not found in Alexandrian text-type (א B L it, 17 "c", me) Mark 14:72a εὐθὺς ("immediately") – Alexandrian text-type; omitted by "Byz" ἐκ δευτέρου ("for the second time") – omitted by א "c", L "c", vg.cod Mark 14:72b πριν αλεκτορα φωνηϲαι τριϲ με απαρνηϲη ("before the rooster has crowed thrice me you will have denied") – א "c"; several other mss also omit δίς ("twice") mss such as A and "Byz" do include δίς ("twice"), but in varying word orders: Πρὶν ἀλέκτορα δὶς φωνῆσαι τρίς με ἀπαρνήσῃ ("before the rooster twice has crowed thrice me you will have denied") – Westcott and Hort 1881, Westcott and Hort / [NA27 and UBS4 variants], Nestle 1904 Πρὶν ἀλέκτορα φωνῆσαι δίς, ἀπαρνήσῃ με τρίς ("before the rooster has crowed twice, you will have denied me thrice") – Stephanus Textus Receptus 1550, Scrivener's Textus Receptus 1894, RP Byzantine Majority Text 2005, Greek Orthodox Church πρὶν ἀλέκτορα φωνῆσαι δὶς τρίς με ἀπαρνήσῃ ("before the rooster has crowed twice thrice me you will have denied") – Tischendorf 8th Edition 1864–94 &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;4 Textual variants in Mark 15 Mark 15:28 Verse omitted by א B C D Ψ k syrs cop (see Ps 22:2) ἐγκατέλιπές με ("forsaken me") – א B Ψ 059 vg, syrs, p, copsa, bo, fay, geo ἐγκατέλειπές με – L 0112 565. 892. με ἐγκατέλιπες (see Mt 27:46) – C P, X Δ Θ Π2, "ƒ"1 "ƒ"13 28. 700. 1010. 1071. 1079. 1195. 1216. 1230. 1241. 1242. 1253. 1344. 1365. 1546. 1646. 2148. 2174. "Byz", Lect, it, goth με ἐγκατέλειπες – A Π* με ἐγκατέλειπας – K 1009. (ℓ "70") με ἐγκατέλιπας – 33. ὠνείδισάς με ("insult me") – (D) itc, (i), (k) Mark 15:40 Μαρία ἡ Ἰακώβου τοῦ μικροῦ καὶ Ἰωσῆτος μήτηρ, ("Mary the mother of James the Less and Joses,") – אc2 B (ἡ Ἰωσῆτος) Δ Θ 0184 "ƒ"1 1542.s* ℓ "844" Μαρία ἡ τοῦ Ἰακώβου τοῦ μικροῦ καὶ Ἰωσῆ μήτηρ, ("Mary the mother of James the Less and Joses,") – A Γ 700. 1241. "Byz" Mark 15:47 Μαρία ἡ Ἰωσῆτος ("Mary the mother of Joses") – אc2 B Δ Ψ* 083 "ƒ"1 syp, h Μαρία Ἰωσῆ ("Mary the mother of Joses") – 28. 205. 209. 273. 427. 579. 700. 732. 892. 1424. 1593. 2193.c 2738. 2886. ℓ "60"(1) ℓ "387"(1) ℓ "950" "Byz" &lt;br&gt;&lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;1 Textual variants in Mark 16 Mark 16:8-20 Entire pericope omitted by א B 304 &lt;/onlyinclude&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathfrak{M}" } ]
https://en.wikipedia.org/wiki?curid=61197620
61199339
Lists of uniform tilings on the sphere, plane, and hyperbolic plane
In geometry, many uniform tilings on the sphere, the Euclidean plane, and the hyperbolic plane can be made by Wythoff construction within a fundamental triangle, (p q r), defined by internal angles π/p, π/q, and π/r. Special cases are right triangles (p q 2). Uniform solutions are constructed by a single generator point with 7 possible positions within the fundamental triangle: the 3 corners, points along the 3 edges, and the triangle interior. All vertices exist at the generator, or a reflected copy of it. Edges exist between a generator point and its image across a mirror. Up to 3 face types exist centered on the fundamental triangle corners. Right triangle domains can have as few as 1 face type, making regular forms, while general triangles have at least 2 triangle types, leading at best to a quasiregular tiling. There are different notations for expressing these uniform solutions: the Wythoff symbol, the Coxeter diagram, and Coxeter's t-notation. Simple tiles are generated by Möbius triangles with whole numbers p, q, r, while Schwarz triangles allow rational numbers p, q, r and allow star polygon faces, and have overlapping elements. 7 generator points. There are seven generator points for each set of formula_0, along with a few special forms. There are three special cases: formula_1, which mixes the symbols formula_2 and formula_3; formula_4, the snub forms; and formula_5, a snub form combining four numbers. Symmetry triangles. There are 4 symmetry classes of reflection on the sphere, and three in the Euclidean plane. A few of the infinitely many such patterns in the hyperbolic plane are also listed. (Increasing any of the numbers defining a hyperbolic or Euclidean tiling makes another hyperbolic tiling.) Point groups: Euclidean (affine) groups: Hyperbolic groups: The above symmetry groups only include the integer solutions on the sphere. The list of Schwarz triangles includes rational numbers, and determines the full set of solutions of nonconvex uniform polyhedra. In the tilings above, each triangle is a fundamental domain, colored by even and odd reflections. Summary spherical, Euclidean and hyperbolic tilings. Selected tilings created by the Wythoff construction are given below. Spherical tilings ("r" = 2). Some overlapping spherical tilings ("r" = 2). Tilings are shown as polyhedra. Some of the forms are degenerate, given with brackets for vertex figures, with overlapping edges or vertices. Dihedral symmetry ("q" = "r" = 2). Spherical tilings with dihedral symmetry exist for all formula_8, many with digon faces which become degenerate polyhedra. Two of the eight forms (rectified and cantellated) are replications and are skipped in the table. Euclidean and hyperbolic tilings ("r" = 2). Some representative hyperbolic tilings are given, and shown as a Poincaré disk projection. Euclidean and hyperbolic tilings ("r" &gt; 2). The Coxeter–Dynkin diagram is given in a linear form, although it is actually a triangle, with the trailing segment r connecting to the first node.
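As a quick illustration of how the three geometries are distinguished, the type of a fundamental triangle (p q r) can be checked numerically from its angle sum π/p + π/q + π/r: a sum greater than π gives a spherical tiling, equal to π a Euclidean one, and less than π a hyperbolic one. The following C sketch is not part of the original article; it simply classifies a few familiar triangles, using an arbitrary tolerance for the floating-point comparison so that rational (Schwarz) triangles can also be tested.

```c
#include <stdio.h>

/* Classify a fundamental triangle (p q r) by its internal angle sum.
   The angles are pi/p, pi/q, pi/r; the sum exceeds pi on the sphere,
   equals pi in the Euclidean plane, and is below pi in the hyperbolic
   plane.  Rational p, q, r (Schwarz triangles) can be tested as well. */
static const char *classify(double p, double q, double r) {
    double s = 1.0/p + 1.0/q + 1.0/r;   /* angle sum divided by pi */
    if (s > 1.0 + 1e-12) return "spherical";
    if (s < 1.0 - 1e-12) return "hyperbolic";
    return "Euclidean";
}

int main(void) {
    /* (3 3 2), (4 3 2), (5 3 2): spherical; (4 4 2), (6 3 2), (3 3 3):
       Euclidean; (7 3 2), (5 4 2): hyperbolic. */
    double t[][3] = { {3,3,2}, {4,3,2}, {5,3,2}, {4,4,2},
                      {6,3,2}, {3,3,3}, {7,3,2}, {5,4,2} };
    for (int i = 0; i < 8; i++)
        printf("(%g %g %g): %s\n", t[i][0], t[i][1], t[i][2],
               classify(t[i][0], t[i][1], t[i][2]));
    return 0;
}
```

Increasing any of p, q, r lowers the angle sum, which is why raising any number in a Euclidean or hyperbolic triangle always yields another hyperbolic tiling, as noted above.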
[ { "math_id": 0, "text": "p, q, r" }, { "math_id": 1, "text": "p\\ q\\ (r\\ s)\\ |" }, { "math_id": 2, "text": "p\\ q\\ r\\ |" }, { "math_id": 3, "text": "p\\ q\\ s\\ |" }, { "math_id": 4, "text": "|\\ p\\ q\\ r" }, { "math_id": 5, "text": "|\\ p\\ q\\ r\\ s" }, { "math_id": 6, "text": "p = 2, 3, 4 \\dots " }, { "math_id": 7, "text": "4p" }, { "math_id": 8, "text": "p = 2, 3, 4, \\dots " } ]
https://en.wikipedia.org/wiki?curid=61199339
612
Arithmetic mean
Type of average of a collection of numbers In mathematics and statistics, the arithmetic mean, arithmetic average, or just the "mean" or "average" (when the context is clear) is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as the geometric mean and the harmonic mean. In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population. While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency. Definition. The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. Symbolically, for a data set consisting of the values formula_0, the arithmetic mean is defined by the formula: formula_1 For example, if the monthly salaries of formula_2 employees are formula_3, then the arithmetic mean is: formula_4 If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the "population mean" and denoted by the Greek letter formula_5. If the data set is a statistical sample (a subset of the population), it is called the "sample mean" (which for a data set formula_6 is denoted as formula_7). The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (meaning its coefficients sum to formula_8), it can be defined on a convex space, not only a vector space. Motivating properties. The arithmetic mean has several properties that make it interesting, especially as a measure of central tendency. These include: If numbers formula_9 have mean formula_10, then formula_11. Since formula_12 is the distance from a given number to the mean, one way to interpret this property is that the numbers to the left of the mean are balanced by the numbers to the right of it; the mean is the only single number for which the residuals (deviations from the estimate) sum to zero. The mean is translation-invariant: adding the same constant formula_13 to every observation shifts the mean by that constant, formula_14. If a single number must serve as a "typical" value for the set, the arithmetic mean does this best in the sense of minimizing the sum of squared deviations formula_15 from that value. The mean is also homogeneous: multiplying every observation by a constant multiplies the mean by the same constant, formula_16 So, for example, calculating a mean of measurements in litres and then converting the result to gallons gives the same answer as converting to gallons first and then calculating the mean. Contrast with median. The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger, and no more than half are smaller than it. If elements in the data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample formula_17. The mean is formula_18, as is the median. However, when we consider a sample that cannot be arranged to increase arithmetically, such as formula_19, the median and arithmetic average can differ significantly. In this case, the arithmetic average is formula_20, while the median is formula_21. The average value can vary considerably from most values in the sample and can be larger or smaller than most. There are applications of this phenomenon in many fields. For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income. Generalizations.
Weighted average. A weighted average, or weighted mean, is an average in which some data points count more heavily than others in that they are given more weight in the calculation. For example, the arithmetic mean of formula_22 and formula_23 is formula_24, or equivalently formula_25. In contrast, a "weighted" mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated as formula_26. Here the weights, which necessarily sum to one, are formula_27 and formula_28, the former being twice the latter. The arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all weights are equal to the same number (formula_29 in the above example and formula_30 in a situation with formula_31 numbers being averaged). Continuous probability distributions. If a numerical property, and any sample of data from it, can take on any value from a continuous range instead of, for example, just integers, then the probability of a number falling into some range of possible values can be described by integrating a continuous probability distribution across this range, even when the naive probability for a sample number taking one certain value from infinitely many is zero. In this context, the analog of a weighted average, in which there are infinitely many possibilities for the precise value of the variable in each range, is called the "mean of the probability distribution". The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the median mentioned above and the mode (the three Ms), are equal. This equality does not hold for other probability distributions, as illustrated for the log-normal distribution. Angles. Particular care is needed when using cyclic data, such as phases or angles. Taking the arithmetic mean of 1° and 359° yields a result of 180°. This is incorrect for two reasons: firstly, angle measurements are only defined up to an additive constant of 360° (formula_32 or, equivalently, formula_33 radians), so the same two directions could just as well be recorded as 1° and −1°, or as 361° and 719°, and each of these choices gives a different naive average; secondly, in this situation 0° (equivalently 360°) is geometrically the better central value, since both measurements lie only 1° away from it but 179° away from 180°, the putative average. In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (that is, define the mean as the central point: the point about which one has the lowest dispersion) and redefine the difference as a modular distance (i.e., the distance on the circle: so the modular distance between 1° and 359° is 2°, not 358°). Symbols and encoding. The arithmetic mean is often denoted by a bar (vinculum or macron), as in formula_10. Some software (text processors, web browsers) may not display the "x̄" symbol correctly. For example, the HTML symbol "x̄" combines two codes — the base letter "x" plus a code for the line above ( ̄ or ¯). In some document formats (such as PDF), the symbol may be replaced by a "¢" (cent) symbol when copied to a text processor such as Microsoft Word. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
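The definitions above translate directly into code. The C sketch below is illustrative only (the function names are chosen for this example): it computes the plain mean of the salary data from the Definition section, the weighted mean of 3 and 5 with weights 2/3 and 1/3, and a mean of the angles 1° and 359°. For the angles it uses the common vector-averaging approach, averaging the corresponding unit vectors and taking the direction of the result; the article's optimization formulation with modular distance gives the same answer (0°, up to a full turn) for this example, though the two constructions are not identical in general.

```c
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Plain arithmetic mean of n values. */
double mean(const double *x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i];
    return s / n;
}

/* Weighted mean; the weights are normalised so that they sum to one. */
double weighted_mean(const double *x, const double *w, int n) {
    double s = 0.0, wsum = 0.0;
    for (int i = 0; i < n; i++) { s += w[i] * x[i]; wsum += w[i]; }
    return s / wsum;
}

/* One common way to average angles (in degrees): average the unit vectors
   and take the direction of the resulting vector.  This avoids the
   1-degree-versus-359-degree pitfall discussed above. */
double circular_mean_deg(const double *deg, int n) {
    double sx = 0.0, sy = 0.0, rad;
    for (int i = 0; i < n; i++) {
        rad = deg[i] * M_PI / 180.0;
        sx += cos(rad);
        sy += sin(rad);
    }
    rad = atan2(sy, sx) * 180.0 / M_PI;      /* in (-180, 180] */
    return rad < 0.0 ? rad + 360.0 : rad;    /* map to [0, 360) */
}

int main(void) {
    double salaries[] = {2500,2700,2400,2300,2550,2650,2750,2450,2600,2400};
    double pair[] = {3, 5}, w[] = {2, 1};    /* first value weighted twice */
    double angles[] = {1, 359};
    printf("mean salary      = %g\n", mean(salaries, 10));            /* 2530        */
    printf("weighted mean    = %g\n", weighted_mean(pair, w, 2));     /* 11/3        */
    printf("naive angle mean = %g\n", mean(angles, 2));               /* 180         */
    printf("circular mean    = %g\n", circular_mean_deg(angles, 2));  /* ~0 mod 360  */
    return 0;
}
```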
[ { "math_id": 0, "text": "x_1,\\dots,x_n" }, { "math_id": 1, "text": "\\bar{x}=\\frac{1}{n}\\left (\\sum_{i=1}^n{x_i}\\right)\n=\\frac{x_1+x_2+\\dots+x_n}{n}" }, { "math_id": 2, "text": "10" }, { "math_id": 3, "text": "\\{2500,2700,2400,2300,2550,2650,2750,2450,2600,2400\\}" }, { "math_id": 4, "text": "\\frac{2500+2700+2400+2300+2550+2650+2750+2450+2600+2400}{10}=2530" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "\\overline{X}" }, { "math_id": 8, "text": "1" }, { "math_id": 9, "text": "x_1,\\dotsc,x_n" }, { "math_id": 10, "text": "\\bar{x}" }, { "math_id": 11, "text": "(x_1-\\bar{x})+\\dotsb+(x_n-\\bar{x})=0" }, { "math_id": 12, "text": "x_i-\\bar{x}" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "\\overline{x + a} = \\bar{x} + a" }, { "math_id": 15, "text": "(x_i-\\bar{x})^2" }, { "math_id": 16, "text": "\\text{avg}(ca_{1},\\cdots,ca_{n})=c\\cdot\\text{avg}(a_{1},\\cdots,a_{n})." }, { "math_id": 17, "text": "\\{1,2,3,4\\}" }, { "math_id": 18, "text": "2.5" }, { "math_id": 19, "text": "\\{1,2,4,8,16\\}" }, { "math_id": 20, "text": "6.2" }, { "math_id": 21, "text": "4" }, { "math_id": 22, "text": "3" }, { "math_id": 23, "text": "5" }, { "math_id": 24, "text": "\\frac{3+5}{2}=4" }, { "math_id": 25, "text": "3 \\cdot \\frac{1}{2}+5 \\cdot \\frac{1}{2}=4" }, { "math_id": 26, "text": "3 \\cdot \\frac{2}{3}+5 \\cdot \\frac{1}{3}=\\frac{11}{3}" }, { "math_id": 27, "text": "\\frac{2}{3}" }, { "math_id": 28, "text": "\\frac{1}{3}" }, { "math_id": 29, "text": "\\frac{1}{2}" }, { "math_id": 30, "text": "\\frac{1}{n}" }, { "math_id": 31, "text": "n" }, { "math_id": 32, "text": "2\\pi" }, { "math_id": 33, "text": "\\tau" } ]
https://en.wikipedia.org/wiki?curid=612
612000
Hadamard's inequality
Theorem In mathematics, Hadamard's inequality (also known as Hadamard's theorem on determinants) is a result first published by Jacques Hadamard in 1893. It is a bound on the determinant of a matrix whose entries are complex numbers in terms of the lengths of its column vectors. In geometrical terms, when restricted to real numbers, it bounds the volume in Euclidean space of "n" dimensions marked out by "n" vectors "vi" for 1 ≤ "i" ≤ "n" in terms of the lengths of these vectors ||"vi"||. Specifically, Hadamard's inequality states that if "N" is the matrix having columns "vi", then formula_0 If the "n" vectors are non-zero, equality in Hadamard's inequality is achieved if and only if the vectors are orthogonal. Alternate forms and corollaries. A corollary is that if the entries of an "n" by "n" matrix "N" are bounded by "B", so |"Nij"| ≤ "B" for all "i" and "j", then formula_1 In particular, if the entries of "N" are +1 and −1 only then formula_2 In combinatorics, matrices "N" for which equality holds, i.e. those with orthogonal columns, are called Hadamard matrices. More generally, suppose that "N" is a complex matrix of order "n", whose entries are bounded by |"Nij"| ≤ 1, for each "i", "j" between 1 and "n". Then Hadamard's inequality states that formula_3 Equality in this bound is attained for a real matrix "N" if and only if "N" is a Hadamard matrix. A positive-semidefinite matrix "P" can be written as "N"*"N", where "N"* denotes the conjugate transpose of "N" (see Decomposition of a semidefinite matrix). Then formula_4 So, the determinant of a positive definite matrix is less than or equal to the product of its diagonal entries. Sometimes this is also known as Hadamard's inequality. Proof. The result is trivial if the matrix "N" is singular, so assume the columns of "N" are linearly independent. By dividing each column by its length, it can be seen that the result is equivalent to the special case where each column has length 1. In other words, if "ei" are unit vectors and "M" is the matrix having the "ei" as columns, then (1) |det "M"| ≤ 1, and equality is achieved if and only if the vectors are an orthogonal set. The general result now follows: formula_5 To prove (1), consider "P" = "M"*"M", where "M"* is the conjugate transpose of "M", and let the eigenvalues of "P" be λ1, λ2, … λ"n". Since the length of each column of "M" is 1, each entry in the diagonal of "P" is 1, so the trace of "P" is "n". Applying the inequality of arithmetic and geometric means, formula_6 so formula_7 If there is equality, then the λ"i"'s must all be equal and their sum is "n", so they must all be 1. The matrix "P" is Hermitian, therefore diagonalizable, so it is the identity matrix—in other words the columns of "M" are an orthonormal set and the columns of "N" are an orthogonal set. Many other proofs can be found in the literature. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
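To make the bound concrete, here is a small numerical check of the inequality stated above for real 3×3 matrices. The C sketch is illustrative and not taken from the cited sources: it computes |det N| and the product of the Euclidean lengths of the columns; for the second matrix, whose columns are mutually orthogonal, the two numbers coincide, as the equality condition predicts.

```c
#include <stdio.h>
#include <math.h>

/* Determinant of a 3x3 matrix by cofactor expansion along the first row. */
double det3(const double a[3][3]) {
    return a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
         - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
         + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]);
}

/* Hadamard bound: product of the Euclidean lengths of the columns. */
double hadamard_bound(const double a[3][3]) {
    double prod = 1.0;
    for (int j = 0; j < 3; j++) {
        double len2 = 0.0;
        for (int i = 0; i < 3; i++) len2 += a[i][j]*a[i][j];
        prod *= sqrt(len2);
    }
    return prod;
}

int main(void) {
    /* A generic matrix: |det| = 13, strictly below the bound (about 15.81). */
    double n1[3][3] = {{1, 2, 0}, {0, 1, 3}, {2, 0, 1}};
    /* Orthogonal (scaled) columns: |det| = 30 attains the bound 2*5*3 = 30. */
    double n2[3][3] = {{2, 0, 0}, {0, 0, 3}, {0, 5, 0}};
    printf("|det n1| = %g, bound = %g\n", fabs(det3(n1)), hadamard_bound(n1));
    printf("|det n2| = %g, bound = %g\n", fabs(det3(n2)), hadamard_bound(n2));
    return 0;
}
```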
[ { "math_id": 0, "text": "\\left| \\det(N) \\right| \\le \\prod_{i=1}^n \\|v_i\\|." }, { "math_id": 1, "text": "\\left| \\det(N) \\right| \\le B^n n^{n/2}." }, { "math_id": 2, "text": "\\left| \\det(N) \\right| \\le n^{n/2}." }, { "math_id": 3, "text": "|\\operatorname{det}(N)| \\leq n^{n/2}." }, { "math_id": 4, "text": "\\det(P)=\\det(N)^2 \\le \\prod_{i=1}^n \\|v_i\\|^2 = \\prod_{i=1}^n p_{ii}." }, { "math_id": 5, "text": "\\left| \\det N \\right| = \\bigg (\\prod_{i=1}^n \\|v_i\\| \\bigg) \\left|\\det M\\right| \\leq \\prod_{i=1}^n \\|v_i\\|." }, { "math_id": 6, "text": "\\det P = \\prod_{i=1}^n \\lambda_i \\le \\bigg({1 \\over n}\\sum_{i=1}^n \\lambda_i\\bigg)^n = \\left({1 \\over n} \\operatorname{tr} P \\right)^n = 1^n = 1," }, { "math_id": 7, "text": " \\left| \\det M \\right| = \\sqrt{\\det P} \\le 1." } ]
https://en.wikipedia.org/wiki?curid=612000
612027
KASUMI
Block cipher KASUMI is a block cipher used in UMTS, GSM, and GPRS mobile communications systems. In UMTS, KASUMI is used in the confidentiality ("f8") and integrity algorithms ("f9") with names UEA1 and UIA1, respectively. In GSM, KASUMI is used in the A5/3 key stream generator and in GPRS in the GEA3 key stream generator. KASUMI was designed for 3GPP to be used in UMTS security system by the Security Algorithms Group of Experts (SAGE), a part of the European standards body ETSI. Because of schedule pressures in 3GPP standardization, instead of developing a new cipher, SAGE agreed with 3GPP technical specification group (TSG) for system aspects of 3G security (SA3) to base the development on an existing algorithm that had already undergone some evaluation. They chose the cipher algorithm MISTY1 developed and patented by Mitsubishi Electric Corporation. The original algorithm was slightly modified for easier hardware implementation and to meet other requirements set for 3G mobile communications security. KASUMI is named after the original algorithm MISTY1 — 霞み (hiragana かすみ, romaji "kasumi") is the Japanese word for "mist". In January 2010, Orr Dunkelman, Nathan Keller and Adi Shamir released a paper showing that they could break Kasumi with a related-key attack and very modest computational resources; this attack is ineffective against MISTY1. Description. KASUMI algorithm is specified in a 3GPP technical specification. KASUMI is a block cipher with 128-bit key and 64-bit input and output. The core of KASUMI is an eight-round Feistel network. The round functions in the main Feistel network are irreversible Feistel-like network transformations. In each round the round function uses a round key which consists of eight 16-bit sub keys derived from the original 128-bit key using a fixed key schedule. Key schedule. The 128-bit key "K" is divided into eight 16-bit sub keys "Ki": formula_0 Additionally a modified key "K"', similarly divided into 16-bit sub keys "K'i", is used. The modified key is derived from the original key by XORing with 0x123456789ABCDEFFEDCBA9876543210 (chosen as a "nothing up my sleeve" number). Round keys are either derived from the sub keys by bitwise rotation to left by a given amount and from the modified sub keys (unchanged). The round keys are as follows: formula_1 Sub key index additions are cyclic so that if "i+j" is greater than 8 one has to subtract 8 from the result to get the actual sub key index. The algorithm. KASUMI algorithm processes the 64-bit word in two 32-bit halves, left (formula_2) and right (formula_3). The input word is concatenation of the left and right halves of the first round: formula_4. In each round the right half is XOR'ed with the output of the round function after which the halves are swapped: formula_5 where "KLi", "KOi", "KIi" are round keys for the "i"th round. The round functions for even and odd rounds are slightly different. In each case the round function is a composition of two functions "FLi" and "FOi". For an odd round formula_6 and for an even round formula_7. The output is the concatenation of the outputs of the last round. formula_8. Both "FL" and "FO" functions divide the 32-bit input data to two 16-bit halves. The "FL" function is an irreversible bit manipulation while the "FO" function is an irreversible three round Feistel-like network. Function FL. The 32-bit input "x" of formula_9 is divided to two 16-bit halves formula_10. 
First the left half of the input formula_11 is ANDed bitwise with round key formula_12 and rotated left by one bit. The result of that is XOR'ed to the right half of the input formula_13 to get the right half of the output formula_14. formula_15 Then the right half of the output formula_14 is ORed bitwise with the round key formula_16 and rotated left by one bit. The result of that is XOR'ed to the left half of the input formula_11 to get the left half of the output formula_17. formula_18 Output of the function is concatenation of the left and right halves formula_19. Function FO. The 32-bit input "x" of formula_20 is divided into two 16-bit halves formula_21, and passed through three rounds of a Feistel network. In each of the three rounds (indexed by "j" that takes values 1, 2, and 3) the left half is modified to get the new right half and the right half is made the left half of the next round. formula_22 The output of the function is formula_23. Function FI. The function "FI" is an irregular Feistel-like network. The 16-bit input formula_24 of the function formula_25 is divided to two halves formula_21 of which formula_26 is 9 bits wide and formula_27 is 7 bits wide. Bits in the left half formula_26 are first shuffled by 9-bit substitution box (S-box) "S9" and the result is XOR'ed with the zero-extended right half formula_27 to get the new 9-bit right half formula_28. formula_29 Bits of the right half formula_27 are shuffled by 7-bit S-box "S7" and the result is XOR'ed with the seven least significant bits ("LS7") of the new right half formula_28 to get the new 7-bit left half formula_30. formula_31 The intermediate word formula_32 is XORed with the round key KI to get formula_33 of which formula_34 is 7 bits wide and formula_35 is 9 bits wide. formula_36 Bits in the right half formula_35 are then shuffled by 9-bit S-box "S9" and the result is XOR'ed with the zero-extended left half formula_34 to get the new 9-bit right half of the output formula_37. formula_38 Finally the bits of the left half formula_34 are shuffled by 7-bit S-box "S7" and the result is XOR'ed with the seven least significant bits ("LS7") of the right half of the output formula_37 to get the 7-bit left half formula_39 of the output. formula_40 The output is the concatenation of the final left and right halves formula_41. Substitution boxes. The substitution boxes (S-boxes) S7 and S9 are defined by both bit-wise AND-XOR expressions and look-up tables in the specification. The bit-wise expressions are intended to hardware implementation but nowadays it is customary to use the look-up tables even in the HW design. 
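The key schedule and the FL function described above are simple enough to sketch directly in C. The following is an illustrative reimplementation written from the description in this article, not the official reference code: the 128-bit key is taken as eight 16-bit words K1...K8, the constant given earlier is split into eight 16-bit words (0x0123, 0x4567, ..., 0x3210, with the leading zero written out) for the XOR that produces K', and the cyclic subkey indices are handled with a modulo. Round indices are 0-based here, whereas the text numbers the rounds 1 to 8.

```c
#include <stdint.h>
#include <stdio.h>

/* 16-bit rotate left. */
static uint16_t rol16(uint16_t x, int n) {
    return (uint16_t)((x << n) | (x >> (16 - n)));
}

/* Round-key arrays, indexed by round i = 0..7 (rounds 1..8 in the text). */
static uint16_t KLi1[8], KLi2[8];
static uint16_t KOi1[8], KOi2[8], KOi3[8];
static uint16_t KIi1[8], KIi2[8], KIi3[8];

/* Key schedule: k[0..7] holds the 16-bit subkeys K1..K8 of the 128-bit key. */
static void kasumi_key_schedule(const uint16_t k[8]) {
    /* Subkeys of K' = K xor 0x0123456789ABCDEFFEDCBA9876543210, as 16-bit words. */
    static const uint16_t C[8] = { 0x0123, 0x4567, 0x89AB, 0xCDEF,
                                   0xFEDC, 0xBA98, 0x7654, 0x3210 };
    uint16_t kp[8];
    for (int i = 0; i < 8; i++) kp[i] = k[i] ^ C[i];
    for (int i = 0; i < 8; i++) {            /* subkey indices are cyclic mod 8 */
        KLi1[i] = rol16(k[i], 1);
        KLi2[i] = kp[(i + 2) % 8];
        KOi1[i] = rol16(k[(i + 1) % 8], 5);
        KOi2[i] = rol16(k[(i + 5) % 8], 8);
        KOi3[i] = rol16(k[(i + 6) % 8], 13);
        KIi1[i] = kp[(i + 4) % 8];
        KIi2[i] = kp[(i + 3) % 8];
        KIi3[i] = kp[(i + 7) % 8];
    }
}

/* FL function of round i: an irreversible-looking bit manipulation on x = l || r. */
static uint32_t kasumi_FL(int i, uint32_t x) {
    uint16_t l  = (uint16_t)(x >> 16), r = (uint16_t)x;
    uint16_t rp = (uint16_t)(rol16(l & KLi1[i], 1) ^ r);   /* r' */
    uint16_t lp = (uint16_t)(rol16(rp | KLi2[i], 1) ^ l);  /* l' */
    return ((uint32_t)lp << 16) | rp;
}

int main(void) {
    const uint16_t key[8] = { 0x0011, 0x2233, 0x4455, 0x6677,
                              0x8899, 0xAABB, 0xCCDD, 0xEEFF };
    kasumi_key_schedule(key);
    printf("FL of round 1 applied to 0x00000000: %08X\n",
           (unsigned)kasumi_FL(0, 0x00000000u));
    return 0;
}
```

The FO and FI functions would be built on top of this in the same way, with FI using the S7 and S9 look-up tables listed in the next section; they are omitted here to keep the sketch short.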
S7 is defined by the following array: int S7[128] = { 54, 50, 62, 56, 22, 34, 94, 96, 38, 6, 63, 93, 2, 18,123, 33, 55,113, 39,114, 21, 67, 65, 12, 47, 73, 46, 27, 25,111,124, 81, 53, 9,121, 79, 52, 60, 58, 48,101,127, 40,120,104, 70, 71, 43, 20,122, 72, 61, 23,109, 13,100, 77, 1, 16, 7, 82, 10,105, 98, 117,116, 76, 11, 89,106, 0,125,118, 99, 86, 69, 30, 57,126, 87, 112, 51, 17, 5, 95, 14, 90, 84, 91, 8, 35,103, 32, 97, 28, 66, 102, 31, 26, 45, 75, 4, 85, 92, 37, 74, 80, 49, 68, 29,115, 44, 64,107,108, 24,110, 83, 36, 78, 42, 19, 15, 41, 88,119, 59, 3 S9 is defined by the following array: int S9[512] = { 167,239,161,379,391,334, 9,338, 38,226, 48,358,452,385, 90,397, 183,253,147,331,415,340, 51,362,306,500,262, 82,216,159,356,177, 175,241,489, 37,206, 17, 0,333, 44,254,378, 58,143,220, 81,400, 95, 3,315,245, 54,235,218,405,472,264,172,494,371,290,399, 76, 165,197,395,121,257,480,423,212,240, 28,462,176,406,507,288,223, 501,407,249,265, 89,186,221,428,164, 74,440,196,458,421,350,163, 232,158,134,354, 13,250,491,142,191, 69,193,425,152,227,366,135, 344,300,276,242,437,320,113,278, 11,243, 87,317, 36, 93,496, 27, 487,446,482, 41, 68,156,457,131,326,403,339, 20, 39,115,442,124, 475,384,508, 53,112,170,479,151,126,169, 73,268,279,321,168,364, 363,292, 46,499,393,327,324, 24,456,267,157,460,488,426,309,229, 439,506,208,271,349,401,434,236, 16,209,359, 52, 56,120,199,277, 465,416,252,287,246, 6, 83,305,420,345,153,502, 65, 61,244,282, 173,222,418, 67,386,368,261,101,476,291,195,430, 49, 79,166,330, 280,383,373,128,382,408,155,495,367,388,274,107,459,417, 62,454, 132,225,203,316,234, 14,301, 91,503,286,424,211,347,307,140,374, 35,103,125,427, 19,214,453,146,498,314,444,230,256,329,198,285, 50,116, 78,410, 10,205,510,171,231, 45,139,467, 29, 86,505, 32, 72, 26,342,150,313,490,431,238,411,325,149,473, 40,119,174,355, 185,233,389, 71,448,273,372, 55,110,178,322, 12,469,392,369,190, 1,109,375,137,181, 88, 75,308,260,484, 98,272,370,275,412,111, 336,318, 4,504,492,259,304, 77,337,435, 21,357,303,332,483, 18, 47, 85, 25,497,474,289,100,269,296,478,270,106, 31,104,433, 84, 414,486,394, 96, 99,154,511,148,413,361,409,255,162,215,302,201, 266,351,343,144,441,365,108,298,251, 34,182,509,138,210,335,133, 311,352,328,141,396,346,123,319,450,281,429,228,443,481, 92,404, 485,422,248,297, 23,213,130,466, 22,217,283, 70,294,360,419,127, 312,377, 7,468,194, 2,117,295,463,258,224,447,247,187, 80,398, 284,353,105,390,299,471,470,184, 57,200,348, 63,204,188, 33,451, 97, 30,310,219, 94,160,129,493, 64,179,263,102,189,207,114,402, 438,477,387,122,192, 42,381, 5,145,118,180,449,293,323,136,380, 43, 66, 60,455,341,445,202,432, 8,237, 15,376,436,464, 59,461 Cryptanalysis. In 2001, an impossible differential attack on six rounds of KASUMI was presented by Kühn (2001). In 2003 Elad Barkan, Eli Biham and Nathan Keller demonstrated man-in-the-middle attacks against the GSM protocol which avoided the A5/3 cipher and thus breaking the protocol. This approach does not attack the A5/3 cipher, however. The full version of their paper was published later in 2006. In 2005, Israeli researchers Eli Biham, Orr Dunkelman and Nathan Keller published a related-key rectangle (boomerang) attack on KASUMI that can break all 8 rounds faster than exhaustive search. The attack requires 254.6 chosen plaintexts, each of which has been encrypted under one of four related keys, and has a time complexity equivalent to 276.1 KASUMI encryptions. 
While this is obviously not a practical attack, it invalidates some proofs about the security of the 3GPP protocols that had relied on the presumed strength of KASUMI. In 2010, Dunkelman, Keller and Shamir published a new attack that allows an adversary to recover a full A5/3 key by related-key attack. The time and space complexities of the attack are low enough that the authors carried out the attack in two hours on an Intel Core 2 Duo desktop computer even using the unoptimized reference KASUMI implementation. The authors note that this attack may not be applicable to the way A5/3 is used in 3G systems; their main purpose was to discredit 3GPP's assurances that their changes to MISTY wouldn't significantly impact the security of the algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K=K_1 \\| K_2 \\| K_3 \\| K_4 \\| K_5 \\| K_6 \\| K_7 \\| K_8\\," }, { "math_id": 1, "text": "\n\\begin{array}{lcl}\nKL_{i,1} & = & {\\rm ROL}(K_i,1) \\\\\nKL_{i,2} & = & K'_{i+2} \\\\\nKO_{i,1} & = & {\\rm ROL}(K_{i+1},5) \\\\\nKO_{i,2} & = & {\\rm ROL}(K_{i+5},8) \\\\\nKO_{i,3} & = & {\\rm ROL}(K_{i+6},13) \\\\\nKI_{i,1} & = & K'_{i+4} \\\\\nKI_{i,2} & = & K'_{i+3} \\\\\nKI_{i,3} & = & K'_{i+7}\n\\end{array}\n" }, { "math_id": 2, "text": "L_i" }, { "math_id": 3, "text": "R_i" }, { "math_id": 4, "text": "{\\rm input} = R_0\\|L_0\\," }, { "math_id": 5, "text": "\\begin{array}{rcl}L_i & = & F_i(KL_i,KO_i,KI_i,L_{i-1})\\oplus R_{i-1} \\\\ R_i & = & L_{i-1}\\end{array}" }, { "math_id": 6, "text": "F_i(K_i,L_{i-1})=FO(KO_i, KI_i, FL(KL_i, L_{i-1}))\\," }, { "math_id": 7, "text": "F_i(K_i,L_{i-1})=FL(KL_i, FO(KO_i, KI_i, L_{i-1}))\\," }, { "math_id": 8, "text": "{\\rm output} = R_8\\|L_8\\," }, { "math_id": 9, "text": "FL(KL_i,x)" }, { "math_id": 10, "text": "x=l\\|r" }, { "math_id": 11, "text": "l" }, { "math_id": 12, "text": "KL_{i,1}" }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "r'" }, { "math_id": 15, "text": "r'= {\\rm ROL}(l \\wedge KL_{i,1},1) \\oplus r" }, { "math_id": 16, "text": "KL_{i,2}" }, { "math_id": 17, "text": "l'" }, { "math_id": 18, "text": "l'= {\\rm ROL}(r' \\vee KL_{i,2},1) \\oplus l" }, { "math_id": 19, "text": "x'=l'\\|r'" }, { "math_id": 20, "text": "FO(KO_i, KI_i, x)" }, { "math_id": 21, "text": "x=l_0\\|r_0" }, { "math_id": 22, "text": "\n\\begin{array}{lcl}\nr_j & = & FI(KI_{i,j}, l_{j-1} \\oplus KO_{i,j}) \\oplus r_{j-1} \\\\\nl_j & = & r_{j-1}\n\\end{array}\n" }, { "math_id": 23, "text": "x' = l_3\\|r_3" }, { "math_id": 24, "text": "x" }, { "math_id": 25, "text": "FI(Ki,x)" }, { "math_id": 26, "text": "l_0" }, { "math_id": 27, "text": "r_0" }, { "math_id": 28, "text": "r_1" }, { "math_id": 29, "text": "r_1=S9(l_0)\\oplus (00\\|r_0)\\," }, { "math_id": 30, "text": "l_1" }, { "math_id": 31, "text": "l_1=S7(r_0)\\oplus LS7(r_1)\\," }, { "math_id": 32, "text": "x_1=l_1\\|r_1" }, { "math_id": 33, "text": "x_2=l_2\\|r_2" }, { "math_id": 34, "text": "l_2" }, { "math_id": 35, "text": "r_2" }, { "math_id": 36, "text": "x_2=KI\\oplus x_1" }, { "math_id": 37, "text": "r_3" }, { "math_id": 38, "text": "r_3=S9(r_2)\\oplus (00\\|l_2)\\," }, { "math_id": 39, "text": "l_3" }, { "math_id": 40, "text": "l_3=S7(l_2)\\oplus LS7(r_3)\\," }, { "math_id": 41, "text": "x'=l_3\\|r_3" } ]
https://en.wikipedia.org/wiki?curid=612027
612029
Cyclic model
Cosmological models involving indefinite, self-sustaining cycles A cyclic model (or oscillating model) is any of several cosmological models in which the universe follows infinite, or indefinite, self-sustaining cycles. For example, the oscillating universe theory briefly considered by Albert Einstein in 1930 theorized a universe following an eternal series of oscillations, each beginning with a Big Bang and ending with a Big Crunch; in the interim, the universe would expand for a period of time before the gravitational attraction of matter causes it to collapse back in and undergo a bounce. Overview. In the 1920s, theoretical physicists, most notably Albert Einstein, considered the possibility of a cyclic model for the universe as an (everlasting) alternative to the model of an expanding universe. In 1922, Alexander Friedmann introduced the Oscillating Universe Theory. However, work by Richard C. Tolman in 1934 showed that these early attempts failed because of the cyclic problem: according to the Second Law of Thermodynamics, entropy can only increase. This implies that successive cycles grow longer and larger. Extrapolating back in time, cycles before the present one become shorter and smaller culminating again in a Big Bang and thus not replacing it. This puzzling situation remained for many decades until the early 21st century when the recently discovered dark energy component provided new hope for a consistent cyclic cosmology. In 2011, a five-year survey of 200,000 galaxies and spanning 7 billion years of cosmic time confirmed that "dark energy is driving our universe apart at accelerating speeds." One new cyclic model is the brane cosmology model of the creation of the universe, derived from the earlier ekpyrotic model. It was proposed in 2001 by Paul Steinhardt of Princeton University and Neil Turok of Cambridge University. The theory describes a universe exploding into existence not just once, but repeatedly over time. The theory could potentially explain why a repulsive form of energy known as the cosmological constant, which is accelerating the expansion of the universe, is several orders of magnitude smaller than predicted by the standard Big Bang model. A different cyclic model relying on the notion of phantom energy was proposed in 2007 by Lauris Baum and Paul Frampton of the University of North Carolina at Chapel Hill. Other cyclic models include conformal cyclic cosmology and loop quantum cosmology. The Steinhardt–Turok model. In this cyclic model, two parallel orbifold planes or M-branes collide periodically in a higher-dimensional space. The visible four-dimensional universe lies on one of these branes. The collisions correspond to a reversal from contraction to expansion, or a Big Crunch followed immediately by a Big Bang. The matter and radiation we see today were generated during the most recent collision in a pattern dictated by quantum fluctuations created before the branes. After billions of years the universe reached the state we observe today; after additional billions of years it will ultimately begin to contract again. Dark energy corresponds to a force between the branes, and serves the crucial role of solving the monopole, horizon, and flatness problems. Moreover, the cycles can continue indefinitely into the past and the future, and the solution is an attractor, so it can provide a complete history of the universe. As Richard C. Tolman showed, the earlier cyclic model failed because the universe would undergo inevitable thermodynamic heat death. 
However, the newer cyclic model evades this by having a net expansion each cycle, preventing entropy from building up. However, there remain major open issues in the model. Foremost among them is that colliding branes are not understood by string theorists, and nobody knows if the scale invariant spectrum will be destroyed by the big crunch. Moreover, as with cosmic inflation, while the general character of the forces (in the ekpyrotic scenario, a force between branes) required to create the vacuum fluctuations is known, there is no candidate from particle physics. The Baum–Frampton model. This more recent cyclic model of 2007 assumes an exotic form of dark energy called phantom energy, which possesses negative kinetic energy and would usually cause the universe to end in a Big Rip. This condition is achieved if the universe is dominated by dark energy with a cosmological equation of state parameter formula_0 satisfying the condition formula_1, for energy density formula_2 and pressure p. By contrast, Steinhardt–Turok assume formula_3. In the Baum–Frampton model, a septillionth (or less) of a second (i.e. 10−24 seconds or less) before the would-be Big Rip, a turnaround occurs and only one causal patch is retained as our universe. The generic patch contains no quark, lepton or force carrier; only dark energy – and its entropy thereby vanishes. The adiabatic process of contraction of this much smaller universe takes place with constant vanishing entropy and with no matter including no black holes which disintegrated before turnaround. The idea that the universe "comes back empty" is a central new idea of this cyclic model, and avoids many difficulties confronting matter in a contracting phase such as excessive structure formation, proliferation and expansion of black holes, as well as going through phase transitions such as those of QCD and electroweak symmetry restoration. Any of these would tend strongly to produce an unwanted premature bounce, simply to avoid violation of the second law of thermodynamics. The condition of formula_4 may be logically inevitable in a truly infinitely cyclic cosmology because of the entropy problem. Nevertheless, many technical back up calculations are necessary to confirm consistency of the approach. Although the model borrows ideas from string theory, it is not necessarily committed to strings, or to higher dimensions, yet such speculative devices may provide the most expeditious methods to investigate the internal consistency. The value of formula_0 in the Baum–Frampton model can be made arbitrarily close to, but must be less than, −1. See also. Physical cosmologies: Religion: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "w" }, { "math_id": 1, "text": "w\\equiv \\frac{p}{\\rho} <-1" }, { "math_id": 2, "text": "{\\rho}" }, { "math_id": 3, "text": "w {\\geq}-1" }, { "math_id": 4, "text": "w <-1" } ]
https://en.wikipedia.org/wiki?curid=612029
612057
Potential well
Concept in quantum mechanics A potential well is the region surrounding a local minimum of potential energy. Energy captured in a potential well is unable to convert to another type of energy (kinetic energy in the case of a gravitational potential well) because it is trapped in the local minimum of the well. Therefore, a body may not proceed to the global minimum of potential energy, as it would naturally tend to do due to entropy. Overview. Energy may be released from a potential well if sufficient energy is added to the system such that the local maximum is surmounted. In quantum physics, potential energy may escape a potential well without added energy due to the probabilistic characteristics of quantum particles; in these cases a particle may be imagined to tunnel "through" the walls of a potential well. The graph of a 2D potential energy function is a potential energy surface that can be imagined as the Earth's surface in a landscape of hills and valleys. Then a potential well would be a valley surrounded on all sides with higher terrain, which thus could be filled with water (e.g., be a lake) without any water flowing away toward another, lower minimum (e.g. sea level). In the case of gravity, the region around a mass is a gravitational potential well, unless the density of the mass is so low that tidal forces from other masses are greater than the gravity of the body itself. A potential hill is the opposite of a potential well, and is the region surrounding a local maximum. Quantum confinement. Quantum confinement can be observed once the diameter of a material is of the same magnitude as the de Broglie wavelength of the electron wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials. A particle behaves as if it were free when the confining dimension is large compared to the wavelength of the particle. In this regime, the bandgap remains at its original energy because the energy states are continuous. However, as the confining dimension decreases and reaches a certain limit, typically at the nanoscale, the energy spectrum becomes discrete. As a result, the bandgap becomes size-dependent. As the size of the particles decreases, the electrons and electron holes come closer, and the energy required to activate them increases, which ultimately results in a blueshift in light emission. Specifically, the effect describes the phenomenon resulting from electrons and electron holes being squeezed into a dimension that approaches a critical quantum measurement, called the exciton Bohr radius. In current applications, a quantum dot such as a small sphere confines in three dimensions, a quantum wire confines in two dimensions, and a quantum well confines in only one dimension. These are also known as zero-, one- and two-dimensional potential wells, respectively. In these cases they refer to the number of dimensions in which a confined particle can act as a free carrier. See external links, below, for application examples in biotechnology and solar cell technology. Quantum mechanics view. The electronic and optical properties of materials are affected by size and shape. Well-established technical achievements, including quantum dots, were derived from size manipulation and investigated for their theoretical corroboration of the quantum confinement effect. The major part of the theory is that the behaviour of the exciton resembles that of an atom as its surrounding space shrinks. 
A rather good approximation of an exciton's behaviour is the 3-D model of a particle in a box. The solution of this problem provides a direct mathematical connection between energy states and the dimensions of the confining space. Decreasing the volume or the dimensions of the available space increases the energy of the states. Shown in the diagram is the change in electron energy level and bandgap between a nanomaterial and its bulk state. The following equations show the relationship between the energy levels and the dimensions of the box: formula_0 formula_1 Research results provide an alternative explanation of the shift of properties at the nanoscale. In the bulk phase, the surfaces appear to control some of the macroscopically observed properties. However, in nanoparticles, surface molecules do not obey the expected configuration in space. As a result, surface tension changes tremendously. Classical mechanics view. The Young–Laplace equation can give a background on the scale of the forces applied to the surface molecules: formula_2 Under the assumption of a spherical shape formula_3, and solving the Young–Laplace equation for the new radii formula_4 (nm), we estimate the new formula_5 (GPa). The smaller the radius, the greater the pressure. The increase in pressure at the nanoscale results in strong forces toward the interior of the particle. Consequently, the molecular structure of the particle appears to be different from that of the bulk material, especially at the surface. These abnormalities at the surface are responsible for changes in inter-atomic interactions and in the bandgap. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
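As a rough illustration of the size dependence described above, the particle-in-a-box energy formula_1 can be evaluated numerically for an electron in cubic boxes of decreasing side length. The script below is a minimal sketch (the box sizes are arbitrary examples, not values from the article) showing how the ground-state energy, and hence the confinement contribution to the bandgap, grows as the box shrinks:

```python
# Minimal sketch: evaluating the 3-D particle-in-a-box energy levels for an
# electron, to show the blueshift of the ground state as the confining
# length L decreases.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def box_energy(nx: int, ny: int, nz: int, lx: float, ly: float, lz: float) -> float:
    """Energy (in joules) of state (nx, ny, nz) in a box with sides lx, ly, lz (metres)."""
    return (HBAR**2 * math.pi**2 / (2.0 * M_E)) * (
        (nx / lx)**2 + (ny / ly)**2 + (nz / lz)**2
    )

if __name__ == "__main__":
    for L_nm in (20.0, 10.0, 5.0, 2.0):   # cubic "quantum dots" of decreasing size
        L = L_nm * 1e-9
        e_ground = box_energy(1, 1, 1, L, L, L) / EV
        print(f"L = {L_nm:5.1f} nm  ->  E(1,1,1) = {e_ground:.4f} eV")
```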
[ { "math_id": 0, "text": "\\psi_{n_x,n_y,n_z} = \\sqrt{\\frac{8}{L_x L_y L_z}} \\sin \\left( \\frac{n_x \\pi x}{L_x} \\right) \\sin \\left( \\frac{n_y \\pi y}{L_y} \\right) \\sin \\left( \\frac{n_z \\pi z}{L_z} \\right)" }, { "math_id": 1, "text": "E_{n_x,n_y,n_z} = \\frac{\\hbar^2\\pi^2}{2m} \\left[ \\left( \\frac{n_x}{L_x} \\right)^2 + \\left( \\frac{n_y}{L_y} \\right)^2 + \\left( \\frac{n_z}{L_z} \\right)^2 \\right]" }, { "math_id": 2, "text": "\\begin{align}\n\\Delta P &= \\gamma \\nabla \\cdot \\hat n \\\\\n&= 2 \\gamma H \\\\\n&= \\gamma \\left(\\frac{1}{R_1} + \\frac{1}{R_2}\\right)\n\\end{align}" }, { "math_id": 3, "text": "R_1=R_2=R" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "\\Delta P" } ]
https://en.wikipedia.org/wiki?curid=612057
61208339
Brown's vasomotor index
Brown's vasomotor index is a test to assess the degree of vasospasm in peripheral arterial disease. The same test is also used to check whether sympathectomy is a possible management option for peripheral arterial disease. Procedure. The specific nerve of the suspected ischemic limb is anesthetized using local anesthesia. In the case of the lower limbs, the whole limb can be anesthetized using spinal anesthesia. If the ischemic disease is at the stage of vasospasm, the nerve block relieves the sympathetic vasospasm and the temperature of the limb rises after the anesthetic block. The rise in skin temperature of the limb is compared to the rise in mouth temperature for reporting Brown's vasomotor index (BVI). It is mathematically expressed as: formula_0 where formula_1 is the rise in skin temperature and formula_2 is the rise in mouth temperature. Interpretation. In a healthy adult, Brown's vasomotor index is 1. If the index is more than 3.5, sympathectomy may be beneficial for the patient. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
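A minimal sketch of the calculation in formula_0 is given below; the function name and the temperature readings are illustrative assumptions, not clinical data from the source:

```python
# Minimal sketch: computing Brown's vasomotor index, where r_skin and r_mouth
# are the rises in skin and mouth temperature after the anesthetic block.

def browns_vasomotor_index(r_skin: float, r_mouth: float) -> float:
    """BVI = (R_s - R_m) / R_m, with both temperature rises in the same units."""
    if r_mouth == 0:
        raise ValueError("mouth temperature rise must be non-zero")
    return (r_skin - r_mouth) / r_mouth

if __name__ == "__main__":
    bvi = browns_vasomotor_index(r_skin=4.5, r_mouth=1.0)   # hypothetical readings, in degrees C
    print(f"BVI = {bvi:.2f}")
    if bvi > 3.5:
        print("BVI > 3.5: sympathectomy may be beneficial (per the interpretation above)")
```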
[ { "math_id": 0, "text": "BVI = \\frac{R_s - R_m}{R_m}" }, { "math_id": 1, "text": "R_s" }, { "math_id": 2, "text": "R_m" } ]
https://en.wikipedia.org/wiki?curid=61208339
61209678
Advanced eXtensible Interface
Computer bus protocol The Advanced eXtensible Interface (AXI) is an on-chip communication bus protocol and is part of the Advanced Microcontroller Bus Architecture specification (AMBA). AXI was introduced in 2003 with the AMBA3 specification. In 2010, a new revision of AMBA, AMBA4, defined the AXI4, AXI4-Lite and AXI4-Stream protocols. AXI is royalty-free and its specification is freely available from ARM. AMBA AXI specifies many optional signals, which can be included depending on the specific requirements of the design, making AXI a versatile bus for numerous applications. While communication over an AXI bus is between a single initiator and a single target, the specification includes detailed descriptions and signals for N:M interconnects, which extend the bus to topologies with multiple initiators and targets. AMBA AXI4, AXI4-Lite and AXI4-Stream have been adopted by Xilinx and many of its partners as a main communication bus in their products. Thread IDs. Thread IDs allow a single initiator port to support multiple threads, where each thread has in-order access to the AXI address space; however, transactions with different thread IDs issued from the same initiator port may complete out of order with respect to each other. For instance, if one thread ID is blocked by a slow peripheral, another thread ID may continue independently of the order of the first. As another example, one thread on a CPU may be assigned a thread ID for a particular initiator port memory access sequence such as read addr1, write addr1, read addr1; this sequence will complete in order because each transaction has the same initiator port thread ID. Another thread running on the CPU may have another initiator port thread ID assigned to it, and its memory access will be in order as well but may be intermixed with the first thread ID's transactions. Thread IDs on an initiator port are not globally defined; thus an AXI switch with multiple initiator ports will internally prefix the initiator port index to the thread ID and provide this concatenated thread ID to the target device. When the transaction returns to its initiator port of origin, this prefix is used to locate the initiator port and is then truncated. This is why the target port thread ID is wider in bits than the initiator port thread ID. The AXI-Lite bus is an AXI bus that supports only a single thread ID per initiator. This bus is typically used for an endpoint that only needs to communicate with a single initiator device at a time, for example, a simple peripheral such as a UART. In contrast, a CPU is capable of initiating transactions to multiple peripherals and address spaces at a time, and will support more than one thread ID on its AXI initiator ports and AXI target ports. This is why a CPU will typically support a full-specification AXI bus. A typical example of a front-side AXI switch would include a full-specification AXI initiator connected to a CPU initiator, and several AXI-Lite targets connected to the AXI switch from different peripheral devices. Handshake. AXI defines a basic handshake mechanism, composed of a codice_0 and a codice_1 signal. The codice_0 signal is driven by the source to inform the destination entity that the payload on the channel is valid and can be read from that clock cycle onwards. Similarly, the codice_1 signal is driven by the receiving entity to notify that it is prepared to receive data. 
When both the codice_0 and codice_1 signals are high in the same clock cycle, the data payload is considered transferred and the source can either provide a new data payload, by keeping codice_0 high, or terminate the transmission, by de-asserting codice_0. An individual data transfer, that is, a clock cycle in which both codice_0 and codice_1 are high, is called a "beat". Two main rules are defined for the control of these signals: a source must not wait for codice_1 to be asserted before asserting codice_0, and, once asserted, codice_0 must remain high until the handshake completes. Thanks to this handshake mechanism, both the source and the destination can control the flow of data, throttling the speed if needed. Channels. In the AXI specification, five channels are described: the Read address channel, the Read data channel, the Write address channel, the Write data channel and the Write response channel. Other than some basic ordering rules, each channel is independent of the others and has its own pair of codice_13 handshake signals. AXI. Bursts. AXI is a burst-based protocol, meaning that there may be multiple data transfers (or beats) for a single request. This makes it useful in the cases where it is necessary to transfer large amounts of data from or to a specific pattern of addresses. In AXI, bursts can be of three types, selected by the signals ARBURST (for reads) or AWBURST (for writes): In FIXED bursts, each beat within the transfer has the same address. This is useful for repeated access at the same memory location, such as when reading or writing a FIFO. formula_0 In INCR bursts, on the other hand, each beat has an address equal to the previous one plus the transfer size. This burst type is commonly used to read or write sequential memory areas. formula_1 WRAP bursts are similar to the INCR ones, as each transfer has an address equal to the previous one plus the transfer size. However, with WRAP bursts, if the address of the current beat reaches the "Higher Address boundary", it is reset to the "Wrap boundary": formula_2 with formula_3 Transactions. Reads. To start a read transaction, the initiator has to provide on the Read address channel: the address of the first beat (ARADDR) and the burst length, size and type (ARLEN, ARSIZE and ARBURST). Additionally, the other auxiliary signals, if present, are used to define more specific transfers. After the usual ARVALID/ARREADY handshake, the target has to provide on the Read data channel: the requested data (RDATA) and the status of each beat (RRESP), plus any other optional signals. Each beat of the target's response is done with an RVALID/RREADY handshake and, on the last transfer, the target has to assert RLAST to inform that no more beats will follow without a new read request. Writes. To start a write operation, the initiator has to provide both the address information and the data. The address information is provided over the Write address channel, in a similar manner to a read operation: the start address (AWADDR) and the burst length, size and type (AWLEN, AWSIZE and AWBURST) and, if present, all the optional signals. The initiator also has to provide the data related to the specified address(es) on the Write data channel: the data words themselves (WDATA), together with the write strobes (WSTRB) marking which bytes of each word are valid. As in the read path, on the last data word WLAST has to be asserted by the initiator. After the completion of both the transactions, the target has to send back to the initiator the status of the write over the Write response channel, by returning the result over the BRESP signal. AXI4-Lite. AXI4-Lite is a subset of the AXI4 protocol, providing a register-like structure with reduced features and complexity. Notable differences are that all bursts have a length of one beat, all data accesses use the full width of the data bus (32 or 64 bits), and exclusive accesses are not supported. AXI4-Lite removes part of the AXI4 signals but follows the AXI4 specification for the remaining ones. Being a subset of AXI4, AXI4-Lite transactions are fully compatible with AXI4 devices, permitting the interoperability between AXI4-Lite initiators and AXI4 targets without additional conversion logic. AXI-Stream. 
AXI4-Stream is a simplified, lightweight bus protocol designed specifically for high-speed streaming data applications. It supports only unidirectional data flow, without the need for addressing or complex handshaking. An AXI Stream is similar to an AXI write data channel, with some important differences in how the data is arranged. The AXI5 Stream protocol introduces wake-up signaling and signal protection using parity. A single AXI Stream transmitter can drive multiple streams, which may be interleaved, but reordering is not permitted. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
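As an illustration of the burst addressing rules given earlier (formula_0, formula_1 and formula_2), the per-beat addresses can be generated with the short sketch below. This is not code from the AMBA specification; note also that the real protocol additionally restricts WRAP bursts to start addresses aligned to the transfer size and to lengths of 2, 4, 8 or 16 beats, checks that are omitted here:

```python
# Minimal sketch: per-beat address generation for AXI FIXED, INCR and WRAP
# bursts. "size" is the number of bytes per beat (the transfer size) and
# "length" is the number of beats in the burst.

def burst_addresses(start: int, size: int, length: int, burst_type: str) -> list[int]:
    if burst_type == "FIXED":
        return [start] * length                              # every beat uses the same address
    if burst_type == "INCR":
        return [start + i * size for i in range(length)]     # address advances by the transfer size
    if burst_type == "WRAP":
        total = size * length
        wrap_boundary = (start // total) * total             # floor(start / (size*length)) * (size*length)
        return [wrap_boundary + (start + i * size) % total for i in range(length)]
    raise ValueError(f"unknown burst type: {burst_type}")

if __name__ == "__main__":
    # Hypothetical 4-beat burst of 4-byte transfers starting at 0x38.
    for kind in ("FIXED", "INCR", "WRAP"):
        addrs = burst_addresses(start=0x38, size=4, length=4, burst_type=kind)
        print(kind, [hex(a) for a in addrs])
    # WRAP yields 0x38, 0x3c, 0x30, 0x34: the address wraps back to the boundary.
```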
[ { "math_id": 0, "text": "\\mathit{Address} = \\mathit{StartAddress}" }, { "math_id": 1, "text": "\\mathit{Address}_i = \\mathit{StartAddress} + \\mathit{i} \\cdot \\mathit{TransferSize}" }, { "math_id": 2, "text": "\\mathit{Address}_i = \\mathit{WrapBoundary} + (\\mathit{StartAddress} + \\mathit{i} \\cdot \\mathit{TransferSize})\\ \\mathrm{mod}\\ (\\mathit{BurstLength} \\cdot \\mathit{TransferSize})" }, { "math_id": 3, "text": "\\mathit{WrapBoundary} = \\left\\lfloor \\frac{\\mathit{StartAddress}}{\\mathit{NumberBytes} \\cdot \\mathit{BurstLength}} \\right\\rfloor \\cdot (\\mathit{NumberBytes} \\cdot \\mathit{BurstLength})" } ]
https://en.wikipedia.org/wiki?curid=61209678
61212319
Random-fuzzy variable
In measurements, the result obtained can suffer from two types of uncertainty. The first is the random uncertainty, which is due to noise in the process and in the measurement. The second contribution is due to the systematic uncertainty, which may be present in the measuring instrument. Systematic errors, if detected, can be easily compensated for, as they are usually constant throughout the measurement process as long as the measuring instrument and the measurement process are not changed. However, while using the instrument it cannot be accurately known whether there is a systematic error and, if there is, how large it is. Hence, systematic uncertainty can be considered a contribution of a fuzzy nature. This systematic error can be approximately modeled based on our past data about the measuring instrument and the process. Statistical methods can be used to calculate the total uncertainty from both systematic and random contributions in a measurement, but their computational complexity is very high and hence they are not desirable. L. A. Zadeh introduced the concepts of fuzzy variables and fuzzy sets. Fuzzy variables are based on the theory of possibility and hence are possibility distributions. This makes them suitable for handling any type of uncertainty, i.e., both systematic and random contributions to the total uncertainty. A random-fuzzy variable (RFV) is a type-2 fuzzy variable, defined using the mathematical theory of possibility, used to represent the entire information associated with a measurement result. It has an internal possibility distribution and an external possibility distribution called membership functions. The internal distribution represents the uncertainty contributions due to the systematic uncertainty, while the bounds of the RFV are due to the random contributions. The external distribution gives the uncertainty bounds from all contributions. Definition. A random-fuzzy variable (RFV) is defined as a type-2 fuzzy variable which satisfies the following conditions: An RFV can be seen in the figure. The external membership function is the distribution in blue and the internal membership function is the distribution in red. Both the membership functions are possibility distributions. Both the internal and external membership functions have a unitary value of possibility only in the rectangular part of the RFV. So, all three conditions have been satisfied. If there are only systematic errors in the measurement, then the RFV simply becomes a fuzzy variable which consists of just the internal membership function. Similarly, if there is no systematic error, then the RFV becomes a fuzzy variable with just the random contributions and therefore is just the possibility distribution of the random contributions. Construction. A random-fuzzy variable can be constructed using an internal possibility distribution ("rinternal") and a random possibility distribution ("rrandom"). The random distribution ("rrandom"). "rrandom" is the possibility distribution of the random contributions to the uncertainty. Any measurement instrument or process suffers from random error contributions due to intrinsic noise or other effects. These are completely random in nature and, when several random contributions are combined, follow a normal probability distribution according to the central limit theorem. There can also be random contributions from other probability distributions, such as a uniform distribution, a gamma distribution, and so on. The probability distribution can be modeled from the measurement data. 
Then, the probability distribution can be used to model an equivalent possibility distribution using the maximally specific probability-possibility transformation. Some common probability distributions and the corresponding possibility distributions can be seen in the figures. The internal distribution ("rinternal"). "rinternal" is the internal distribution in the RFV, which is the possibility distribution of the systematic contribution to the total uncertainty. This distribution can be built based on the information that is available about the measuring instrument and the process. The largest possible distribution is the uniform or rectangular possibility distribution. This means that every value in the specified interval is equally possible. This represents the state of total ignorance according to the theory of evidence, i.e., a scenario in which there is a maximum lack of information. This distribution is used for the systematic error when we have absolutely no idea about the systematic error except that it belongs to a particular interval of values. This is quite common in measurements. In certain cases, however, it may be known that some values have a higher or lower degree of belief than other values. In this case, depending on the degrees of belief for the values, an appropriate possibility distribution can be constructed. The construction of the external distribution ("rexternal") and the RFV. After modeling the random and internal possibility distributions, the external membership function, "rexternal", of the RFV can be constructed by using the following equation: formula_0 where formula_1 is the mode of formula_2, which is the peak in the membership function of formula_3, and "Tmin" is the minimum triangular norm. The RFV can also be built from the internal and random distributions by considering the "α"-cuts of the two possibility distributions (PDs). An "α"-cut of a fuzzy variable F can be defined as formula_4 So, essentially an "α"-cut is the set of values for which the value of the membership function formula_5 of the fuzzy variable is greater than or equal to "α". So, this gives the upper and lower bounds of the fuzzy variable F for each "α"-cut. The "α"-cut of an RFV, however, has 4 specific bounds and is given by formula_6. formula_7 and formula_8 are the lower and upper bounds, respectively, of the external membership function ("rexternal"), which is a fuzzy variable on its own. formula_9 and formula_10 are the lower and upper bounds, respectively, of the internal membership function ("rinternal"), which is also a fuzzy variable on its own. To build the RFV, consider the "α"-cuts of the two PDs, i.e., "rrandom" and "rinternal", for the same value of "α". This gives the lower and upper bounds for the two "α"-cuts. Let them be formula_11 and formula_12 for the random and internal distributions, respectively. formula_11 can again be divided into two sub-intervals, formula_13 and formula_14, where formula_1 is the mode of the fuzzy variable. Then, the "α"-cut of the RFV for the same value of "α", formula_6, can be defined by formula_15 formula_16 formula_17 formula_18 Using the above equations, the "α"-cuts are calculated for every value of "α", which gives the final plot of the RFV. A random-fuzzy variable is capable of giving a complete picture of the random and systematic contributions to the total uncertainty from the "α"-cuts for any confidence level, since the confidence level is simply "1-α". 
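A minimal sketch of this α-cut construction (formula_15 to formula_18) is given below; the function name and the numerical bounds are illustrative assumptions, not values from the literature. The construction widens the internal bounds on each side by the spread of the random distribution around its mode:

```python
# Minimal sketch: building the alpha-cut of an RFV from the alpha-cuts of the
# internal and random possibility distributions,
#   X_a = X_LI - (x* - X_LR),  X_b = X_LI,  X_c = X_UI,  X_d = X_UI + (X_UR - x*).

def rfv_alpha_cut(random_cut, internal_cut, mode):
    """random_cut = (X_LR, X_UR), internal_cut = (X_LI, X_UI), mode = x*.

    Returns the four bounds (X_a, X_b, X_c, X_d) of the RFV alpha-cut."""
    x_lr, x_ur = random_cut
    x_li, x_ui = internal_cut
    x_a = x_li - (mode - x_lr)   # widen the lower internal bound by the lower random spread
    x_b = x_li
    x_c = x_ui
    x_d = x_ui + (x_ur - mode)   # widen the upper internal bound by the upper random spread
    return x_a, x_b, x_c, x_d

if __name__ == "__main__":
    # Hypothetical measurement: internal (systematic) alpha-cut [9.8, 10.2],
    # random alpha-cut [9.7, 10.3] with mode x* = 10.0, at some level alpha.
    print(rfv_alpha_cut(random_cut=(9.7, 10.3), internal_cut=(9.8, 10.2), mode=10.0))
    # -> (9.5, 9.8, 10.2, 10.5): the external bounds contain the internal ones.
```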
An example of the construction of the corresponding external membership function ("rexternal") and the RFV from a random PD and an internal PD can be seen in the following figure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r_{\textit{external}}(x)=\sup_{x^\prime}T_{min}[r_{\textit{random}}(x-x^\prime+x^{*}), r_{\textit{internal}}(x^\prime)] " }, { "math_id": 1, "text": "x^{*}" }, { "math_id": 2, "text": "r_{\textit{random}}" }, { "math_id": 3, "text": "r_{random}" }, { "math_id": 4, "text": "F_{\alpha } = \{a\,\vert\,\mu _{\rm F} (a) \geq \alpha\}\qquad\textit{where}\qquad0\leq\alpha\leq1" }, { "math_id": 5, "text": "\mu _{\rm F} (a)" }, { "math_id": 6, "text": "RFV^{\alpha} = [X_{a}^{\alpha}, X_{b}^{\alpha}, X_{c}^{\alpha}, X_{d}^{\alpha}]" }, { "math_id": 7, "text": "X_{a}^{\alpha}" }, { "math_id": 8, "text": "X_{d}^{\alpha}" }, { "math_id": 9, "text": "X_{b}^{\alpha}" }, { "math_id": 10, "text": "X_{c}^{\alpha}" }, { "math_id": 11, "text": "[X_{LR}^{\alpha}, X_{UR}^{\alpha}]" }, { "math_id": 12, "text": "[X_{LI}^{\alpha}, X_{UI}^{\alpha}]" }, { "math_id": 13, "text": "[X_{LR}^{\alpha}, x^{*}]" }, { "math_id": 14, "text": "[x^{*}, X_{UR}^{\alpha}]" }, { "math_id": 15, "text": "X_{a}^{\alpha} = X_{LI}^{\alpha}-(x^{*}-X_{LR}^{\alpha})" }, { "math_id": 16, "text": "X_{b}^{\alpha} = X_{LI}^{\alpha}" }, { "math_id": 17, "text": "X_{c}^{\alpha} = X_{UI}^{\alpha}" }, { "math_id": 18, "text": "X_{d}^{\alpha} = X_{UI}^{\alpha}+(X_{UR}^{\alpha}-x^{*})" } ]
https://en.wikipedia.org/wiki?curid=61212319
61213
Laurent series
Power series with negative powers In mathematics, the Laurent series of a complex function formula_1 is a representation of that function as a power series which includes terms of negative degree. It may be used to express complex functions in cases where a Taylor series expansion cannot be applied. The Laurent series was named after and first published by Pierre Alphonse Laurent in 1843. Karl Weierstrass had previously described it in a paper written in 1841 but not published until 1894. Definition. The Laurent series for a complex function formula_1 about a point formula_0 is given by formula_2 where formula_3 and formula_0 are constants, with formula_3 defined by a contour integral that generalizes Cauchy's integral formula: formula_4 The path of integration formula_5 is counterclockwise around a Jordan curve enclosing formula_0 and lying in an annulus formula_6 in which formula_1 is holomorphic (analytic). The expansion for formula_1 will then be valid anywhere inside the annulus. The annulus is shown in red in the figure on the right, along with an example of a suitable path of integration labeled formula_5. If we take formula_5 to be a circle formula_7, where formula_8, this just amounts to computing the complex Fourier coefficients of the restriction of formula_9 to formula_5. The fact that these integrals are unchanged by a deformation of the contour formula_5 is an immediate consequence of Green's theorem. One may also obtain the Laurent series for a complex function formula_1 at formula_10. However, this is the same as when formula_11 (see the example below). In practice, the above integral formula may not offer the most practical method for computing the coefficients formula_3 for a given function formula_1; instead, one often pieces together the Laurent series by combining known Taylor expansions. Because the Laurent expansion of a function is unique whenever it exists, any expression of this form that equals the given function formula_1 in some annulus must actually be the Laurent expansion of formula_1. Convergent Laurent series. Laurent series with complex coefficients are an important tool in complex analysis, especially to investigate the behavior of functions near singularities. Consider for instance the function formula_12 with formula_13. As a real function, it is infinitely differentiable everywhere; as a complex function, however, it is not differentiable at formula_14. By replacing formula_15 with formula_16 in the power series for the exponential function, we obtain its Laurent series, which converges and is equal to formula_17 for all complex numbers formula_15 except at the singularity formula_14. The graph opposite shows formula_18 in black and its Laurent approximations formula_19 for formula_20 = 1, 2, 3, 4, 5, 6, 7 and 50. As formula_21, the approximation becomes exact for all (complex) numbers formula_15 except at the singularity formula_22. More generally, Laurent series can be used to express holomorphic functions defined on an annulus, much as power series are used to express holomorphic functions defined on a disc. Suppose formula_23 is a given Laurent series with complex coefficients formula_3 and a complex center formula_0. Then there exists a unique inner radius formula_24 and outer radius formula_25 such that the Laurent series converges on the open annulus formula_26: both its positive-degree part and its negative-degree part converge there, uniformly on compact subsets, and define a holomorphic function on that annulus, while outside the closed annulus the series diverges. It is possible that formula_24 may be zero or formula_25 may be infinite; at the other extreme, it is not necessarily true that formula_24 is less than formula_25. 
These radii can be computed as follows: formula_27 We take formula_25 to be infinite when this latter lim sup is zero. Conversely, if we start with an annulus of the form formula_26 and a holomorphic function formula_1 defined on formula_6, then there always exists a unique Laurent series with center formula_0 which converges (at least) on formula_6 and represents the function formula_1. As an example, consider the following rational function, along with its partial fraction expansion: formula_28 This function has singularities at formula_29 and formula_30, where the denominator of the expression is zero and the expression is therefore undefined. A Taylor series about formula_31 (which yields a power series) will only converge in a disc of radius 1, since it "hits" the singularity at 1. However, there are three possible Laurent expansions about 0, depending on the radius of formula_32: On the disc where formula_32 has modulus less than 1, the expansion contains only non-negative powers and coincides with the Taylor series, formula_33 It is obtained by applying the geometric series formula_34, valid for formula_35, to both partial fractions. On the annulus formula_36, between the two singularities, the term with the pole at formula_29 must instead be expanded in negative powers, using formula_38 which is valid for formula_39, giving formula_37 Finally, on the outer region formula_40, both terms are expanded in negative powers, yielding formula_42 This last expansion is also the Laurent series at formula_41. The case formula_45; i.e., a holomorphic function formula_1 which may be undefined at a single point formula_0, is especially important. The coefficient formula_46 of the Laurent expansion of such a function is called the residue of formula_1 at the singularity formula_0; it plays a prominent role in the residue theorem. For an example of this, consider formula_47 This function is holomorphic everywhere except at formula_31. To determine the Laurent expansion about formula_48, we use our knowledge of the Taylor series of the exponential function: formula_49 We find that the residue is 2. An example of an expansion about formula_10: formula_50 Uniqueness. Suppose a function formula_1 holomorphic on the annulus formula_51 has two Laurent series: formula_52 Multiply both sides by formula_53, where k is an arbitrary integer, and integrate on a path γ inside the annulus: formula_54 The series converges uniformly on formula_55, where "ε" is a positive number small enough for "γ" to be contained in the constricted closed annulus, so the integration and summation can be interchanged. Substituting the identity formula_56 into the summation yields formula_57 Hence the Laurent series is unique. Laurent polynomials. A Laurent polynomial is a Laurent series in which only finitely many coefficients are non-zero. Laurent polynomials differ from ordinary polynomials in that they may have terms of negative degree. Principal part. The principal part of a Laurent series is the series of terms with negative degree, that is formula_58 If the principal part of formula_9 is a finite sum, then formula_9 has a pole at formula_0 of order equal to (negative) the degree of the highest term; on the other hand, if formula_9 has an essential singularity at formula_0, the principal part is an infinite sum (meaning it has infinitely many non-zero terms). If the inner radius of convergence of the Laurent series for formula_9 is 0, then formula_9 has an essential singularity at formula_0 if and only if the principal part is an infinite sum, and has a pole otherwise. If the inner radius of convergence is positive, formula_9 may have infinitely many negative terms but still be regular at formula_0, as in the example above, in which case it is represented by a "different" Laurent series in a disk about formula_0. Laurent series with only finitely many negative terms are well-behaved—they are a power series divided by formula_59, and can be analyzed similarly—while Laurent series with infinitely many negative terms have complicated behavior on the inner circle of convergence. Multiplication and sum. Laurent series cannot in general be multiplied. 
Algebraically, the expression for the terms of the product may involve infinite sums which need not converge (one cannot take the convolution of integer sequences). Geometrically, the two Laurent series may have non-overlapping annuli of convergence. Two Laurent series with only "finitely" many negative terms can be multiplied: algebraically, the sums are all finite; geometrically, these have poles at formula_0, and inner radius of convergence 0, so they both converge on an overlapping annulus. Thus when defining formal Laurent series, one requires Laurent series with only finitely many negative terms. Similarly, the sum of two convergent Laurent series need not converge, though it is always defined formally, but the sum of two bounded below Laurent series (or any Laurent series on a punctured disk) has a non-empty annulus of convergence. Also, for a field formula_60, by the sum and multiplication defined above, formal Laurent series would form a field formula_61 which is also the field of fractions of the ring formula_62 of formal power series. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
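The coefficient formula formula_4 also lends itself to direct numerical evaluation. The following sketch (not part of the original article; the sampling parameters are arbitrary) approximates a few Laurent coefficients of formula_47 about 0 by sampling the integrand on a circle of radius 1 inside the annulus of convergence, recovering the coefficients of formula_49 and in particular the residue 2:

```python
# Minimal sketch: estimating Laurent coefficients numerically from
# a_n = (1/(2*pi*i)) * integral of f(z)/(z - c)^(n+1) dz over a circle
# inside the annulus of convergence.
import numpy as np

def laurent_coefficient(f, n, center=0.0, radius=1.0, samples=4096):
    """Approximate a_n for f about `center`, integrating over |z - center| = radius."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    # With z - c = r*e^(i*theta) and dz = i*(z - c)*d(theta), the integral reduces to
    # a_n = (1/(2*pi)) * integral of f(z) * (r*e^(i*theta))^(-n) d(theta).
    values = f(z) * (radius * np.exp(1j * theta)) ** (-n)
    return values.mean()

if __name__ == "__main__":
    f = lambda z: np.exp(z) / z + np.exp(1.0 / z)
    for n in (-3, -2, -1, 0, 1, 2):
        a_n = laurent_coefficient(f, n)
        print(f"a_{n} = {a_n.real:.6f}  (imaginary part ~ {abs(a_n.imag):.1e})")
    # Expected: a_-3 = 1/6, a_-2 = 1/2, a_-1 = 2 (the residue), a_0 = 2, a_1 = 1/2, a_2 = 1/6.
```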
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "f(z)" }, { "math_id": 2, "text": "f(z) = \\sum_{n=-\\infty}^\\infty a_n(z-c)^n," }, { "math_id": 3, "text": "a_n" }, { "math_id": 4, "text": "a_n =\\frac{1}{2\\pi i}\\oint_\\gamma \\frac{f(z)}{(z-c)^{n+1}} \\, dz." }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": " |z-c| = \\varrho" }, { "math_id": 8, "text": "r < \\varrho < R" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": " z = \\infty" }, { "math_id": 11, "text": " R \\rightarrow \\infty" }, { "math_id": 12, "text": "f(x) = e^{-1/x^2}" }, { "math_id": 13, "text": "f(0) = 0" }, { "math_id": 14, "text": "x = 0" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "-1/x^2" }, { "math_id": 17, "text": "f(x)" }, { "math_id": 18, "text": "e^{-1/x^2}" }, { "math_id": 19, "text": "\\sum_{n=0}^N (-1)^n \\, {x^{-2n}\\over n!}" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": "N\\to\\infty" }, { "math_id": 22, "text": "x=0" }, { "math_id": 23, "text": "\\sum_{n=-\\infty}^\\infty a_n ( z - c )^n" }, { "math_id": 24, "text": "r" }, { "math_id": 25, "text": "R" }, { "math_id": 26, "text": "A=\\{z:r<|z-c|<R\\}" }, { "math_id": 27, "text": "\\begin{align}\n r &= \\limsup_{n\\to\\infty} |a_{-n}|^\\frac{1}{n}, \\\\\n {1 \\over R} &= \\limsup_{n\\to\\infty} |a_n|^\\frac{1}{n}.\n\\end{align}" }, { "math_id": 28, "text": "\n f(z) = \\frac{1}{(z - 1)(z - 2i)} \n = \\frac{1 + 2i}{5}\\left(\\frac{1}{z - 1} - \\frac{1}{z - 2i}\\right)\n." }, { "math_id": 29, "text": "z=1" }, { "math_id": 30, "text": "z=2i" }, { "math_id": 31, "text": "z=0" }, { "math_id": 32, "text": "z" }, { "math_id": 33, "text": "f(z) = \\frac{1 + 2i}{5} \\sum_{n=0}^\\infty \\left(\\frac{1}{(2i)^{n + 1}} - 1\\right)z^n." }, { "math_id": 34, "text": "\\frac{1}{z-a} = - \\frac{1}{a} \\sum_{n=0}^\\infty \\left( \\tfrac{z}{a} \\right)^n " }, { "math_id": 35, "text": "|z| < |a| " }, { "math_id": 36, "text": "1<z<2" }, { "math_id": 37, "text": "f(z) = \\frac{1 + 2i}{5} \\left(\\sum_{n=1}^\\infty z^{-n} + \\sum_{n=0}^\\infty \\frac{1}{(2i)^{n + 1}} z^n\\right)." }, { "math_id": 38, "text": " \\frac{1}{z - a} = \\frac{1}{z}\\sum_{n=0}^\\infty \\left(\\frac{a}{z}\\right)^n " }, { "math_id": 39, "text": "|z| > |a|" }, { "math_id": 40, "text": "2<z<\\infty" }, { "math_id": 41, "text": "z = \\infty" }, { "math_id": 42, "text": " f(z) = \\frac{1 + 2i}{5} \\sum_{n=1}^\\infty \\left(1 - (2i)^{n - 1}\\right) z^{-n}." }, { "math_id": 43, "text": "(x-1)(x-2i)" }, { "math_id": 44, "text": "x^{-n}" }, { "math_id": 45, "text": "r=0" }, { "math_id": 46, "text": "a_{-1}" }, { "math_id": 47, "text": "f(z) = {e^z \\over z} + e^{{1}/{z}}." }, { "math_id": 48, "text": "c=0" }, { "math_id": 49, "text": "f(z) = \\cdots + \\left( {1 \\over 3!} \\right) z^{-3} + \\left( {1 \\over 2!} \\right) z^{-2} + 2z^{-1} + 2 + \\left( {1 \\over 2!} \\right) z + \\left( {1 \\over 3!} \\right) z^2 + \\left( {1 \\over 4!} \\right) z^3 + \\cdots." }, { "math_id": 50, "text": " f(z) = \\sqrt{1 + z^2} - z = z\\left(\\sqrt{1 + \\frac{1}{z^2}} - 1\\right) = z\\left(\\frac{1}{2z^2} - \\frac{1}{8z^4} + \\frac{1}{16z^6} - \\cdots\\right) = \\frac{1}{2z} - \\frac{1}{8z^3} + \\frac{1}{16z^5} - \\cdots." }, { "math_id": 51, "text": "r<|z-c|<R" }, { "math_id": 52, "text": "f(z) = \\sum_{n=-\\infty}^{\\infty} a_{n} (z-c)^n = \\sum_{n=-\\infty}^{\\infty} b_{n} (z-c)^n." 
}, { "math_id": 53, "text": "(z-c)^{-k-1}" }, { "math_id": 54, "text": "\\oint_{\\gamma}\\,\\sum_{n=-\\infty}^{\\infty} a_{n} (z-c)^{n-k-1}\\,dz = \\oint_{\\gamma}\\,\\sum_{n=-\\infty}^{\\infty} b_{n} (z-c)^{n-k-1}\\,dz." }, { "math_id": 55, "text": "r+\\varepsilon \\leq |z-c| \\leq R-\\varepsilon" }, { "math_id": 56, "text": "\\oint_{\\gamma}\\,(z-c)^{n-k-1}\\,dz = 2\\pi i\\delta_{nk}" }, { "math_id": 57, "text": "a_k = b_k." }, { "math_id": 58, "text": "\\sum_{k=-\\infty}^{-1} a_k (z-c)^k." }, { "math_id": 59, "text": "z^k" }, { "math_id": 60, "text": "F" }, { "math_id": 61, "text": "F((x))" }, { "math_id": 62, "text": "F[[x]]" }, { "math_id": 63, "text": "z=e^{\\pi i w}" } ]
https://en.wikipedia.org/wiki?curid=61213
6122
Continuous function
Mathematical function with no sudden changes In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as "discontinuities". More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity. Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology. A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity. As an example, the function "H"("t") denoting the height of a growing flower at time t would be considered continuous. In contrast, the function "M"("t") denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn. History. A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of formula_0 as follows: an infinitely small increment formula_1 of the independent variable "x" always produces an infinitely small change formula_2 of the dependent variable "y" (see e.g. "Cours d'Analyse", p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work was not published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point "c" unless it was defined at and on both sides of "c", but Édouard Goursat allowed the function to be defined only at and on one side of "c", and Camille Jordan allowed it even if the function was defined only at "c". All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854. Real functions. Definition. A real function, that is, a function from real numbers to real numbers, can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below. Continuity of real functions is usually defined in terms of limits. A function "f" with variable x is "continuous at" the real number c if the limit of formula_3, as x tends to c, is equal to formula_4 There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain. 
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every point of the interval. A function that is continuous on the interval formula_5 (the whole real line) is often called simply a continuous function; one also says that such a function is "continuous everywhere". For example, all polynomial functions are continuous everywhere. A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function formula_6 is continuous on its whole domain, which is the closed interval formula_7 Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function formula_8 and the tangent function formula_9 When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous. A partial function is "discontinuous" at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions formula_10 and formula_11 are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a "discontinuity". Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above. Let formula_12 be a function defined on a subset formula_13 of the set formula_14 of real numbers. This subset formula_13 is the domain of "f". Some possible choices include the whole set formula_14 of real numbers, a closed interval, or an open interval with endpoints formula_19 and formula_20. In the case of the domain formula_13 being defined as an open interval, formula_19 and formula_20 do not belong to formula_13, and the values of formula_21 and formula_22 do not matter for continuity on formula_13. Definition in terms of limits of functions. The function "f" is "continuous at some point" "c" of its domain if the limit of formula_3, as "x" approaches "c" through the domain of "f", exists and is equal to formula_4 In mathematical notation, this is written as formula_23 In detail this means three conditions: first, "f" has to be defined at "c" (guaranteed by the requirement that "c" is in the domain of "f"). Second, the limit of that equation has to exist. Third, the value of this limit must equal formula_4 Definition in terms of neighborhoods. A neighborhood of a point "c" is a set that contains, at least, all points within some fixed distance of "c". Intuitively, a function is continuous at a point "c" if the range of "f" over the neighborhood of "c" shrinks to a single point formula_24 as the width of the neighborhood around "c" shrinks to zero. 
More precisely, a function "f" is continuous at a point "c" of its domain if, for any neighborhood formula_25 there is a neighborhood formula_26 in its domain such that formula_27 whenever formula_28 As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous. Definition in terms of limits of sequences. One can instead require that for any sequence formula_29 of points in the domain which converges to "c", the corresponding sequence formula_30 converges to formula_4 In mathematical notation, formula_31 Weierstrass and Jordan definitions (epsilon–delta) of continuous functions. Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function formula_32 as above and an element formula_33 of the domain formula_13, formula_34 is said to be continuous at the point formula_33 when the following holds: For any positive real number formula_35, however small, there exists some positive real number formula_36 such that for all formula_37 in the domain of formula_34 with formula_38 the value of formula_39 satisfies formula_40 Alternatively written, continuity of formula_32 at formula_41 means that for every formula_35 there exists a formula_36 such that for all formula_42: formula_43 More intuitively, we can say that if we want to get all the formula_39 values to stay in some small neighborhood around formula_44 we need to choose a small enough neighborhood for the formula_37 values around formula_45 If we can do that no matter how small the formula_46 neighborhood is, then formula_34 is continuous at formula_45 In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology. Weierstrass had required that the interval formula_47 be entirely within the domain formula_13, but Jordan removed that restriction. Definition in terms of control of the remainder. In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity. A function formula_48 is called a control function if it is non-decreasing and its infimum over the positive real numbers is zero. A function formula_50 is "C"-continuous at formula_33 if there exists a neighbourhood formula_51 such that formula_52 A function is continuous in formula_33 if it is "C"-continuous for some control function "C". This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions formula_53 a function is formula_53-continuous if it is formula_54-continuous for some formula_55 For example, the Lipschitz and Hölder continuous functions of exponent α below are defined by the set of control functions formula_56 respectively formula_57 Definition using oscillation. Continuity can also be defined in terms of oscillation: a function "f" is continuous at a point formula_33 if and only if its oscillation at that point is zero; in symbols, formula_58 A benefit of this definition is that it quantifies discontinuity: the oscillation measures how much the function is discontinuous at a point. 
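As a rough numerical illustration of the oscillation criterion (a sketch only, since sampling on a grid merely approximates the supremum and infimum), one can tabulate sup f − inf f over shrinking neighbourhoods of a point: the values tend to 0 at a point of continuity, and to a positive number, here 2 for the sign function at 0, at a discontinuity:

```python
# Minimal sketch: approximating the oscillation of a function at a point by
# evaluating sup f - inf f on sampled neighbourhoods of decreasing width.
import numpy as np

def oscillation_profile(f, c, deltas=(1e-1, 1e-2, 1e-3, 1e-4), samples=10001):
    """Return (delta, sup f - inf f on (c - delta, c + delta)) for each delta;
    the oscillation at c is the limit of these values as delta -> 0."""
    profile = []
    for delta in deltas:
        x = np.linspace(c - delta, c + delta, samples)
        values = f(x)
        profile.append((delta, float(values.max() - values.min())))
    return profile

if __name__ == "__main__":
    print("sign at 0:", oscillation_profile(np.sign, 0.0))          # stays near 2 -> discontinuous
    print("x**2 at 0:", oscillation_profile(lambda x: x**2, 0.0))   # shrinks to 0 -> continuous
```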
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than formula_59 (hence a formula_60 set) – and gives a rapid proof of one direction of the Lebesgue integrability condition. The oscillation is equivalent to the formula_61 definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given formula_62 there is no formula_63 that satisfies the formula_61 definition, then the oscillation is at least formula_64 and conversely if for every formula_59 there is a desired formula_65 the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space. Definition using the hyperreals. Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see "Cours d'analyse", page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows. &lt;templatestyles src="Block indent/styles.css"/&gt;A real-valued function "f" is continuous at x if its natural extension to the hyperreals has the property that for all infinitesimal "dx", formula_66 is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity. Construction of continuous functions. Checking the continuity of a given function can be simplified by checking one of the above defining properties for the building blocks of the given function. It is straightforward to show that the sum of two functions, continuous on some domain, is also continuous on this domain. Given formula_67 then the sum of continuous functions formula_68 (defined by formula_69 for all formula_70) is continuous in formula_71 The same holds for the product of continuous functions, formula_72 is continuous in formula_71 Combining the above preservations of continuity and the continuity of constant functions and of the identity function formula_74 on formula_14, one arrives at the continuity of all polynomial functions on formula_14, such as formula_75 (pictured on the right). In the same way, it can be shown that the reciprocal of a continuous function formula_76 is continuous in formula_79 This implies that, excluding the roots of formula_80 the quotient of continuous functions formula_81 is also continuous on formula_84. For example, the function (pictured) formula_85 is defined for all real numbers formula_86 and is continuous at every such point. Thus, it is a continuous function. 
The question of continuity at formula_87 does not arise since formula_87 is not in the domain of formula_88 There is no continuous function formula_89 that agrees with formula_90 for all formula_91 Since the function sine is continuous on all reals, the sinc function formula_92 is defined and continuous for all real formula_93 However, unlike the previous example, "G" can be extended to a continuous function on all real numbers, by defining the value formula_94 to be 1, which is the limit of formula_95 when "x" approaches 0, i.e., formula_96 Thus, by setting formula_97 the sinc-function becomes a continuous function on all real numbers. The term removable singularity is used in such cases when (re)defining values of a function to coincide with the appropriate limits makes a function continuous at specific points. A more involved construction of continuous functions is the function composition. Given two continuous functions formula_98 their composition, denoted as formula_99 and defined by formula_100 is continuous. This construction allows stating, for example, that formula_101 is continuous for all formula_102 Examples of discontinuous functions. An example of a discontinuous function is the Heaviside step function formula_103, defined by formula_104 Pick for instance formula_105. Then there is no formula_63-neighborhood around formula_106, i.e. no open interval formula_107 with formula_108 that will force all the formula_109 values to be within the formula_59-neighborhood of formula_110, i.e. within formula_111. Intuitively, we can think of this type of discontinuity as a sudden jump in function values. Similarly, the signum or sign function formula_112 is discontinuous at formula_106 but continuous everywhere else. Yet another example: the function formula_113 is continuous everywhere apart from formula_106. Besides plausible continuities and discontinuities like those above, there are also functions with behavior often called pathological; for example, Thomae's function formula_114 is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers, formula_115 is nowhere continuous. Properties. A useful lemma. Let formula_39 be a function that is continuous at a point formula_116 and formula_117 be a value such that formula_118 Then formula_119 throughout some neighbourhood of formula_45 "Proof:" By the definition of continuity, take formula_120; then there exists formula_121 such that formula_122 Suppose there is a point in the neighbourhood formula_123 for which formula_124 then we have the contradiction formula_125 Intermediate value theorem. The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states: If the real-valued function "f" is continuous on the closed interval formula_126 and "k" is some number between formula_21 and formula_127 then there is some number formula_128 such that formula_129 For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m. As a consequence, if "f" is continuous on formula_130 and formula_21 and formula_22 differ in sign, then, at some point formula_128 formula_24 must equal zero. Extreme value theorem. 
The extreme value theorem states that if a function "f" is defined on a closed interval formula_130 (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists formula_131 with formula_132 for all formula_133 The same is true of the minimum of "f". These statements are not, in general, true if the function is defined on an open interval formula_134 (or any set that is not both closed and bounded), as, for example, the continuous function formula_135 defined on the open interval (0,1), does not attain a maximum, being unbounded above. Relation to differentiability and integrability. Every differentiable function formula_136 is continuous, as can be shown. The converse does not hold: for example, the absolute value function formula_137 is everywhere continuous. However, it is not differentiable at formula_106 (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable. The derivative "f′"("x") of a differentiable function "f"("x") need not be continuous. If "f′"("x") is continuous, "f"("x") is said to be "continuously differentiable". The set of such functions is denoted formula_138 More generally, the set of functions formula_139 (from an open interval (or open subset of formula_14) formula_140 to the reals) such that "f" is formula_141 times differentiable and such that the formula_141-th derivative of "f" is continuous is denoted formula_142 See differentiability class. In the field of computer graphics, properties related (but not identical) to formula_143 are sometimes called formula_144 (continuity of position), formula_145 (continuity of tangency), and formula_146 (continuity of curvature); see Smoothness of curves and surfaces. Every continuous function formula_147 is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows. Pointwise and uniform limits. Given a sequence formula_148 of functions such that the limit formula_149 exists for all formula_150, the resulting function formula_39 is referred to as the pointwise limit of the sequence of functions formula_151 The pointwise limit function need not be continuous, even if all functions formula_152 are continuous, as the animation at the right shows. However, "f" is continuous if all functions formula_152 are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous. Directional Continuity. Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, "f" is said to be right-continuous at the point "c" if the following holds: For any number formula_153 however small, there exists some number formula_36 such that for all "x" in the domain with formula_154 the value of formula_39 will satisfy formula_155 This is the same condition as continuous functions, except it is required to hold for "x" strictly larger than "c" only. Requiring it instead for all "x" with formula_156 yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous. Semicontinuity. 
A function "f" is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any formula_35 there exists some number formula_36 such that for all "x" in the domain with formula_157 the value of formula_39 satisfies formula_158 The reverse condition is upper semi-continuity. Continuous functions between metric spaces. The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set formula_159 equipped with a function (called metric) formula_160 that can be thought of as a measurement of the distance of any two elements in "X". Formally, the metric is a function formula_161 that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces formula_162 and formula_163 and a function formula_164 then formula_34 is continuous at the point formula_165 (with respect to the given metrics) if for any positive real number formula_35 there exists a positive real number formula_36 such that all formula_166 satisfying formula_167 will also satisfy formula_168 As in the case of real functions above, this is equivalent to the condition that for every sequence formula_169 in formula_159 with limit formula_170 we have formula_171 The latter condition can be weakened as follows: formula_34 is continuous at the point formula_172 if and only if for every convergent sequence formula_169 in formula_159 with limit formula_172, the sequence formula_173 is a Cauchy sequence, and formula_172 is in the domain of formula_34. The set of points at which a function between metric spaces is continuous is a formula_60 set – this follows from the formula_61 definition of continuity. This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator formula_174 between normed vector spaces formula_175 and formula_176 (which are vector spaces equipped with a compatible norm, denoted formula_177) is continuous if and only if it is bounded, that is, there is a constant formula_178 such that formula_179 for all formula_180 Uniform, Hölder and Lipschitz continuity. The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way formula_63 depends on formula_59 and "c" in the definition above. Intuitively, a function "f" as above is uniformly continuous if the formula_63 does not depend on the point "c". More precisely, it is required that for every real number formula_153 there exists formula_36 such that for every formula_181 with formula_182 we have that formula_183 Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space "X" is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces. A function is Hölder continuous with exponent α (a real number) if there is a constant "K" such that for all formula_184 the inequality formula_185 holds. Any Hölder continuous function is uniformly continuous. The particular case formula_186 is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant "K" such that the inequality formula_187 holds for any formula_188 The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations. Continuous functions between topological spaces. 
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set "X" together with a topology on "X", which is a set of subsets of "X" satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of "X" (with respect to the topology). A function formula_164 between two topological spaces "X" and "Y" is continuous if for every open set formula_189 the inverse image formula_190 is an open subset of "X". That is, "f" is a function between the sets "X" and "Y" (not on the elements of the topology formula_191), but the continuity of "f" depends on the topologies used on "X" and "Y". This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in "Y" are closed in "X". An extreme example: if a set "X" is given the discrete topology (in which every subset is open), all functions formula_192 to any topological space "T" are continuous. On the other hand, if "X" is equipped with the indiscrete topology (in which the only open subsets are the empty set and "X") and the space "T" is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous. Continuity at a point. The translation in the language of neighborhoods of the formula_193-definition of continuity leads to the following definition of the continuity at a point: a function formula_194 is continuous at a point formula_166 if and only if for every neighborhood "V" of formula_39 in "Y" there is a neighborhood "U" of formula_37 such that formula_196 This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images. Also, as every set that contains a neighborhood is also a neighborhood, and formula_195 is the largest subset U of X such that formula_196 this definition may be simplified into: a function formula_194 is continuous at formula_166 if and only if formula_195 is a neighborhood of formula_37 for every neighborhood "V" of formula_39 in "Y". As an open set is a set that is a neighborhood of all its points, a function formula_194 is continuous at every point of "X" if and only if it is a continuous function. If "X" and "Y" are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at "x" and "f"("x") instead of all neighborhoods. This gives back the above formula_61 definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that "f" is continuous at "a" if and only if the limit of "f" as "x" approaches "a" is "f"("a"). At an isolated point, every function is continuous. Given formula_197 a map formula_194 is continuous at formula_37 if and only if whenever formula_198 is a filter on formula_159 that converges to formula_37 in formula_199 which is expressed by writing formula_200 then necessarily formula_201 in formula_202 If formula_203 denotes the neighborhood filter at formula_37 then formula_194 is continuous at formula_37 if and only if formula_204 in formula_202 Moreover, this happens if and only if the prefilter formula_205 is a filter base for the neighborhood filter of formula_39 in formula_202 Alternative definitions. Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function. Sequences and nets.
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function formula_194 is sequentially continuous if whenever a sequence formula_169 in formula_159 converges to a limit formula_206 the sequence formula_173 converges to formula_207 Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If formula_159 is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if formula_159 is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions. For instance, consider the case of real-valued functions of one real variable: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — A function formula_208 is continuous at formula_33 if and only if it is sequentially continuous at that point. Closure operator and interior operator definitions. In terms of the interior operator, a function formula_194 between topological spaces is continuous if and only if for every subset formula_209 formula_210 In terms of the closure operator, formula_194 is continuous if and only if for every subset formula_211 formula_212 That is to say, given any element formula_166 that belongs to the closure of a subset formula_211 formula_39 necessarily belongs to the closure of formula_213 in formula_202 If we declare that a point formula_37 is close to a subset formula_214 if formula_215 then this terminology allows for a plain English description of continuity: formula_34 is continuous if and only if for every subset formula_211 formula_34 maps points that are close to formula_216 to points that are close to formula_217 Similarly, formula_34 is continuous at a fixed given point formula_166 if and only if whenever formula_37 is close to a subset formula_211 then formula_39 is close to formula_217 Instead of specifying topological spaces by their open subsets, any topology on formula_159 can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset formula_216 of a topological space formula_159 to its topological closure formula_218 satisfies the Kuratowski closure axioms. 
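The closure-operator characterization stated above can be checked by hand on small finite spaces. The following Python sketch is purely illustrative (the two-point Sierpinski-style spaces, the sample maps and all helper names are invented for this example, not taken from the article's sources); it compares, for two sample maps, the open-preimage definition of continuity with the condition that the image of every closure is contained in the closure of the image.

from itertools import chain, combinations

# Hypothetical example: two copies of the Sierpinski space, given by their open sets.
X = {0, 1}
TX = [set(), {0}, {0, 1}]          # topology on X: {0} is open, {1} is not
Y = {0, 1}
TY = [set(), {0}, {0, 1}]          # same topology on Y

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def closure(A, space, topology):
    # cl A = intersection of all closed sets (complements of open sets) containing A
    closed = [space - U for U in topology]
    return set.intersection(*[C for C in closed if A <= C])

def continuous(f, X, TX, TY):
    # open-preimage definition of continuity
    return all({x for x in X if f[x] in V} in [set(U) for U in TX] for V in TY)

def preserves_closure(f, X, TX, Y, TY):
    # f(cl A) must be contained in cl(f(A)) for every subset A of X
    return all({f[x] for x in closure(A, X, TX)} <= closure({f[x] for x in A}, Y, TY)
               for A in subsets(X))

identity = {0: 0, 1: 1}       # continuous for this topology
swap     = {0: 1, 1: 0}       # not continuous for this topology

for name, f in [("identity", identity), ("swap", swap)]:
    print(name, continuous(f, X, TX, TY), preserves_closure(f, X, TX, Y, TY))

For both maps the two booleans agree, as the characterization predicts: the identity map passes both tests and the swap map fails both.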
Conversely, for any closure operator formula_219 there exists a unique topology formula_220 on formula_159 (specifically, formula_221) such that for every subset formula_211 formula_222 is equal to the topological closure formula_223 of formula_216 in formula_224 If the sets formula_159 and formula_225 are each associated with closure operators (both denoted by formula_226) then a map formula_194 is continuous if and only if formula_227 for every subset formula_228 Similarly, the map that sends a subset formula_216 of formula_159 to its topological interior formula_229 defines an interior operator. Conversely, any interior operator formula_230 induces a unique topology formula_220 on formula_159 (specifically, formula_231) such that for every formula_211 formula_232 is equal to the topological interior formula_233 of formula_216 in formula_224 If the sets formula_159 and formula_225 are each associated with interior operators (both denoted by formula_234) then a map formula_194 is continuous if and only if formula_235 for every subset formula_236 Filters and prefilters. Continuity can also be characterized in terms of filters. A function formula_194 is continuous if and only if whenever a filter formula_198 on formula_159 converges in formula_159 to a point formula_197 then the prefilter formula_237 converges in formula_225 to formula_207 This characterization remains true if the word "filter" is replaced by "prefilter." Properties. If formula_194 and formula_238 are continuous, then so is the composition formula_239 If formula_194 is continuous and formula_159 is compact (respectively connected, path-connected, Lindelöf, or separable), then its image in formula_225 is compact (respectively connected, path-connected, Lindelöf, or separable). The possible topologies on a fixed set "X" are partially ordered: a topology formula_240 is said to be coarser than another topology formula_241 (notation: formula_242) if every open subset with respect to formula_240 is also open with respect to formula_243 Then, the identity map formula_244 is continuous if and only if formula_242 (see also comparison of topologies). More generally, a continuous function formula_245 stays continuous if the topology formula_246 is replaced by a coarser topology and/or formula_247 is replaced by a finer topology. Homeomorphisms. Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map "f" has an inverse function, that inverse is continuous, and if a continuous map "g" has an inverse, that inverse is open. Given a bijective function "f" between two topological spaces, the inverse function formula_248 need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. Defining topologies via continuous functions. Given a function formula_249 where "X" is a topological space and "S" is a set (without a specified topology), the final topology on "S" is defined by letting the open sets of "S" be those subsets "A" of "S" for which formula_250 is open in "X". If "S" has an existing topology, "f" is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on "S". Thus, the final topology is the finest topology on "S" that makes "f" continuous. If "f" is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by "f".
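As a concrete illustration of the final topology just described, the following sketch (the three-point space, the map and all names are an invented example, not from the article's sources) collects exactly those subsets "A" of "S" whose preimage under "f" is open, and then checks that the result is indeed a topology with respect to which "f" is continuous; since the example map is surjective, this is also the quotient topology mentioned above.

from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Hypothetical data: a three-point space X with a given topology, mapped onto a two-point set S.
X  = frozenset({0, 1, 2})
TX = {frozenset(), frozenset({0}), frozenset({0, 1}), X}     # a topology on X
S  = frozenset({"a", "b"})
f  = {0: "a", 1: "a", 2: "b"}                                 # a surjection X -> S

def preimage(A):
    return frozenset(x for x in X if f[x] in A)

# Final topology on S: all subsets of S whose preimage is open in X.
final_topology = {A for A in powerset(S) if preimage(A) in TX}
print(sorted(map(set, final_topology), key=len))

# Sanity checks: it is a topology, and f is continuous with respect to it.
assert frozenset() in final_topology and S in final_topology
assert all(A | B in final_topology and A & B in final_topology
           for A in final_topology for B in final_topology)
assert all(preimage(A) in TX for A in final_topology)

Here the final topology comes out as the Sierpinski-type topology {∅, {"a"}, {"a","b"}}: the singleton {"b"} is not open because its preimage {2} is not open in X.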
Dually, for a function "f" from a set "S" to a topological space "X", the initial topology on "S" is defined by designating as an open set every subset "A" of "S" such that formula_251 for some open subset "U" of "X". If "S" has an existing topology, "f" is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on "S". Thus, the initial topology is the coarsest topology on "S" that makes "f" continuous. If "f" is injective, this topology is canonically identified with the subspace topology of "S", viewed as a subset of "X". A topology on a set "S" is uniquely determined by the class of all continuous functions formula_252 into all topological spaces "X". Dually, a similar idea can be applied to maps formula_253 Related notions. If formula_254 is a continuous function from some subset formula_255 of a topological space formula_159 then a of formula_34 to formula_159 is any continuous function formula_256 such that formula_257 for every formula_258 which is a condition that often written as formula_259 In words, it is any continuous function formula_256 that restricts to formula_34 on formula_260 This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If formula_254 is not continuous, then it could not possibly have a continuous extension. If formula_225 is a Hausdorff space and formula_255 is a dense subset of formula_159 then a continuous extension of formula_254 to formula_199 if one exists, will be unique. The Blumberg theorem states that if formula_261 is an arbitrary function then there exists a dense subset formula_13 of formula_14 such that the restriction formula_262 is continuous; in other words, every function formula_263 can be restricted to some dense subset on which it is continuous. Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function formula_194 between particular types of partially ordered sets formula_159 and formula_225 is continuous if for each directed subset formula_216 of formula_199 we have formula_264 Here formula_265 is the supremum with respect to the orderings in formula_159 and formula_266 respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology. In category theory, a functor formula_267 between two categories is called continuous if it commutes with small limits. That is to say, formula_268 for any small (that is, indexed by a set formula_269 as opposed to a class) diagram of objects in formula_270. A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y = f(x)" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "f(x+\\alpha)-f(x)" }, { "math_id": 3, "text": "f(x)," }, { "math_id": 4, "text": "f(c)." }, { "math_id": 5, "text": "(-\\infty, +\\infty)" }, { "math_id": 6, "text": "f(x) = \\sqrt{x}" }, { "math_id": 7, "text": "[0,+\\infty)." }, { "math_id": 8, "text": "x \\mapsto \\frac {1}{x}" }, { "math_id": 9, "text": "x\\mapsto \\tan x." }, { "math_id": 10, "text": "x\\mapsto \\frac {1}{x}" }, { "math_id": 11, "text": "x\\mapsto \\sin(\\frac {1}{x})" }, { "math_id": 12, "text": "f : D \\to \\R" }, { "math_id": 13, "text": "D" }, { "math_id": 14, "text": "\\R" }, { "math_id": 15, "text": "D = \\R " }, { "math_id": 16, "text": " D " }, { "math_id": 17, "text": "D = [a, b] = \\{x \\in \\R \\mid a \\leq x \\leq b \\} " }, { "math_id": 18, "text": "D = (a, b) = \\{x \\in \\R \\mid a < x < b \\} " }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "b" }, { "math_id": 21, "text": "f(a)" }, { "math_id": 22, "text": "f(b)" }, { "math_id": 23, "text": "\\lim_{x \\to c}{f(x)} = f(c)." }, { "math_id": 24, "text": "f(c)" }, { "math_id": 25, "text": "N_1(f(c))" }, { "math_id": 26, "text": "N_2(c)" }, { "math_id": 27, "text": "f(x) \\in N_1(f(c))" }, { "math_id": 28, "text": "x\\in N_2(c)." }, { "math_id": 29, "text": "(x_n)_{n \\in \\N}" }, { "math_id": 30, "text": "\\left(f(x_n)\\right)_{n\\in \\N}" }, { "math_id": 31, "text": "\\forall (x_n)_{n \\in \\N} \\subset D:\\lim_{n\\to\\infty} x_n = c \\Rightarrow \\lim_{n\\to\\infty} f(x_n) = f(c)\\,." }, { "math_id": 32, "text": "f : D \\to \\mathbb{R}" }, { "math_id": 33, "text": "x_0" }, { "math_id": 34, "text": "f" }, { "math_id": 35, "text": "\\varepsilon > 0," }, { "math_id": 36, "text": "\\delta > 0" }, { "math_id": 37, "text": "x" }, { "math_id": 38, "text": "x_0 - \\delta < x < x_0 + \\delta," }, { "math_id": 39, "text": "f(x)" }, { "math_id": 40, "text": "f\\left(x_0\\right) - \\varepsilon < f(x) < f(x_0) + \\varepsilon." }, { "math_id": 41, "text": "x_0 \\in D" }, { "math_id": 42, "text": "x \\in D" }, { "math_id": 43, "text": "\\left|x - x_0\\right| < \\delta ~~\\text{ implies }~~ |f(x) - f(x_0)| < \\varepsilon." }, { "math_id": 44, "text": "f\\left(x_0\\right)," }, { "math_id": 45, "text": "x_0." }, { "math_id": 46, "text": "f(x_0)" }, { "math_id": 47, "text": "x_0 - \\delta < x < x_0 + \\delta" }, { "math_id": 48, "text": "C: [0,\\infty) \\to [0,\\infty]" }, { "math_id": 49, "text": "\\inf_{\\delta > 0} C(\\delta) = 0" }, { "math_id": 50, "text": "f : D \\to R" }, { "math_id": 51, "text": "N(x_0)" }, { "math_id": 52, "text": "|f(x) - f(x_0)| \\leq C\\left(\\left|x - x_0\\right|\\right) \\text{ for all } x \\in D \\cap N(x_0)" }, { "math_id": 53, "text": "\\mathcal{C}" }, { "math_id": 54, "text": "C" }, { "math_id": 55, "text": "C \\in \\mathcal{C}." }, { "math_id": 56, "text": "\\mathcal{C}_{\\mathrm{Lipschitz}} = \\{C : C(\\delta) = K|\\delta| ,\\ K > 0\\}" }, { "math_id": 57, "text": "\\mathcal{C}_{\\text{Hölder}-\\alpha} = \\{C : C(\\delta) = K |\\delta|^\\alpha, \\ K > 0\\}." }, { "math_id": 58, "text": "\\omega_f(x_0) = 0." 
}, { "math_id": 59, "text": "\\varepsilon" }, { "math_id": 60, "text": "G_{\\delta}" }, { "math_id": 61, "text": "\\varepsilon-\\delta" }, { "math_id": 62, "text": "\\varepsilon_0" }, { "math_id": 63, "text": "\\delta" }, { "math_id": 64, "text": "\\varepsilon_0," }, { "math_id": 65, "text": "\\delta," }, { "math_id": 66, "text": "f(x + dx) - f(x)" }, { "math_id": 67, "text": "f, g \\colon D \\to \\R," }, { "math_id": 68, "text": "s = f + g" }, { "math_id": 69, "text": "s(x) = f(x) + g(x)" }, { "math_id": 70, "text": "x\\in D" }, { "math_id": 71, "text": "D." }, { "math_id": 72, "text": "p = f \\cdot g" }, { "math_id": 73, "text": "p(x) = f(x) \\cdot g(x)" }, { "math_id": 74, "text": "I(x) = x" }, { "math_id": 75, "text": "f(x) = x^3 + x^2 - 5 x + 3" }, { "math_id": 76, "text": "r = 1/f" }, { "math_id": 77, "text": "r(x) = 1/f(x)" }, { "math_id": 78, "text": "f(x) \\neq 0" }, { "math_id": 79, "text": "D\\setminus \\{x : f(x) = 0\\}." }, { "math_id": 80, "text": "g," }, { "math_id": 81, "text": "q = f / g" }, { "math_id": 82, "text": "q(x) = f(x)/g(x)" }, { "math_id": 83, "text": "g(x) \\neq 0" }, { "math_id": 84, "text": "D\\setminus \\{x:g(x) = 0\\}" }, { "math_id": 85, "text": "y(x) = \\frac{2x-1}{x+2}" }, { "math_id": 86, "text": "x \\neq -2" }, { "math_id": 87, "text": "x = -2" }, { "math_id": 88, "text": "y." }, { "math_id": 89, "text": "F : \\R \\to \\R" }, { "math_id": 90, "text": "y(x)" }, { "math_id": 91, "text": "x \\neq -2." }, { "math_id": 92, "text": "G(x) = \\sin(x)/x," }, { "math_id": 93, "text": "x \\neq 0." }, { "math_id": 94, "text": "G(0)" }, { "math_id": 95, "text": "G(x)," }, { "math_id": 96, "text": "G(0) = \\lim_{x\\to 0} \\frac{\\sin x}{x} = 1." }, { "math_id": 97, "text": "\nG(x) = \n\\begin{cases}\n\\frac {\\sin (x)}x & \\text{ if }x \\ne 0\\\\\n1 & \\text{ if }x = 0,\n\\end{cases}\n" }, { "math_id": 98, "text": "g : D_g \\subseteq \\R \\to R_g \\subseteq \\R \\quad \\text{ and } \\quad f : D_f \\subseteq \\R \\to R_f \\subseteq D_g," }, { "math_id": 99, "text": "c = g \\circ f : D_f \\to \\R," }, { "math_id": 100, "text": "c(x) = g(f(x))," }, { "math_id": 101, "text": "e^{\\sin(\\ln x)}" }, { "math_id": 102, "text": "x > 0." }, { "math_id": 103, "text": "H" }, { "math_id": 104, "text": "H(x) = \\begin{cases}\n1 & \\text{ if } x \\ge 0\\\\\n0 & \\text{ if } x < 0\n\\end{cases}\n" }, { "math_id": 105, "text": "\\varepsilon = 1/2" }, { "math_id": 106, "text": "x = 0" }, { "math_id": 107, "text": "(-\\delta,\\;\\delta)" }, { "math_id": 108, "text": "\\delta > 0," }, { "math_id": 109, "text": "H(x)" }, { "math_id": 110, "text": "H(0)" }, { "math_id": 111, "text": "(1/2,\\;3/2)" }, { "math_id": 112, "text": "\n\\sgn(x) = \\begin{cases}\n\\;\\;\\ 1 & \\text{ if }x > 0\\\\\n\\;\\;\\ 0 & \\text{ if }x = 0\\\\\n-1 & \\text{ if }x < 0\n\\end{cases}\n" }, { "math_id": 113, "text": "f(x) = \\begin{cases}\n \\sin\\left(x^{-2}\\right)&\\text{ if }x \\neq 0\\\\\n 0&\\text{ if }x = 0\n\\end{cases}" }, { "math_id": 114, "text": "f(x)=\\begin{cases}\n1 &\\text{ if } x=0\\\\\n\\frac{1}{q}&\\text{ if } x = \\frac{p}{q} \\text{(in lowest terms) is a rational number}\\\\\n 0&\\text{ if }x\\text{ is irrational}.\n\\end{cases}" }, { "math_id": 115, "text": "D(x)=\\begin{cases}\n 0&\\text{ if }x\\text{ is irrational } (\\in \\R \\setminus \\Q)\\\\\n 1&\\text{ if }x\\text{ is rational } (\\in \\Q)\n\\end{cases}" }, { "math_id": 116, "text": "x_0," }, { "math_id": 117, "text": "y_0" }, { "math_id": 118, "text": "f\\left(x_0\\right)\\neq y_0." 
}, { "math_id": 119, "text": "f(x)\\neq y_0" }, { "math_id": 120, "text": "\\varepsilon =\\frac{|y_0-f(x_0)|}{2}>0" }, { "math_id": 121, "text": "\\delta>0" }, { "math_id": 122, "text": "\\left|f(x)-f(x_0)\\right| < \\frac{\\left|y_0 - f(x_0)\\right|}{2} \\quad \\text{ whenever } \\quad |x-x_0| < \\delta" }, { "math_id": 123, "text": "|x-x_0|<\\delta" }, { "math_id": 124, "text": "f(x)=y_0;" }, { "math_id": 125, "text": "\\left|f(x_0)-y_0\\right| < \\frac{\\left|f(x_0) - y_0\\right|}{2}." }, { "math_id": 126, "text": "[a, b]," }, { "math_id": 127, "text": "f(b)," }, { "math_id": 128, "text": "c \\in [a, b]," }, { "math_id": 129, "text": "f(c) = k." }, { "math_id": 130, "text": "[a, b]" }, { "math_id": 131, "text": "c \\in [a, b]" }, { "math_id": 132, "text": "f(c) \\geq f(x)" }, { "math_id": 133, "text": "x \\in [a, b]." }, { "math_id": 134, "text": "(a, b)" }, { "math_id": 135, "text": "f(x) = \\frac{1}{x}," }, { "math_id": 136, "text": "f : (a, b) \\to \\R" }, { "math_id": 137, "text": "f(x)=|x| = \\begin{cases}\n \\;\\;\\ x & \\text{ if }x \\geq 0\\\\\n -x & \\text{ if }x < 0\n\\end{cases}" }, { "math_id": 138, "text": "C^1((a, b))." }, { "math_id": 139, "text": "f : \\Omega \\to \\R" }, { "math_id": 140, "text": "\\Omega" }, { "math_id": 141, "text": "n" }, { "math_id": 142, "text": "C^n(\\Omega)." }, { "math_id": 143, "text": "C^0, C^1, C^2" }, { "math_id": 144, "text": "G^0" }, { "math_id": 145, "text": "G^1" }, { "math_id": 146, "text": "G^2" }, { "math_id": 147, "text": "f : [a, b] \\to \\R" }, { "math_id": 148, "text": "f_1, f_2, \\dotsc : I \\to \\R" }, { "math_id": 149, "text": "f(x) := \\lim_{n \\to \\infty} f_n(x)" }, { "math_id": 150, "text": "x \\in D," }, { "math_id": 151, "text": "\\left(f_n\\right)_{n \\in N}." }, { "math_id": 152, "text": "f_n" }, { "math_id": 153, "text": "\\varepsilon > 0" }, { "math_id": 154, "text": "c < x < c + \\delta," }, { "math_id": 155, "text": "|f(x) - f(c)| < \\varepsilon." }, { "math_id": 156, "text": "c - \\delta < x < c" }, { "math_id": 157, "text": "|x - c| < \\delta," }, { "math_id": 158, "text": "f(x) \\geq f(c) - \\epsilon." }, { "math_id": 159, "text": "X" }, { "math_id": 160, "text": "d_X," }, { "math_id": 161, "text": "d_X : X \\times X \\to \\R" }, { "math_id": 162, "text": "\\left(X, d_X\\right)" }, { "math_id": 163, "text": "\\left(Y, d_Y\\right)" }, { "math_id": 164, "text": "f : X \\to Y" }, { "math_id": 165, "text": "c \\in X" }, { "math_id": 166, "text": "x \\in X" }, { "math_id": 167, "text": "d_X(x, c) < \\delta" }, { "math_id": 168, "text": "d_Y(f(x), f(c)) < \\varepsilon." }, { "math_id": 169, "text": "\\left(x_n\\right)" }, { "math_id": 170, "text": "\\lim x_n = c," }, { "math_id": 171, "text": "\\lim f\\left(x_n\\right) = f(c)." }, { "math_id": 172, "text": "c" }, { "math_id": 173, "text": "\\left(f\\left(x_n\\right)\\right)" }, { "math_id": 174, "text": "T : V \\to W" }, { "math_id": 175, "text": "V" }, { "math_id": 176, "text": "W" }, { "math_id": 177, "text": "\\|x\\|" }, { "math_id": 178, "text": "K" }, { "math_id": 179, "text": "\\|T(x)\\| \\leq K \\|x\\|" }, { "math_id": 180, "text": "x \\in V." }, { "math_id": 181, "text": "c, b \\in X" }, { "math_id": 182, "text": "d_X(b, c) < \\delta," }, { "math_id": 183, "text": "d_Y(f(b), f(c)) < \\varepsilon." 
}, { "math_id": 184, "text": "b, c \\in X," }, { "math_id": 185, "text": "d_Y (f(b), f(c)) \\leq K \\cdot (d_X (b, c))^\\alpha" }, { "math_id": 186, "text": "\\alpha = 1" }, { "math_id": 187, "text": "d_Y (f(b), f(c)) \\leq K \\cdot d_X (b, c)" }, { "math_id": 188, "text": "b, c \\in X." }, { "math_id": 189, "text": "V \\subseteq Y," }, { "math_id": 190, "text": "f^{-1}(V) = \\{x \\in X \\; | \\; f(x) \\in V \\}" }, { "math_id": 191, "text": "T_X" }, { "math_id": 192, "text": "f : X \\to T" }, { "math_id": 193, "text": "(\\varepsilon, \\delta)" }, { "math_id": 194, "text": "f : X \\to Y" }, { "math_id": 195, "text": "f^{-1}(V)" }, { "math_id": 196, "text": "f(U) \\subseteq V," }, { "math_id": 197, "text": "x \\in X," }, { "math_id": 198, "text": "\\mathcal{B}" }, { "math_id": 199, "text": "X," }, { "math_id": 200, "text": "\\mathcal{B} \\to x," }, { "math_id": 201, "text": "f(\\mathcal{B}) \\to f(x)" }, { "math_id": 202, "text": "Y." }, { "math_id": 203, "text": "\\mathcal{N}(x)" }, { "math_id": 204, "text": "f(\\mathcal{N}(x)) \\to f(x)" }, { "math_id": 205, "text": "f(\\mathcal{N}(x))" }, { "math_id": 206, "text": "x," }, { "math_id": 207, "text": "f(x)." }, { "math_id": 208, "text": "f : A \\subseteq \\R \\to \\R" }, { "math_id": 209, "text": "B \\subseteq Y," }, { "math_id": 210, "text": "f^{-1}\\left(\\operatorname{int}_Y B\\right) ~\\subseteq~ \\operatorname{int}_X\\left(f^{-1}(B)\\right)." }, { "math_id": 211, "text": "A \\subseteq X," }, { "math_id": 212, "text": "f\\left(\\operatorname{cl}_X A\\right) ~\\subseteq~ \\operatorname{cl}_Y (f(A))." }, { "math_id": 213, "text": "f(A)" }, { "math_id": 214, "text": "A \\subseteq X" }, { "math_id": 215, "text": "x \\in \\operatorname{cl}_X A," }, { "math_id": 216, "text": "A" }, { "math_id": 217, "text": "f(A)." }, { "math_id": 218, "text": "\\operatorname{cl}_X A" }, { "math_id": 219, "text": "A \\mapsto \\operatorname{cl} A" }, { "math_id": 220, "text": "\\tau" }, { "math_id": 221, "text": "\\tau := \\{ X \\setminus \\operatorname{cl} A : A \\subseteq X \\}" }, { "math_id": 222, "text": "\\operatorname{cl} A" }, { "math_id": 223, "text": "\\operatorname{cl}_{(X, \\tau)} A" }, { "math_id": 224, "text": "(X, \\tau)." }, { "math_id": 225, "text": "Y" }, { "math_id": 226, "text": "\\operatorname{cl}" }, { "math_id": 227, "text": "f(\\operatorname{cl} A) \\subseteq \\operatorname{cl} (f(A))" }, { "math_id": 228, "text": "A \\subseteq X." }, { "math_id": 229, "text": "\\operatorname{int}_X A" }, { "math_id": 230, "text": "A \\mapsto \\operatorname{int} A" }, { "math_id": 231, "text": "\\tau := \\{ \\operatorname{int} A : A \\subseteq X \\}" }, { "math_id": 232, "text": "\\operatorname{int} A" }, { "math_id": 233, "text": "\\operatorname{int}_{(X, \\tau)} A" }, { "math_id": 234, "text": "\\operatorname{int}" }, { "math_id": 235, "text": "f^{-1}(\\operatorname{int} B) \\subseteq \\operatorname{int}\\left(f^{-1}(B)\\right)" }, { "math_id": 236, "text": "B \\subseteq Y." }, { "math_id": 237, "text": "f(\\mathcal{B})" }, { "math_id": 238, "text": "g : Y \\to Z" }, { "math_id": 239, "text": "g \\circ f : X \\to Z." }, { "math_id": 240, "text": "\\tau_1" }, { "math_id": 241, "text": "\\tau_2" }, { "math_id": 242, "text": "\\tau_1 \\subseteq \\tau_2" }, { "math_id": 243, "text": "\\tau_2." 
}, { "math_id": 244, "text": "\\operatorname{id}_X : \\left(X, \\tau_2\\right) \\to \\left(X, \\tau_1\\right)" }, { "math_id": 245, "text": "\\left(X, \\tau_X\\right) \\to \\left(Y, \\tau_Y\\right)" }, { "math_id": 246, "text": "\\tau_Y" }, { "math_id": 247, "text": "\\tau_X" }, { "math_id": 248, "text": "f^{-1}" }, { "math_id": 249, "text": "f : X \\to S," }, { "math_id": 250, "text": "f^{-1}(A)" }, { "math_id": 251, "text": "A = f^{-1}(U)" }, { "math_id": 252, "text": "S \\to X" }, { "math_id": 253, "text": "X \\to S." }, { "math_id": 254, "text": "f : S \\to Y" }, { "math_id": 255, "text": "S" }, { "math_id": 256, "text": "F : X \\to Y" }, { "math_id": 257, "text": "F(s) = f(s)" }, { "math_id": 258, "text": "s \\in S," }, { "math_id": 259, "text": "f = F\\big\\vert_S." }, { "math_id": 260, "text": "S." }, { "math_id": 261, "text": "f : \\R \\to \\R" }, { "math_id": 262, "text": "f\\big\\vert_D : D \\to \\R" }, { "math_id": 263, "text": "\\R \\to \\R" }, { "math_id": 264, "text": "\\sup f(A) = f(\\sup A)." }, { "math_id": 265, "text": "\\,\\sup\\," }, { "math_id": 266, "text": "Y," }, { "math_id": 267, "text": "F : \\mathcal C \\to \\mathcal D" }, { "math_id": 268, "text": "\\varprojlim_{i \\in I} F(C_i) \\cong F \\left(\\varprojlim_{i \\in I} C_i \\right)" }, { "math_id": 269, "text": "I," }, { "math_id": 270, "text": "\\mathcal C" } ]
https://en.wikipedia.org/wiki?curid=6122
61220
Spintronics
Solid-state electronics based on electron spin Spintronics (a portmanteau meaning spin transport electronics), also known as spin electronics, is the study of the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. The field of spintronics concerns spin-charge coupling in metallic systems; the analogous effects in insulators fall into the field of multiferroics. Spintronics fundamentally differs from traditional electronics in that, in addition to charge state, electron spins are used as a further degree of freedom, with implications in the efficiency of data storage and transfer. Spintronic systems are most often realised in dilute magnetic semiconductors (DMS) and Heusler alloys and are of particular interest in the field of quantum computing and neuromorphic computing. History. Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985) and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origin of spintronics can be traced to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics began with the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990 and of the electric dipole spin resonance by Rashba in 1960. Theory. The spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is formula_0, implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as formula_1. In a solid, the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing it with a permanent magnetic moment as in a ferromagnet. In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin up or spin down electrons. The polarization of any spin dependent property X can be written as formula_2. A net spin polarization can be achieved either through creating an equilibrium energy split between spin up and spin down. Methods include putting a material in a large magnetic field (Zeeman effect), the exchange energy present in a ferromagnet or forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, formula_3. In a diffusive conductor, a spin diffusion length formula_4 can be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond). An important research area is devoted to extending this lifetime to technologically relevant timescales. 
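The quantities just introduced, the polarization of a spin-dependent property, the spin lifetime and the spin diffusion length, can be put into a short numerical sketch. The script below is illustrative only: the carrier densities, lifetime and diffusion constant are invented numbers, and it assumes the standard diffusive-transport relation λ = sqrt(Dτ) between diffusion constant, spin lifetime and spin diffusion length, together with a simple exponential decay of an injected non-equilibrium polarization; neither assumption is spelled out in the article itself.

import math

def polarization(x_up, x_down):
    """P_X = (X_up - X_down) / (X_up + X_down) for any spin-dependent quantity X."""
    return (x_up - x_down) / (x_up + x_down)

# Illustrative (made-up) numbers for a non-equilibrium electron population.
n_up, n_down = 6.0e20, 4.0e20           # carriers per m^3 in each spin channel
P0 = polarization(n_up, n_down)          # initial spin polarization, here 0.2

tau_s = 1.0e-9                           # spin lifetime in seconds (order of 1 ns, as quoted for metals)
D = 5.0e-3                               # diffusion constant in m^2/s (illustrative value)
lambda_s = math.sqrt(D * tau_s)          # spin diffusion length, lambda = sqrt(D * tau)

print(f"initial polarization P0 = {P0:.2f}")
print(f"spin diffusion length   = {lambda_s * 1e9:.0f} nm")

# Exponential decay of the non-equilibrium polarization: P(t) = P0 * exp(-t / tau_s)
for t in (0.0, 0.5e-9, 1.0e-9, 2.0e-9):
    print(f"P(t = {t * 1e9:.1f} ns) = {P0 * math.exp(-t / tau_s):.3f}")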
The mechanisms of decay for a spin polarized population can be broadly classified as spin-flip scattering and spin dephasing. Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore switch an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures. Superconductors can enhance central effects in spintronics such as magnetoresistance effects, spin lifetimes and dissipationless spin-currents. The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor. Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers. Other metal-based spintronics devices: Spintronic-logic devices. Non-volatile spin-logic devices to enable scaling are being extensively studied. Spin-transfer, torque-based logic devices that use spins and magnets for information processing have been proposed. These devices are part of the ITRS exploratory road map. Logic-in memory applications are already in the development stage. A 2017 review article can be found in "Materials Today". A generalized circuit theory for spintronic integrated circuits has been proposed so that the physics of spin transport can be utilized by SPICE developers and subsequently by circuit and system designers for the exploration of spintronics for “beyond CMOS computing.” Applications. Read heads of magnetic hard drives are based on the GMR or TMR effect. Motorola developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor that has a read/write cycle of under 50 nanoseconds. Everspin has since developed a 4 Mb version. Two second-generation MRAM techniques are in development: thermal-assisted switching (TAS) and spin-transfer torque (STT). Another design, racetrack memory, a novel memory architecture proposed by Dr. Stuart S. P. Parkin, encodes information in the direction of magnetization between domain walls of a ferromagnetic wire. In 2012, persistent spin helices of synchronized electrons were made to persist for more than a nanosecond, a 30-fold increase over earlier efforts, and longer than the duration of a modern processor clock cycle. Semiconductor-based spintronic devices. Doped semiconductor materials display dilute ferromagnetism. In recent years, dilute magnetic oxides (DMOs) including ZnO based DMOs and TiO2-based DMOs have been the subject of numerous experimental and computational investigations. 
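The giant-magnetoresistance behaviour described earlier in this section is often rationalized with Mott's two-current picture, in which spin-up and spin-down electrons form independent conduction channels with different resistances in each ferromagnetic layer. The sketch below is a minimal illustration of that simplified series-resistor picture, with invented channel resistances; it is not a model taken from this article, but it shows why the antiparallel configuration has the higher resistance, as stated above.

# Two-current (Mott) picture of a CPP spin valve: each ferromagnetic layer presents
# a small resistance r_min to majority-spin electrons and a larger r_max to
# minority-spin electrons; the two spin channels conduct in parallel.
r_min, r_max = 1.0, 4.0     # illustrative channel resistances (arbitrary units)

def parallel(a, b):
    return a * b / (a + b)

# Parallel (aligned) magnetizations: one spin channel is "fast" in both layers.
R_P = parallel(r_min + r_min, r_max + r_max)
# Antiparallel magnetizations: every electron is majority in one layer and minority in the other.
R_AP = parallel(r_min + r_max, r_max + r_min)

gmr_ratio = (R_AP - R_P) / R_P
print(f"R_P = {R_P:.3f}, R_AP = {R_AP:.3f}, GMR = {gmr_ratio:.1%}")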
Approaches to improving spin injection into semiconductors include using non-oxide ferromagnetic semiconductor sources (like manganese-doped gallium arsenide), increasing the interface resistance with a tunnel barrier, or using hot-electron injection. Spin detection in semiconductors has been addressed with multiple techniques, including Faraday/Kerr rotation of transmitted or reflected photons, circular-polarization analysis of electroluminescence, non-local spin valves (adapted from Johnson and Silsbee's work with metals) and ballistic spin filtering. The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon. Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation, called the Hanle effect. Applications. Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope. Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer has three terminals: a ferromagnetic emitter that injects spin-polarized hot electrons, a ferromagnetic base in which spin-dependent scattering takes place, and a semiconductor collector that gathers the electrons energetic enough to surmount its Schottky barrier. The magnetocurrent (MC) is given as: formula_5 And the transfer ratio (TR) is formula_6 MTT promises a highly spin-polarized electron source at room temperature. Storage media. Antiferromagnetic storage media have been studied as an alternative to ferromagnetism, especially since with antiferromagnetic material the bits can be stored as well as with ferromagnetic material. Instead of the usual definition 0 ↔ 'magnetisation upwards', 1 ↔ 'magnetisation downwards', the states can be, e.g., 0 ↔ 'vertically-alternating spin configuration' and 1 ↔ 'horizontally-alternating spin configuration'. The main advantages of antiferromagnetic materials are their insensitivity to disturbing external magnetic fields, the absence of stray fields that could affect neighbouring elements, and much faster (terahertz-scale) spin dynamics. Research is being done into how to read and write information to antiferromagnetic spintronics as their net zero magnetization makes this difficult compared to conventional ferromagnetic spintronics. In modern MRAM, detection and manipulation of ferromagnetic order by magnetic fields has largely been abandoned in favor of more efficient and scalable reading and writing by electrical current. Methods of reading and writing information by current rather than fields are also being investigated in antiferromagnets as fields are ineffective anyway. Writing methods currently being investigated in antiferromagnets are through spin-transfer torque and spin-orbit torque from the spin Hall effect and the Rashba effect. Reading information in antiferromagnets via magnetoresistance effects such as tunnel magnetoresistance is also being explored. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tfrac{1}{2}\\hbar" }, { "math_id": 1, "text": "\\mu=\\tfrac{\\sqrt{3}}{2}\\frac{q}{m_e}\\hbar" }, { "math_id": 2, "text": "P_X=\\frac{X_{\\uparrow}-X_{\\downarrow}}{X_{\\uparrow}+X_{\\downarrow}}" }, { "math_id": 3, "text": "\\tau" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "MC = \\frac{I_{c,p}-I_{c,ap}}{I_{c,ap}}" }, { "math_id": 6, "text": "TR = \\frac{I_C}{I_E}" } ]
https://en.wikipedia.org/wiki?curid=61220
61223388
Yoshiko Shirata
Japanese accounting scholar (born 1952) "Cindy" Yoshiko Shirata (born 2 December 1952) is a Japanese accounting scholar who specialized in corporate bankruptcy prediction. She is best known for her SAF2002 bankruptcy prediction model, which has been used by major banks and rating companies in Japan. She is considered one of the best-known developers of bankruptcy prediction models in Japan. Early life. After graduating from high school, Shirata first worked for Japan Airlines (JAL) as a cabin attendant in the 1970s. She then worked as an assistant to the Software Engineering Manager of Pr1me Computer Japan, as an advisor to the President of Spalding Japan, and as an advisor to the vice-president of Teikoku Data Bank. Subsequently, she worked as a Managing Associate of Coopers and Lybrand Japan Co., Ltd. Shirata graduated from the Doctoral Program in Management and Public Policy, University of Tsukuba. In 1994 she was awarded a Master of Business Administration (MBA) and in March 1999, a Doctor of Philosophy in Business Administration (DBA). Academic career. Shirata started teaching in 1995 as a part-time lecturer at University of Tsukuba, at Tsukuba College of Technology Japan and at Chuo Commerce College, Chuo University. From 1996 until 2005 she was a part-time teacher at Ryutsu Keizai University. From 1996 to 2001 she was associate professor of accounting, Tsukuba College of Technology Japan and from 2001 until 2005 Professor of Accounting at Nihon University College of Economics. From 2005 to 2007 she was Professor of Accounting, Graduate School of Management of Technology, at Shibaura Institute of Technology. From 2007 to 2014, she was Professor of Accounting, Graduate School of Business Sciences, University of Tsukuba Tokyo Campus. She is now the Specially Appointed Professor of Tokyo International University and also visiting professor of accounting at Toyo University. Shirata has also been a visiting professor at various universities, such as the Ludwig Maximilian University of Munich, Germany, and Sheffield University Management School, University of Sheffield, U.K., and a visiting researcher at The Research Institute for Innovation Management, Hosei University, Japan. Other activities. From 2006 to 2014, Shirata was a council member of the Science Council of Japan, nominated by the prime minister of Japan. She was also a secretary general of the Science Council of Asia from 2011 to 2014. At present, she holds various other positions. Awards and recognition. On March 9, 2010, Dr. Shirata won the Best Faculty Member in 2009 Award from the University of Tsukuba. On June 16, 2017, she was named a Lifetime Achiever by Marquis Who's Who, endorsing her as a leader in the accounting education industry. The SAF2002 bankruptcy prediction model. In 2003, Dr. Shirata introduced the SAF2002 bankruptcy prediction model. The acronym SAF stands for Simple Analysis of Failure. The SAF2002 model was developed by analyzing the financial data of 1,436 bankrupt companies and 3,434 non-bankrupt companies extracted by a systematic sampling method from 107,034 companies. The variables for the model were selected by using a Classification and Regression Tree (CART) type of decision tree learning approach to analyze the financial data of Japanese companies that entered bankruptcy between 1992 and 2001.
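A CART-style screening of candidate financial ratios, of the kind just mentioned, can be sketched generically with a standard decision-tree library. The code below is purely illustrative and is not Shirata's actual procedure or data: the input file, the column names, the label column and the tree parameters are all hypothetical placeholders.

# Hypothetical sketch of CART-style screening of candidate financial ratios.
# Assumes a table with one row per firm, candidate ratio columns and a 0/1
# "bankrupt" label; none of these names come from the original study.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("firm_ratios.csv")          # hypothetical input file
candidates = [c for c in df.columns if c != "bankrupt"]

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(df[candidates], df["bankrupt"])

# Rank candidate ratios by the impurity reduction they contribute in the tree;
# highly ranked ratios are natural candidates for a compact linear model.
importance = pd.Series(tree.feature_importances_, index=candidates).sort_values(ascending=False)
print(importance.head(10))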
The four variables of the model that the CART approach identified are: formula_0 Retained Earnings to Total Liabilities and Owners’ Equity, formula_1 Net Income Before Tax to Total Liabilities and Owners’ Equity, formula_2 Inventory Turnover Period, and formula_3 Interest Expenses to Sales. A linear model exhibited the most stable and discriminating results, and the SAF value for each firm is computed from the following equation: formula_4 A SAF value of 0.7 or below indicates a sharply elevated risk of bankruptcy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
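For illustration, the SAF2002 score defined by the equation above is a one-line calculation. The following sketch simply uses the coefficients as printed; the input values are invented, and it assumes the four ratios are supplied in the same units that were used when the model was estimated.

def saf2002(x1, x2, x3, x4):
    """SAF2002 score from the four ratios identified by the CART analysis.

    x1: retained earnings to total liabilities and owners' equity
    x2: net income before tax to total liabilities and owners' equity
    x3: inventory turnover period
    x4: interest expenses to sales
    """
    return 0.01036 * x1 + 0.02682 * x2 + 0.06610 * x3 + 0.02368 * x4 + 0.70773

# Invented example inputs, purely for illustration.
score = saf2002(x1=12.0, x2=3.5, x3=1.2, x4=0.8)
print(f"SAF = {score:.3f}",
      "-> at or below the 0.7 risk threshold" if score <= 0.7 else "-> above the 0.7 threshold")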
[ { "math_id": 0, "text": "(x_1)" }, { "math_id": 1, "text": "(x_2)" }, { "math_id": 2, "text": "(x_3)" }, { "math_id": 3, "text": "(x_4)" }, { "math_id": 4, "text": "SAF = 0.01036x_1 + 0.02682x_2 + 0.06610x_3 + 0.02368x_4 + 0.70773" } ]
https://en.wikipedia.org/wiki?curid=61223388
612245
Thompson groups
Three groups In mathematics, the Thompson groups (also called Thompson's groups, vagabond groups or chameleon groups) are three groups, commonly denoted formula_0, that were introduced by Richard Thompson in some unpublished handwritten notes in 1965 as a possible counterexample to the von Neumann conjecture. Of the three, "F" is the most widely studied, and is sometimes referred to as the Thompson group or Thompson's group. The Thompson groups, and "F" in particular, have a collection of unusual properties that have made them counterexamples to many general conjectures in group theory. All three Thompson groups are infinite but finitely presented. The groups "T" and "V" are (rare) examples of infinite but finitely-presented simple groups. The group "F" is not simple but its derived subgroup ["F","F"] is simple, and the quotient of "F" by its derived subgroup is the free abelian group of rank 2. "F" is totally ordered, has exponential growth, and does not contain a subgroup isomorphic to the free group of rank 2. It is conjectured that "F" is not amenable and hence would be a further counterexample to the long-standing but recently disproved von Neumann conjecture for finitely-presented groups: it is known that "F" is not elementary amenable. Higman (1974) introduced an infinite family of finitely presented simple groups, including Thompson's group "V" as a special case. Presentations. A finite presentation of "F" is given by the following expression: formula_1 where ["x","y"] is the usual group theory commutator, xyx^(-1)y^(-1). Although "F" has a finite presentation with 2 generators and 2 relations, it is most easily and intuitively described by the infinite presentation: formula_2 The two presentations are related by setting x_0 = A and x_n = A^(1-n)BA^(n-1) for n &gt; 0. Other representations. The group "F" also has realizations in terms of operations on ordered rooted binary trees, and as a subgroup of the piecewise linear homeomorphisms of the unit interval that preserve orientation and whose non-differentiable points are dyadic rationals and whose slopes are all powers of 2. The group "F" can also be considered as acting on the unit circle by identifying the two endpoints of the unit interval, and the group "T" is then the group of automorphisms of the unit circle obtained by adding the homeomorphism "x"→"x"+1/2 mod 1 to "F". On binary trees this corresponds to exchanging the two trees below the root. The group "V" is obtained from "T" by adding the discontinuous map that fixes the points of the half-open interval [0,1/2) and exchanges [1/2,3/4) and [3/4,1) in the obvious way. On binary trees this corresponds to exchanging the two trees below the right-hand descendant of the root (if it exists). The Thompson group "F" is the group of order-preserving automorphisms of the free Jónsson–Tarski algebra on one generator. Amenability. The conjecture of Thompson that "F" is not amenable was further popularized by R. Geoghegan—see also the Cannon–Floyd–Parry article cited in the references below. Its current status is open: E. Shavgulidze published a paper in 2009 in which he claimed to prove that "F" is amenable, but an error was found, as is explained in the MR review. It is known that "F" is not elementary amenable, see Theorem 4.10 in Cannon–Floyd–Parry. If "F" is not amenable, then it would be another counterexample to the now disproved von Neumann conjecture for finitely-presented groups, which states that a finitely-presented group is amenable if and only if it does not contain a copy of the free group of rank 2.
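The piecewise-linear realization described above can be made concrete. In the sketch below, A and B are generating homeomorphisms of [0,1] with dyadic breakpoints and power-of-two slopes; the particular breakpoints are the standard choice found in the literature (for example in Cannon, Floyd and Parry's notes) and are an assumption beyond what is quoted in this article. The script checks numerically that the first defining relation of the finite presentation, the commutator of AB^(-1) and A^(-1)BA being the identity, holds under the composition convention (fg)(x) = f(g(x)).

# Standard piecewise-linear generators of Thompson's group F acting on [0, 1].
def A(t):
    if t <= 0.5:   return t / 2
    if t <= 0.75:  return t - 0.25
    return 2 * t - 1

def A_inv(t):
    if t <= 0.25:  return 2 * t
    if t <= 0.5:   return t + 0.25
    return (t + 1) / 2

def B(t):
    if t <= 0.5:    return t
    if t <= 0.75:   return t / 2 + 0.25
    if t <= 0.875:  return t - 0.125
    return 2 * t - 1

def B_inv(t):
    if t <= 0.5:    return t
    if t <= 0.625:  return 2 * t - 0.5
    if t <= 0.75:   return t + 0.125
    return (t + 1) / 2

def compose(*fs):
    # (f1 f2 ... fk)(x) = f1(f2(...fk(x)))
    def h(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return h

# First defining relation of F: the commutator of A B^-1 and A^-1 B A is the identity.
g, g_inv = compose(A, B_inv), compose(B, A_inv)
h, h_inv = compose(A_inv, B, A), compose(A_inv, B_inv, A)
commutator = compose(g, h, g_inv, h_inv)

samples = [k / 256 for k in range(257)]               # dyadic points, exact in binary floating point
print(max(abs(commutator(x) - x) for x in samples))   # prints 0.0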
Connections with topology. The group "F" was rediscovered at least twice by topologists during the 1970s. In a paper that was only published much later but was in circulation as a preprint at that time, P. Freyd and A. Heller showed that the "shift map" on "F" induces an unsplittable homotopy idempotent on the Eilenberg–MacLane space "K(F,1)" and that this is universal in an interesting sense. This is explained in detail in Geoghegan's book (see references below). Independently, J. Dydak and P. Minc created a less well-known model of "F" in connection with a problem in shape theory. In 1979, R. Geoghegan made four conjectures about "F": (1) "F" has type FP∞; (2) All homotopy groups of "F" at infinity are trivial; (3) "F" has no non-abelian free subgroups; (4) "F" is non-amenable. (1) was proved by K. S. Brown and R. Geoghegan in a strong form: there is a K(F,1) with two cells in each positive dimension. (2) was also proved by Brown and Geoghegan in the sense that the cohomology H*(F,ZF) was shown to be trivial; since a previous theorem of M. Mihalik implies that "F" is simply connected at infinity, and the stated result implies that all homology at infinity vanishes, the claim about homotopy groups follows. (3) was proved by M. Brin and C. Squier. The status of (4) is discussed above. It is unknown if "F" satisfies the Farrell–Jones conjecture. It is even unknown if the Whitehead group of "F" (see Whitehead torsion) or the projective class group of "F" (see Wall's finiteness obstruction) is trivial, though it is easily shown that "F" satisfies the strong Bass conjecture. D. Farley has shown that "F" acts as deck transformations on a locally finite CAT(0) cubical complex (necessarily of infinite dimension). A consequence is that "F" satisfies the Baum–Connes conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F \\subseteq T \\subseteq V" }, { "math_id": 1, "text": "\\langle A,B \\mid\\ [AB^{-1},A^{-1}BA] = [AB^{-1},A^{-2}BA^{2}] = \\mathrm{id} \\rangle" }, { "math_id": 2, "text": "\\langle x_0, x_1, x_2, \\dots\\ \\mid\\ x_k^{-1} x_n x_k = x_{n+1}\\ \\mathrm{for}\\ k<n \\rangle." } ]
https://en.wikipedia.org/wiki?curid=612245
61229
Straightedge and compass construction
Method of drawing geometric objects In geometry, straightedge-and-compass construction – also known as ruler-and-compass construction, Euclidean construction, or classical construction – is the construction of lengths, angles, and other geometric figures using only an idealized ruler and a pair of compasses. The idealized ruler, known as a straightedge, is assumed to be infinite in length, have only one edge, and no markings on it. The compass is assumed to have no maximum or minimum radius, and is assumed to "collapse" when lifted from the page, so it may not be directly used to transfer distances. (This is an unimportant restriction since, using a multi-step procedure, a distance can be transferred even with a collapsing compass; see compass equivalence theorem. Note however that whilst a non-collapsing compass held against a straightedge might seem to be equivalent to marking it, the neusis construction is still impermissible and this is what unmarked really means: see Markable rulers below.) More formally, the only permissible constructions are those granted by the first three postulates of Euclid's "Elements". It turns out to be the case that every point constructible using straightedge and compass may also be constructed using compass alone, or by straightedge alone if given a single circle and its center. Ancient Greek mathematicians first conceived straightedge-and-compass constructions, and a number of ancient problems in plane geometry impose this restriction. The ancient Greeks developed many constructions, but in some cases were unable to do so. Gauss showed that some polygons are constructible but that most are not. Some of the most famous straightedge-and-compass problems were proved impossible by Pierre Wantzel in 1837 using field theory, namely trisecting an arbitrary angle and doubling the volume of a cube (see § impossible constructions). Many of these problems are easily solvable provided that other geometric transformations are allowed; for example, neusis construction can be used to solve the former two problems. In terms of algebra, a length is constructible if and only if it represents a constructible number, and an angle is constructible if and only if its cosine is a constructible number. A number is constructible if and only if it can be written using the four basic arithmetic operations and the extraction of square roots but of no higher-order roots. Straightedge and compass tools. The "straightedge" and "compass" of straightedge-and-compass constructions are idealized versions of real-world rulers and compasses. Actual compasses do not collapse and modern geometric constructions often use this feature. A 'collapsing compass' would appear to be a less powerful instrument. However, by the compass equivalence theorem in Proposition 2 of Book 1 of Euclid's Elements, no power is lost by using a collapsing compass. Although the proposition is correct, its proofs have a long and checkered history. In any case, the equivalence is why this feature is not stipulated in the definition of the ideal compass. Each construction must be mathematically "exact". "Eyeballing" distances (looking at the construction and guessing at its accuracy) or using markings on a ruler, are not permitted. Each construction must also "terminate". That is, it must have a finite number of steps, and not be the limit of ever closer approximations. 
(If an unlimited number of steps is permitted, some otherwise-impossible constructions become possible by means of infinite sequences converging to a limit.) Stated this way, straightedge-and-compass constructions appear to be a parlour game, rather than a serious practical problem; but the purpose of the restriction is to ensure that constructions can be "proved" to be "exactly" correct. History. The ancient Greek mathematicians first attempted straightedge-and-compass constructions, and they discovered how to construct sums, differences, products, ratios, and square roots of given lengths. They could also construct half of a given angle, a square whose area is twice that of another square, a square having the same area as a given polygon, and regular polygons of 3, 4, or 5 sides (or one with twice the number of sides of a given polygon). But they could not construct one third of a given angle except in particular cases, or a square with the same area as a given circle, or regular polygons with other numbers of sides. Nor could they construct the side of a cube whose volume is twice the volume of a cube with a given side. Hippocrates and Menaechmus showed that the volume of the cube could be doubled by finding the intersections of hyperbolas and parabolas, but these cannot be constructed by straightedge and compass. In the fifth century BCE, Hippias used a curve that he called a quadratrix to both trisect the general angle and square the circle, and Nicomedes in the second century BCE showed how to use a conchoid to trisect an arbitrary angle; but these methods also cannot be followed with just straightedge and compass. No progress on the unsolved problems was made for two millennia, until in 1796 Gauss showed that a regular polygon with 17 sides could be constructed; five years later he showed the sufficient criterion for a regular polygon of "n" sides to be constructible. In 1837 Pierre Wantzel published a proof of the impossibility of trisecting an arbitrary angle or of doubling the volume of a cube, based on the impossibility of constructing cube roots of lengths. He also showed that Gauss's sufficient constructibility condition for regular polygons is also necessary. Then in 1882 Lindemann showed that formula_0 is a transcendental number, and thus that it is impossible by straightedge and compass to construct a square with the same area as a given circle. The basic constructions. All straightedge-and-compass constructions consist of repeated application of five basic constructions using the points, lines and circles that have already been constructed. These are: drawing the line through two existing points, drawing the circle with its centre at one existing point and passing through another existing point, creating the point at which two existing non-parallel lines intersect, creating the one or two points in which an existing line and an existing circle intersect (if they intersect), and creating the one or two points in which two existing circles intersect (if they intersect). For example, starting with just two distinct points, we can create a line or either of two circles (in turn, using each point as centre and passing through the other point). If we draw both circles, two new points are created at their intersections. Drawing lines between the two original points and one of these new points completes the construction of an equilateral triangle. Therefore, in any geometric problem we have an initial set of symbols (points and lines), an algorithm, and some results. From this perspective, geometry is equivalent to an axiomatic algebra, replacing its elements by symbols. Probably Gauss first realized this, and used it to prove the impossibility of some constructions; only much later did Hilbert find a complete set of axioms for geometry. Common straightedge-and-compass constructions. The most-used straightedge-and-compass constructions include constructing the perpendicular bisector of a segment, finding the midpoint of a segment, dropping a perpendicular from a point to a line, bisecting an angle, reflecting a point in a line, and constructing tangents to a circle from an external point. Constructible points.
One can associate an algebra to our geometry using a Cartesian coordinate system made of two lines, and represent points of our plane by vectors. Finally we can write these vectors as complex numbers. Using the equations for lines and circles, one can show that the points at which they intersect lie in a quadratic extension of the smallest field "F" containing two points on the line, the center of the circle, and the radius of the circle. That is, they are of the form formula_1, where "x", "y", and "k" are in "F". Since the field of constructible points is closed under "square roots", it contains all points that can be obtained by a finite sequence of quadratic extensions of the field of complex numbers with rational coefficients. By the above paragraph, one can show that any constructible point can be obtained by such a sequence of extensions. As a corollary of this, one finds that the degree of the minimal polynomial for a constructible point (and therefore of any constructible length) is a power of 2. In particular, any constructible point (or length) is an algebraic number, though not every algebraic number is constructible; for example, the cube root of 2 is algebraic but not constructible. Constructible angles. There is a bijection between the angles that are constructible and the points that are constructible on any constructible circle. The angles that are constructible form an abelian group under addition modulo 2π (which corresponds to multiplication of the points on the unit circle viewed as complex numbers). The angles that are constructible are exactly those whose tangent (or equivalently, sine or cosine) is constructible as a number. For example, the regular heptadecagon (the seventeen-sided regular polygon) is constructible because formula_2 as discovered by Gauss. The group of constructible angles is closed under the operation that halves angles (which corresponds to taking square roots in the complex numbers). The only angles of finite order that may be constructed starting with two points are those whose order is either a power of two, or a product of a power of two and a set of distinct Fermat primes. In addition there is a dense set of constructible angles of infinite order. Relation to complex arithmetic. Given a set of points in the Euclidean plane, selecting any one of them to be called 0 and another to be called 1, together with an arbitrary choice of orientation allows us to consider the points as a set of complex numbers. Given any such interpretation of a set of points as complex numbers, the points constructible using valid straightedge-and-compass constructions alone are precisely the elements of the smallest field containing the original set of points and closed under the complex conjugate and square root operations (to avoid ambiguity, we can specify the square root with complex argument less than π). The elements of this field are precisely those that may be expressed as a formula in the original points using only the operations of addition, subtraction, multiplication, division, complex conjugate, and square root, which is easily seen to be a countable dense subset of the plane. Each of these six operations corresponds to a simple straightedge-and-compass construction. From such a formula it is straightforward to produce a construction of the corresponding point by combining the constructions for each of the arithmetic operations. More efficient constructions of a particular set of points correspond to shortcuts in such calculations.
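Two of the statements above lend themselves to quick numerical checks: the characterization of constructible angles of finite order in terms of powers of two and distinct Fermat primes, and the fact that cos(2π/17) can be written with rationals and nested square roots. The sketch below is illustrative only; the function names are invented, and the explicit nested-radical expression used for cos(2π/17) is the classical closed form attributed to Gauss rather than a formula quoted in this article.

import math

FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # the only known Fermat primes

def constructible_ngon(n):
    """Gauss-Wantzel test: n >= 3 and n = 2^k times a product of distinct Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:        # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

print([n for n in range(3, 61) if constructible_ngon(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60]

# Classical nested-radical expression for cos(2*pi/17), built only from rationals and square roots.
s17 = math.sqrt(17)
cos17 = (-1 + s17 + math.sqrt(34 - 2 * s17)
         + 2 * math.sqrt(17 + 3 * s17 - math.sqrt(34 - 2 * s17) - 2 * math.sqrt(34 + 2 * s17))) / 16
print(abs(cos17 - math.cos(2 * math.pi / 17)))   # about 1e-16, i.e. only floating-point error

The printed list of constructible polygon side counts agrees with the explicit sequence given later in this article.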
Equivalently (and with no need to arbitrarily choose two points) we can say that, given an arbitrary choice of orientation, a set of points determines a set of complex ratios given by the ratios of the differences between any two pairs of points. The set of ratios constructible using straightedge and compass from such a set of ratios is precisely the smallest field containing the original ratios and closed under taking complex conjugates and square roots. For example, the real part, imaginary part and modulus of a point or ratio "z" (taking one of the two viewpoints above) are constructible as these may be expressed as formula_3 formula_4 formula_5 "Doubling the cube" and "trisection of an angle" (except for special angles such as any "φ" such that "φ"/(2π) is a rational number with denominator not divisible by 3) require ratios which are the solution to cubic equations, while "squaring the circle" requires a transcendental ratio. None of these are in the fields described, hence no straightedge-and-compass construction for these exists. Impossible constructions. The ancient Greeks thought that the construction problems they could not solve were simply obstinate, not unsolvable. With modern methods, however, these straightedge-and-compass constructions have been shown to be logically impossible to perform. (The problems themselves, however, are solvable, and the Greeks knew how to solve them without the constraint of working only with straightedge and compass.) Squaring the circle. The most famous of these problems, squaring the circle, otherwise known as the quadrature of the circle, involves constructing a square with the same area as a given circle using only straightedge and compass. Squaring the circle has been proved impossible, as it involves generating a transcendental number, that is, the square root of formula_0. Only certain algebraic numbers can be constructed with ruler and compass alone, namely those constructed from the integers with a finite sequence of operations of addition, subtraction, multiplication, division, and taking square roots. The phrase "squaring the circle" is often used to mean "doing the impossible" for this reason. Without the constraint of requiring solution by ruler and compass alone, the problem is easily solvable by a wide variety of geometric and algebraic means, and was solved many times in antiquity. A construction that comes very close to the "quadrature of the circle" can be achieved using a Kepler triangle. Doubling the cube. Doubling the cube is the construction, using only a straightedge and compass, of the edge of a cube that has twice the volume of a cube with a given edge. This is impossible because the cube root of 2, though algebraic, cannot be computed from integers by addition, subtraction, multiplication, division, and taking square roots. This follows because its minimal polynomial over the rationals has degree 3. This construction is possible using a straightedge with two marks on it and a compass. Angle trisection. Angle trisection is the construction, using only a straightedge and a compass, of an angle that is one-third of a given arbitrary angle. This is impossible in the general case. For example, the angle 2π/5 radians (72° = 360°/5) can be trisected, but the angle of π/3 radians (60°) cannot be trisected. The general trisection problem is also easily solved when a straightedge with two marks on it is allowed (a neusis construction). Distance to an ellipse. 
The line segment from any point in the plane to the nearest point on a circle can be constructed, but the segment from any point in the plane to the nearest point on an ellipse of positive eccentricity cannot in general be constructed. Note that the results proven here are mostly a consequence of the non-constructibility of conics: if the initial conic is considered as given, then the proof must be reviewed to check whether some other distinct conic needs to be generated. As an example, constructions for the normals of a parabola are known, but they need to use an intersection between a circle and the parabola itself; so they are not straightedge-and-compass constructions, in the same sense that the parabola itself is not constructible. Alhazen's problem. In 1997, the Oxford mathematician Peter M. Neumann proved the theorem that there is no ruler-and-compass construction for the general solution of the ancient Alhazen's problem (billiard problem or reflection from a spherical mirror). Constructing regular polygons. Some regular polygons (e.g. a pentagon) are easy to construct with straightedge and compass; others are not. This led to the question: Is it possible to construct all regular polygons with straightedge and compass? Carl Friedrich Gauss in 1796 showed that a regular 17-sided polygon can be constructed, and five years later showed that a regular "n"-sided polygon can be constructed with straightedge and compass if the odd prime factors of "n" are distinct Fermat primes. Gauss conjectured that this condition was also necessary; the conjecture was proven by Pierre Wantzel in 1837. The first few constructible regular polygons have the following numbers of sides: 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272... (sequence in the OEIS) There are known to be an infinitude of constructible regular polygons with an even number of sides (because if a regular "n"-gon is constructible, then so is a regular 2"n"-gon and hence a regular 4"n"-gon, 8"n"-gon, etc.). However, there are only 31 known constructible regular "n"-gons with an odd number of sides. Constructing a triangle from three given characteristic points or lengths. Sixteen key points of a triangle are its vertices, the midpoints of its sides, the feet of its altitudes, the feet of its internal angle bisectors, and its circumcenter, centroid, orthocenter, and incenter. These can be taken three at a time to yield 139 distinct nontrivial problems of constructing a triangle from three points. Of these problems, three involve a point that can be uniquely constructed from the other two points; 23 can be non-uniquely constructed (in fact for infinitely many solutions) but only if the locations of the points obey certain constraints; in 74 the problem is constructible in the general case; and in 39 the required triangle exists but is not constructible. Twelve key lengths of a triangle are the three side lengths, the three altitudes, the three medians, and the three angle bisectors. Together with the three angles, these give 95 distinct combinations, 63 of which give rise to a constructible triangle, 30 of which do not, and two of which are underdefined. Restricted constructions. 
Various attempts have been made to restrict the allowable tools for constructions under various rules, in order to determine what is still constructible and how it may be constructed, as well as determining the minimum criteria necessary to still be able to construct everything that compass and straightedge can. Constructing with only ruler or only compass. It is possible (according to the Mohr–Mascheroni theorem) to construct anything with just a compass if it can be constructed with a ruler and compass, provided that the given data and the data to be found consist of discrete points (not lines or circles). The truth of this theorem depends on the truth of Archimedes' axiom, which is not first-order in nature. Examples of compass-only constructions include Napoleon's problem. It is impossible to take a square root with just a ruler, so some things that cannot be constructed with a ruler can be constructed with a compass; but (by the Poncelet–Steiner theorem) given a single circle and its center, they can be constructed. Extended constructions. The ancient Greeks classified constructions into three major categories, depending on the complexity of the tools required for their solution. If a construction used only a straightedge and compass, it was called planar; if it also required one or more conic sections (other than the circle), then it was called solid; the third category included all constructions that did not fall into either of the other two categories. This categorization meshes nicely with the modern algebraic point of view. A complex number that can be expressed using only the field operations and square roots (as described above) has a planar construction. A complex number that includes also the extraction of cube roots has a solid construction. In the language of fields, a complex number that is planar has degree a power of two, and lies in a field extension that can be broken down into a tower of fields where each extension has degree two. A complex number that has a solid construction has degree with prime factors of only two and three, and lies in a field extension that is at the top of a tower of fields where each extension has degree 2 or 3. Solid constructions. A point has a solid construction if it can be constructed using a straightedge, compass, and a (possibly hypothetical) conic drawing tool that can draw any conic with already constructed focus, directrix, and eccentricity. The same set of points can often be constructed using a smaller set of tools. For example, using a compass, straightedge, and a piece of paper on which we have the parabola y=x2 together with the points (0,0) and (1,0), one can construct any complex number that has a solid construction. Likewise, a tool that can draw any ellipse with already constructed foci and major axis (think two pins and a piece of string) is just as powerful. The ancient Greeks knew that doubling the cube and trisecting an arbitrary angle both had solid constructions. Archimedes gave a neusis construction of the regular heptagon, which was interpreted by medieval Arabic commentators, Bartel Leendert van der Waerden, and others as being based on a solid construction, but this has been disputed, as other interpretations are possible. The quadrature of the circle does not have a solid construction. A regular "n"-gon has a solid construction if and only if "n"=2"a"3"b""m" where "a" and "b" are some non-negative integers and "m" is a product of zero or more distinct Pierpont primes (primes of the form 2"r"3"s"+1). 
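The two arithmetic criteria described above, the Gauss–Wantzel condition for planar constructions (the odd part of "n" must be a product of distinct Fermat primes) and the condition just stated for solid constructions (the part of "n" coprime to 6 must be a product of distinct Pierpont primes, primes of the form 2^r·3^s + 1), can both be checked mechanically. The following Python sketch is illustrative only; it reproduces the sequences quoted in the surrounding text.

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # the only Fermat primes known

def is_constructible_polygon(n: int) -> bool:
    """Planar (straightedge-and-compass): n = 2^k * product of distinct Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:          # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

def is_prime(p: int) -> bool:
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

def is_pierpont_prime(p: int) -> bool:
    """Prime of the form 2^r * 3^s + 1."""
    if not is_prime(p):
        return False
    m = p - 1
    for q in (2, 3):
        while m % q == 0:
            m //= q
    return m == 1

def has_solid_construction(n: int) -> bool:
    """Solid: n = 2^a * 3^b * (product of zero or more distinct Pierpont primes)."""
    if n < 3:
        return False
    for q in (2, 3):                # remove the 2^a * 3^b part
        while n % q == 0:
            n //= q
    p = 5
    while n > 1:                    # factor the remainder by trial division over odd p
        if n % p == 0:
            n //= p
            if n % p == 0 or not is_pierpont_prime(p):
                return False        # repeated prime factor, or not a Pierpont prime
        else:
            p += 2
    return True

print([n for n in range(3, 273) if is_constructible_polygon(n)])   # 3, 4, 5, 6, 8, 10, ...
print([n for n in range(3, 101) if not has_solid_construction(n)]) # 11, 22, 23, 25, 29, ...
```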
By this criterion, a regular "n"-gon admits a solid, but not planar, construction if and only if "n" is in the sequence 7, 9, 13, 14, 18, 19, 21, 26, 27, 28, 35, 36, 37, 38, 39, 42, 45, 52, 54, 56, 57, 63, 65, 70, 72, 73, 74, 76, 78, 81, 84, 90, 91, 95, 97... (sequence in the OEIS) The set of "n" for which a regular "n"-gon has no solid construction is the sequence 11, 22, 23, 25, 29, 31, 33, 41, 43, 44, 46, 47, 49, 50, 53, 55, 58, 59, 61, 62, 66, 67, 69, 71, 75, 77, 79, 82, 83, 86, 87, 88, 89, 92, 93, 94, 98, 99, 100... (sequence in the OEIS) Like the question with Fermat primes, it is an open question as to whether there are an infinite number of Pierpont primes. Angle trisection. What if, together with the straightedge and compass, we had a tool that could (only) trisect an arbitrary angle? Such constructions are solid constructions, but there exist numbers with solid constructions that cannot be constructed using such a tool. For example, we cannot double the cube with such a tool. On the other hand, every regular n-gon that has a solid construction can be constructed using such a tool. Origami. The mathematical theory of origami is more powerful than straightedge-and-compass construction. Folds satisfying the Huzita–Hatori axioms can construct exactly the same set of points as the extended constructions using a compass and conic drawing tool. Therefore, origami can also be used to solve cubic equations (and hence quartic equations), and thus solve two of the classical problems. Markable rulers. Archimedes, Nicomedes and Apollonius gave constructions involving the use of a markable ruler. This would permit them, for example, to take a line segment, two lines (or circles), and a point; and then draw a line which passes through the given point and intersects the two given lines, such that the distance between the points of intersection equals the given segment. This the Greeks called "neusis" ("inclination", "tendency" or "verging"), because the new line "tends" to the point. In this expanded scheme, we can trisect an arbitrary angle (see Archimedes' trisection) or extract an arbitrary cube root (due to Nicomedes). Hence, any distance whose ratio to an existing distance is the solution of a cubic or a quartic equation is constructible. Using a markable ruler, regular polygons with solid constructions, like the heptagon, are constructible; and John H. Conway and Richard K. Guy give constructions for several of them. The neusis construction is more powerful than a conic drawing tool, as one can construct complex numbers that do not have solid constructions. In fact, using this tool one can solve some quintics that are not solvable using radicals. It is known that one cannot solve an irreducible polynomial of prime degree greater than or equal to 7 using the neusis construction, so it is not possible to construct a regular 23-gon or 29-gon using this tool. Benjamin and Snyder proved that it is possible to construct the regular 11-gon, but did not give a construction. It is still open as to whether a regular 25-gon or 31-gon is constructible using this tool. Trisect a straight segment. A given straight line segment AB can be divided into three equal segments, or into any required number of equal parts, by use of the intercept theorem. Computation of binary digits. In 1998 Simon Plouffe gave a ruler-and-compass algorithm that can be used to compute binary digits of certain numbers. The algorithm involves the repeated doubling of an angle and becomes physically impractical after about 20 binary digits. 
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi" }, { "math_id": 1, "text": "x + y\\sqrt{k}" }, { "math_id": 2, "text": "\\begin{align}\n\\cos{\\left(\\frac{2\\pi}{17}\\right)} &= \\,-\\frac{1}{16} \\,+\\, \\frac{1}{16} \\sqrt{17} \\,+\\, \\frac{1}{16} \\sqrt{34 - 2 \\sqrt{17}}\n\\\\[5mu]\n&\\qquad +\\, \\frac{1}{8} \\sqrt{ 17 + 3 \\sqrt{17} - \\sqrt{34 - 2 \\sqrt{17}} - 2 \\sqrt{34 + 2 \\sqrt{17}} }\n\\end{align}" }, { "math_id": 3, "text": "\\mathrm{Re}(z)=\\frac{z+\\bar z}{2}\\;" }, { "math_id": 4, "text": "\\mathrm{Im}(z)=\\frac{z-\\bar z}{2i}\\;" }, { "math_id": 5, "text": "\\left | z \\right | = \\sqrt{z \\bar z}.\\;" } ]
https://en.wikipedia.org/wiki?curid=61229
61229758
Universal embedding theorem
The universal embedding theorem, or Krasner–Kaloujnine universal embedding theorem, is a theorem from the mathematical discipline of group theory first published in 1951 by Marc Krasner and Lev Kaluznin. The theorem states that any group extension of a group "H" by a group "A" is isomorphic to a subgroup of the regular wreath product "A" Wr "H". The theorem is named for the fact that the group "A" Wr "H" is said to be "universal" with respect to all extensions of "H" by "A". Statement. Let "H" and "A" be groups, let "K" = "A""H" be the set of all functions from "H" to "A", and consider the action of "H" on itself by right multiplication. This action extends naturally to an action of "H" on "K" defined by formula_0 where formula_1 and "g" and "h" are both in "H". This is an automorphism of "K", so we can define the semidirect product "K" ⋊ "H" called the "regular wreath product", and denoted "A" Wr "H" or formula_2 The group "K" = "A""H" (which is isomorphic to formula_3) is called the "base group" of the wreath product. The Krasner–Kaloujnine universal embedding theorem states that if "G" has a normal subgroup "A" and "H" = "G"/"A", then there is an injective homomorphism of groups formula_4 such that "A" maps surjectively onto formula_5 This is equivalent to the wreath product "A" Wr "H" having a subgroup isomorphic to "G", where "G" is any extension of "H" by "A". Proof. This proof comes from Dixon–Mortimer. Define a homomorphism formula_6 whose kernel is "A". Choose a set formula_7 of (right) coset representatives of "A" in "G", where formula_8 Then for all "x" in "G", formula_9 For each "x" in "G", we define a function "f"x: "H" → "A" such that formula_10 Then the embedding formula_11 is given by formula_12 We now prove that this is a homomorphism. If "x" and "y" are in "G", then formula_13 Now formula_14 so for all "u" in "H", formula_15 so "f""x" "f""y" = "f""xy". Hence formula_11 is a homomorphism as required. The homomorphism is injective. If formula_16 then both "f""x"("u") = "f""y"("u") (for all "u") and formula_17 Then formula_18 but we can cancel "t""u" and formula_19 from both sides, so "x" = "y", hence formula_11 is injective. Finally, formula_20 precisely when formula_21 in other words when formula_22 (as formula_23). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
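As a concrete miniature example of the embedding described above, the sketch below (illustrative Python, not part of the original proof) takes "G" = Z/4 as an extension of "H" = Z/2 by "A" = {0, 2} ≅ Z/2, uses the coset representatives t_0 = 0 and t_1 = 1, and checks that θ(x) = (f_x, ψ(x)) is an injective homomorphism into A Wr H. The wreath-product multiplication used, (φ1, h1)(φ2, h2) = (u ↦ φ1(u) + φ2(u + h1), h1 + h2) in additive notation, is the rule implicit in the proof.

```python
from itertools import product

H = (0, 1)                      # H = Z/2, written additively
A = (0, 2)                      # A = {0, 2} < G, isomorphic to Z/2
G = (0, 1, 2, 3)                # G = Z/4, an extension of H by A
t = {0: 0, 1: 1}                # coset representatives t_u with psi(t_u) = u

def psi(x):                     # the quotient map G -> H = G/A
    return x % 2

def f(x):                       # f_x(u) = t_u * x * t_{u psi(x)}^{-1}, written additively in Z/4
    return tuple((t[u] + x - t[(u + psi(x)) % 2]) % 4 for u in H)

def theta(x):                   # the embedding theta(x) = (f_x, psi(x))
    return (f(x), psi(x))

def wr_mult(a, b):              # (phi1, h1)(phi2, h2) = (u -> phi1(u) + phi2(u + h1), h1 + h2)
    (phi1, h1), (phi2, h2) = a, b
    return (tuple((phi1[u] + phi2[(u + h1) % 2]) % 4 for u in H), (h1 + h2) % 2)

# theta is a homomorphism ...
assert all(theta((x + y) % 4) == wr_mult(theta(x), theta(y)) for x, y in product(G, G))
# ... it is injective, and the elements of A land in the base group (trivial second component)
assert len({theta(x) for x in G}) == len(G)
assert all(theta(a)[1] == 0 for a in A)
print("embedding verified")
```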
[ { "math_id": 0, "text": "\\phi(g).h=\\phi(gh^{-1})," }, { "math_id": 1, "text": "\\phi\\in K," }, { "math_id": 2, "text": "A\\wr H." }, { "math_id": 3, "text": "\\{(f_x,1)\\in A\\wr H:x\\in K\\}" }, { "math_id": 4, "text": "\\theta:G\\to A\\wr H" }, { "math_id": 5, "text": "\\text{im}(\\theta)\\cap K." }, { "math_id": 6, "text": "\\psi:G\\to H" }, { "math_id": 7, "text": "T=\\{t_u:u\\in H\\}" }, { "math_id": 8, "text": "\\psi(t_u)=u." }, { "math_id": 9, "text": "t_u x t^{-1}_{u\\psi(x)}\\in\\ker \\psi=A." }, { "math_id": 10, "text": "f_x(u)=t_u x t^{-1}_{u\\psi(x)}." }, { "math_id": 11, "text": "\\theta" }, { "math_id": 12, "text": "\\theta(x)=(f_x,\\psi(x))\\in A\\wr H." }, { "math_id": 13, "text": "\\theta(x)\\theta(y)=(f_x(f_y.\\psi(x)^{-1}),\\psi(xy))." }, { "math_id": 14, "text": "f_y(u).\\psi(x)^{-1}=f_y(u\\psi(x))," }, { "math_id": 15, "text": "f_x(u)(f_y(u).\\psi(x)) = t_u x t^{-1}_{u\\psi(x)} t_{u\\psi(x)} y t^{-1}_{u\\psi(x)\\psi(y)}=t_u xy t^{-1}_{u\\psi(xy)}," }, { "math_id": 16, "text": "\\theta(x)=\\theta(y)," }, { "math_id": 17, "text": "\\psi(x)=\\psi(y)." }, { "math_id": 18, "text": "t_u x t^{-1}_{u\\psi(x)}=t_u y t^{-1}_{u\\psi(y)}," }, { "math_id": 19, "text": "t^{-1}_{u\\psi(x)}=t^{-1}_{u\\psi(y)}" }, { "math_id": 20, "text": "\\theta(x)\\in K" }, { "math_id": 21, "text": "\\psi(x)=1," }, { "math_id": 22, "text": "x\\in A" }, { "math_id": 23, "text": "A=\\ker\\psi" } ]
https://en.wikipedia.org/wiki?curid=61229758
6123
Curl (mathematics)
Circulation density in a vector field In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve. The notation curl F is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation rot F is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in formula_0, which also reveals the relation between curl (rotor), divergence, and gradient operators. Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation formula_1 for the curl. The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839. Definition. The curl of a vector field F, denoted by curl F, or formula_0, or rot F, is an operator that maps "Ck" functions in R3 to "C""k"−1 functions in R3, and in particular, it maps continuously differentiable functions R3 → R3 to continuous functions R3 → R3. It can be defined in several ways, to be mentioned below: One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if formula_3 is any unit vector, the component of the curl of F along the direction formula_3 may be defined to be the limiting value of a closed line integral in a plane perpendicular to formula_3 divided by the area enclosed, as the path of integration is contracted indefinitely around the point. More specifically, the curl is defined at a point "p" as formula_4 where the line integral is calculated along the boundary "C" of the area "A" in question, being the magnitude of the area. This equation defines the component of the curl of F along the direction formula_3. The infinitesimal surfaces bounded by "C" have formula_3 as their normal. "C" is oriented via the right-hand rule. The above formula means that the component of the curl of a vector field along a certain axis is the "infinitesimal area density" of the circulation of the field in a plane perpendicular to that axis. 
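The line-integral definition above lends itself to a direct numerical check: approximate the circulation of F around a small square loop centred at a point, divide by the loop's area, and compare with the analytic component ∂Fy/∂x − ∂Fx/∂y. The Python sketch below does this for the sample field F = (−y, xy, 0); it is only an illustration of the definition, not a method used in practice.

```python
def Fx(x, y): return -y          # sample vector field F = (-y, x*y, 0)
def Fy(x, y): return x * y

def curl_z_estimate(x0, y0, h=1e-3):
    """Circulation of F around a small square of side h centred at (x0, y0), divided by its area."""
    circ = (Fx(x0, y0 - h / 2) * h      # bottom edge, traversed in the +x direction
            + Fy(x0 + h / 2, y0) * h    # right edge, +y direction
            - Fx(x0, y0 + h / 2) * h    # top edge, -x direction
            - Fy(x0 - h / 2, y0) * h)   # left edge, -y direction
    return circ / h ** 2

print(curl_z_estimate(2.0, 3.0))
# ~4.0, matching the analytic value dFy/dx - dFx/dy = y + 1 = 4 at the point (2, 3)
```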
This formula does not "a priori" define a legitimate vector field, for the individual circulation densities with respect to various axes "a priori" need not relate to each other in the same way as the components of a vector do; that they "do" indeed relate to each other in this precise manner must be proven separately. To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface. Another way one can define the curl vector of a function F at a point is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing "p" divided by the volume enclosed, as the shell is contracted indefinitely around "p". More specifically, the curl may be defined by the vector formula formula_5 where the surface integral is calculated along the boundary "S" of the volume "V", being the magnitude of the volume, and formula_2 pointing outward from the surface "S" perpendicularly at every point in "S". In this formula, the cross product in the integrand measures the tangential component of F at each point on the surface "S", and points along the surface at right angles to the "tangential projection" of F. Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of F around "S", and whose direction is at right angles to this circulation. The above formula says that the "curl" of a vector field at a point is the "infinitesimal volume density" of this "circulation vector" around the point. To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume. Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates: formula_6 The equation for each component (curl F)"k" can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices). If ("x"1, "x"2, "x"3) are the Cartesian coordinates and ("u"1, "u"2, "u"3) are the orthogonal coordinates, then formula_7 is the length of the coordinate vector corresponding to "ui". The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1. Usage. In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived. The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if ∇ is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra. 
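As a worked use of the orthogonal-coordinates formula above, the following sketch (illustrative Python using SymPy) encodes the three components with scale factors (h1, h2, h3) and applies them to cylindrical coordinates (r, φ, z), for which h = (1, r, 1). For the rigid-rotation field with F_φ = ωr it recovers the expected curl of 2ω in the z direction.

```python
import sympy as sp

r, phi, z, w = sp.symbols('r phi z omega', positive=True)

u = (r, phi, z)        # orthogonal coordinates u1, u2, u3
h = (1, r, 1)          # scale factors for cylindrical coordinates

def curl_orthogonal(F):
    """(curl F)_k = (d(h_j F_j)/du_i - d(h_i F_i)/du_j) / (h_i h_j), cyclic in (k, i, j)."""
    out = []
    for k in range(3):
        i, j = (k + 1) % 3, (k + 2) % 3
        out.append(sp.simplify(
            (sp.diff(h[j] * F[j], u[i]) - sp.diff(h[i] * F[i], u[j])) / (h[i] * h[j])))
    return tuple(out)

F = (0, w * r, 0)               # rigid rotation about the z-axis: F_phi = omega * r
print(curl_orthogonal(F))       # (0, 0, 2*omega), i.e. curl = 2*omega in the z direction
```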
Expanded in 3-dimensional Cartesian coordinates (see "Del in cylindrical and spherical coordinates" for spherical and cylindrical coordinate representations),∇ × F is, for F composed of ["Fx", "Fy", "Fz"] (where the subscripts indicate the components of the vector, not partial derivatives): formula_8 where i, j, and k are the unit vectors for the "x"-, "y"-, and "z"-axes, respectively. This expands as follows: formula_9 Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection. In a general coordinate system, the curl is given by formula_10 where ε denotes the Levi-Civita tensor, ∇ the covariant derivative, formula_11 is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative: formula_12 where R"k" are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as: formula_13 Here ♭ and ♯ are the musical isomorphisms, and is the Hodge star operator. This formula shows how to calculate the curl of F in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed. Examples. Example 1. Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The curl of the vector field at any point is given by the rotation of an infinitesimal area in the "xy"-plane (for "z"-axis component of the curl), "zx"-plane (for "y"-axis component of the curl) and "yz"-plane (for "x"-axis component of the curl vector). This can be seen in the examples below. Example 2. The vector field formula_14 can be decomposed as formula_15 Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed. Calculating the curl: formula_16 The resulting vector field describing the curl would at all points be pointing in the negative "z" direction. The results of this equation align with what could have been predicted using the right-hand rule using a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed. Example 3. For the vector field formula_17 the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line "x" = 3, the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. 
Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative "z" direction. Inversely, if placed on "x" = −3, the object would rotate counterclockwise and the right-hand rule would result in a positive "z" direction. Calculating the curl: formula_18 The curl points in the negative "z" direction when "x" is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane "x" = 0. Identities. In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields v and F can be shown to be formula_19 Interchanging the vector field v and ∇ operator, we arrive at the cross product of a vector field with curl of a vector field: formula_20 where ∇F is the Feynman subscript notation, which considers only the variation due to the vector field F (i.e., in this case, v is treated as being constant in space). Another example is the curl of a curl of a vector field. It can be shown that in general coordinates formula_21 and this identity defines the vector Laplacian of F, symbolized as ∇2F. The curl of the gradient of "any" scalar field φ is always the zero vector field formula_22 which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives. The divergence of the curl of any vector field is equal to zero: formula_23 If φ is a scalar valued function and F is a vector field, then formula_24 Generalizations. The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra formula_25 of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and formula_25, these all being 3-dimensional spaces. Differential forms. In 3 dimensions, a differential 0-form is a real-valued function "f"("x", "y", "z"); a differential 1-form is the following expression, where the coefficients are functions: formula_26 a differential 2-form is the formal sum, again with function coefficients: formula_27 and a differential 3-form is defined by a single term with one function as coefficient: formula_28 The exterior derivative of a "k"-form in R3 is defined as the ("k" + 1)-form from above—and in R"n" if, e.g., formula_29 then the exterior derivative "d" leads to formula_30 The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives, formula_31 and antisymmetry, formula_32 the twofold application of the exterior derivative yields formula_33 (the zero formula_34-form). Thus, denoting the space of "k"-forms by Ω"k"(R3) and the exterior derivative by "d" one gets a sequence: formula_35 Here Ω"k"(R"n") is the space of sections of the exterior algebra Λ"k"(R"n") vector bundle over R"n", whose dimension is the binomial coefficient (); note that Ω"k"(R3) = 0 for "k" &gt; 3 or "k" &lt; 0. 
Writing only dimensions, one obtains a row of Pascal's triangle: &lt;templatestyles src="Block indent/styles.css"/&gt;0 → 1 → 3 → 3 → 1 → 0; the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div. Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, "k"-forms can be identified with "k"-vector fields ("k"-forms are "k"-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an "oriented" vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between "k"-vectors and ("n" − "k")-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange "k"-forms, "k"-vector fields, ("n" − "k")-forms, and ("n" − "k")-vector fields; this is known as Hodge duality. Concretely, on R3 Hodge duality pairs 0-forms with 3-forms and 1-forms with 2-forms. Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields, the three nontrivial exterior derivatives correspond in turn to grad, curl, and div. On the other hand, the fact that "d"2 = 0 corresponds to the identities formula_36 for any scalar field f, and formula_37 for any vector field v. Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and "n"-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and ("n" − 1)-forms are always fiberwise "n"-dimensional and can be identified with vector fields. Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are &lt;templatestyles src="Block indent/styles.css"/&gt;0 → 1 → 4 → 6 → 4 → 1 → 0; so the curl of a 1-vector field (fiberwise 4-dimensional) is a "2-vector field", which at each point belongs to a 6-dimensional vector space, and so one has formula_38 which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero ("d"2 = 0). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way. However, one can define a curl of a vector field as a "2-vector field" in general, as described below. Curl geometrically. 2-vectors correspond to the exterior power Λ2"V"; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra formula_39("V") of infinitesimal rotations. This has "n"("n" − 1)/2 dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) do we have "n" = "n"("n" − 1)/2, which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. 
In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra formula_40. The curl of a 3-dimensional vector field which only depends on 2 coordinates (say "x" and "y") is simply a vertical vector field (in the "z" direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page. Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions. Inverse. In the case where the divergence of a vector field V is zero, a vector field W exists such that V = curl(W). This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential. If W is a vector field with curl(W) = V, then adding any gradient vector field grad("f") to W will result in another vector field W + grad("f") such that curl(W + grad("f")) = V as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
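The identities curl(grad φ) = 0 and div(curl F) = 0 quoted in the Identities section, together with the curls worked out in Examples 2 and 3, can be spot-checked symbolically. The sketch below uses only elementary SymPy differentiation and is purely illustrative.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

def div(F):
    Fx, Fy, Fz = F
    return sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)

def grad(phi):
    return (sp.diff(phi, x), sp.diff(phi, y), sp.diff(phi, z))

# Examples 2 and 3 from the article
print(curl((y, -x, 0)))        # (0, 0, -2)
print(curl((0, -x**2, 0)))     # (0, 0, -2*x)

# curl(grad(phi)) = 0 and div(curl(F)) = 0 for sample fields
phi = x**2 * sp.sin(y) + z**3
F = (x * y * z, sp.exp(x) + y**2, sp.cos(x * y))
print([sp.simplify(c) for c in curl(grad(phi))])   # [0, 0, 0]
print(sp.simplify(div(curl(F))))                   # 0
```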
[ { "math_id": 0, "text": "\\nabla \\times \\mathbf{F}" }, { "math_id": 1, "text": "\\nabla \\times" }, { "math_id": 2, "text": "\\mathbf{\\hat{n}}" }, { "math_id": 3, "text": "\\mathbf{\\hat{u}}" }, { "math_id": 4, "text": "(\\nabla \\times \\mathbf{F})(p)\\cdot \\mathbf{\\hat{u}} \\ \\overset{\\underset{\\mathrm{def}}{}}{{}={}} \\lim_{A \\to 0}\\frac{1}{|A|}\\oint_C \\mathbf{F} \\cdot \\mathrm{d}\\mathbf{r}" }, { "math_id": 5, "text": "(\\nabla \\times \\mathbf{F})(p) \\overset{\\underset{\\mathrm{def}}{}}{{}={}} \\lim_{V \\to 0}\\frac{1}{|V|}\\oint_S \\mathbf{\\hat{n}} \\times \\mathbf{F} \\ \\mathrm{d}S" }, { "math_id": 6, "text": "\\begin{align}\n& (\\operatorname{curl}\\mathbf F)_1=\\frac{1}{h_2h_3}\\left (\\frac{\\partial (h_3F_3)}{\\partial u_2}-\\frac{\\partial (h_2F_2)}{\\partial u_3}\\right ), \\\\[5pt]\n& (\\operatorname{curl}\\mathbf F)_2=\\frac{1}{h_3h_1}\\left (\\frac{\\partial (h_1F_1)}{\\partial u_3}-\\frac{\\partial (h_3F_3)}{\\partial u_1}\\right ), \\\\[5pt]\n& (\\operatorname{curl}\\mathbf F)_3=\\frac{1}{h_1h_2}\\left (\\frac{\\partial (h_2F_2)}{\\partial u_1}-\\frac{\\partial (h_1F_1)}{\\partial u_2}\\right ).\n\\end{align}" }, { "math_id": 7, "text": "h_i = \\sqrt{\\left (\\frac{\\partial x_1}{\\partial u_i} \\right )^2 + \\left (\\frac{\\partial x_2}{\\partial u_i} \\right )^2 + \\left (\\frac{\\partial x_3}{\\partial u_i} \\right )^2}" }, { "math_id": 8, "text": "\n\\nabla \\times \\mathbf{F} =\n\\begin{vmatrix} \\boldsymbol{\\hat\\imath} & \\boldsymbol{\\hat\\jmath} & \\boldsymbol{\\hat k} \\\\[5mu]\n{\\dfrac{\\partial}{\\partial x}} & {\\dfrac{\\partial}{\\partial y}} & {\\dfrac{\\partial}{\\partial z}} \\\\[5mu]\nF_x & F_y & F_z \\end{vmatrix}\n" }, { "math_id": 9, "text": "\n\\nabla \\times \\mathbf{F} =\n\\left(\\frac{\\partial F_z}{\\partial y} - \\frac{\\partial F_y}{\\partial z}\\right) \\boldsymbol{\\hat\\imath} -\n\\left(\\frac{\\partial F_z}{\\partial x} - \\frac{\\partial F_x}{\\partial z} \\right) \\boldsymbol{\\hat\\jmath} +\n\\left(\\frac{\\partial F_y}{\\partial x} - \\frac{\\partial F_x}{\\partial y} \\right) \\boldsymbol{\\hat k}\n" }, { "math_id": 10, "text": "(\\nabla \\times \\mathbf{F} )^k = \\frac{1}{\\sqrt{g}} \\varepsilon^{k\\ell m} \\nabla_\\ell F_m" }, { "math_id": 11, "text": " g" }, { "math_id": 12, "text": "(\\nabla \\times \\mathbf{F} ) = \\frac{1}{\\sqrt{g}} \\mathbf{R}_k\\varepsilon^{k\\ell m} \\partial_\\ell F_m" }, { "math_id": 13, "text": " \\nabla \\times \\mathbf{F} = \\left( \\star \\big( {\\mathrm d} \\mathbf{F}^\\flat \\big) \\right)^\\sharp " }, { "math_id": 14, "text": "\\mathbf{F}(x,y,z)=y\\boldsymbol{\\hat{\\imath}}-x\\boldsymbol{\\hat{\\jmath}}" }, { "math_id": 15, "text": "F_x =y, F_y = -x, F_z =0." }, { "math_id": 16, "text": "\\nabla \\times \\mathbf{F} =0\\boldsymbol{\\hat{\\imath}}+0\\boldsymbol{\\hat{\\jmath}}+ \\left({\\frac{\\partial}{\\partial x}}(-x) -{\\frac{\\partial}{\\partial y}} y\\right)\\boldsymbol{\\hat{k}}=-2\\boldsymbol{\\hat{k}}\n" }, { "math_id": 17, "text": "\\mathbf{F}(x,y,z) = -x^2\\boldsymbol{\\hat{\\jmath}}" }, { "math_id": 18, "text": "{\\nabla} \\times \\mathbf{F} = 0 \\boldsymbol{\\hat{\\imath}} + 0\\boldsymbol{\\hat{\\jmath}} + {\\frac{\\partial}{\\partial x}}\\left(-x^2\\right) \\boldsymbol{\\hat{k}} = -2x\\boldsymbol{\\hat{k}}." 
}, { "math_id": 19, "text": "\\nabla \\times \\left( \\mathbf{v \\times F} \\right) = \\Big( \\left( \\mathbf{ \\nabla \\cdot F } \\right) + \\mathbf{F \\cdot \\nabla} \\Big) \\mathbf{v}- \\Big( \\left( \\mathbf{ \\nabla \\cdot v } \\right) + \\mathbf{v \\cdot \\nabla} \\Big) \\mathbf{F} \\ . " }, { "math_id": 20, "text": " \\mathbf{v \\ \\times } \\left( \\mathbf{ \\nabla \\times F} \\right) =\\nabla_\\mathbf{F} \\left( \\mathbf{v \\cdot F } \\right) - \\left( \\mathbf{v \\cdot \\nabla } \\right) \\mathbf{F} \\ , " }, { "math_id": 21, "text": " \\nabla \\times \\left( \\mathbf{\\nabla \\times F} \\right) = \\mathbf{\\nabla}(\\mathbf{\\nabla \\cdot F}) - \\nabla^2 \\mathbf{F} \\ , " }, { "math_id": 22, "text": "\\nabla \\times ( \\nabla \\varphi ) = \\boldsymbol{0}" }, { "math_id": 23, "text": "\\nabla\\cdot(\\nabla\\times\\mathbf{F}) = 0." }, { "math_id": 24, "text": "\\nabla \\times ( \\varphi \\mathbf{F}) = \\nabla \\varphi \\times \\mathbf{F} + \\varphi \\nabla \\times \\mathbf{F} " }, { "math_id": 25, "text": "\\mathfrak{so}(3)" }, { "math_id": 26, "text": "a_1\\,dx + a_2\\,dy + a_3\\,dz;" }, { "math_id": 27, "text": "a_{12}\\,dx\\wedge dy + a_{13}\\,dx\\wedge dz + a_{23}\\,dy\\wedge dz;" }, { "math_id": 28, "text": "a_{123}\\,dx\\wedge dy\\wedge dz." }, { "math_id": 29, "text": "\\omega^{(k)}=\\sum_{1\\leq i_1<i_2<\\cdots<i_k\\leq n} a_{i_1,\\ldots,i_k} \\,dx_{i_1}\\wedge \\cdots\\wedge dx_{i_k}," }, { "math_id": 30, "text": " d\\omega^{(k)}=\\sum_{\\scriptstyle{j=1} \\atop \\scriptstyle{i_1<\\cdots<i_k}}^n\\frac{\\partial a_{i_1,\\ldots,i_k}}{\\partial x_j}\\,dx_j \\wedge dx_{i_1}\\wedge \\cdots \\wedge dx_{i_k}." }, { "math_id": 31, "text": "\\frac{\\partial^2}{\\partial x_i\\,\\partial x_j} = \\frac{\\partial^2}{\\partial x_j\\,\\partial x_i} , " }, { "math_id": 32, "text": "d x_i \\wedge d x_j = -d x_j \\wedge d x_i" }, { "math_id": 33, "text": "0" }, { "math_id": 34, "text": "k+2" }, { "math_id": 35, "text": "0 \\, \\overset{d}{\\longrightarrow} \\;\n\\Omega^0\\left(\\mathbb{R}^3\\right) \\, \\overset{d}{\\longrightarrow} \\;\n\\Omega^1\\left(\\mathbb{R}^3\\right) \\, \\overset{d}{\\longrightarrow} \\;\n\\Omega^2\\left(\\mathbb{R}^3\\right) \\, \\overset{d}{\\longrightarrow} \\;\n\\Omega^3\\left(\\mathbb{R}^3\\right) \\, \\overset{d}{\\longrightarrow} \\, 0." }, { "math_id": 36, "text": "\\nabla\\times(\\nabla f) = \\mathbf 0" }, { "math_id": 37, "text": "\\nabla \\cdot (\\nabla \\times\\mathbf v)=0" }, { "math_id": 38, "text": "\\omega^{(2)}=\\sum_{i<k=1,2,3,4}a_{i,k}\\,dx_i\\wedge dx_k," }, { "math_id": 39, "text": "\\mathfrak{so}" }, { "math_id": 40, "text": "\\mathfrak{so}(4)" } ]
https://en.wikipedia.org/wiki?curid=6123
61236953
Request for quote
A Request for Quote (RfQ) is a financial term for a way of asking a bank for an offer on a given financial instrument; such quote data are made available through so-called Approved Publication Arrangements (APAs) by the stock exchanges themselves or by financial data vendors, as required in Europe by MiFID II and in effect since January 2018. An RfQ contains at least the ISIN to uniquely identify the financial product, the type (buy/sell), the amount, a currency, and the volume (formula_0 in the given currency). Background. In the wake of the 2007-09 financial crisis there was an initiative to create more "pre-trade transparency", for which it is essential to know who is requesting which financial product. Article 1(2) of the Commission Delegated Regulation (EU) 2017/583 of 14 July 2016 (which supplements Regulation (EU) No 600/2014 of the European Parliament and of the Council on markets in financial instruments) defines: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A request-for-quote (RfQ) system is a trading system where the following conditions are met: This essentially means that anyone buying or selling stocks, bonds, foreign exchange, commodities or exchange-traded funds (ETFs) will (automatically) generate an RfQ before the trade is settled. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
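As an illustration of the minimum fields listed above, an RfQ record might be represented as follows. This is a hypothetical sketch for illustration only, not a message format prescribed by MiFID II, and the ISIN shown is just a placeholder.

```python
from dataclasses import dataclass

@dataclass
class RequestForQuote:
    isin: str        # uniquely identifies the financial instrument
    side: str        # "buy" or "sell"
    amount: float    # number of units requested
    currency: str    # quote currency
    price: float     # current market price per unit, in the quote currency

    @property
    def volume(self) -> float:
        # volume = amount x market price, expressed in the given currency
        return self.amount * self.price

rfq = RequestForQuote(isin="DE0001234567", side="buy", amount=1000, currency="EUR", price=101.25)
print(rfq.volume)   # 101250.0 (EUR)
```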
[ { "math_id": 0, "text": "\\hbox{amount}\\times\\hbox{market price}" } ]
https://en.wikipedia.org/wiki?curid=61236953
61239605
RE/flex
Advanced lexical analyzer generator for C++ RE/flex (regex-centric, fast lexical analyzer) is a free and open source computer program written in C++ that generates fast lexical analyzers (also known as "scanners" or "lexers") in C++. RE/flex offers full Unicode support, indentation anchors, word boundaries, lazy quantifiers (non-greedy, lazy repeats), and performance tuning options. RE/flex accepts Flex lexer specifications and offers options to generate scanners for Bison parsers. RE/flex includes a fast C++ regular expression library. History. The RE/flex project was designed and implemented by professor Robert van Engelen in 2016 and released as free open source. The software evolved with several contributions made by others. The RE/flex tool generates lexical analyzers based on regular expression ("regex") libraries, instead of fixed DFA tables generated by traditional lexical analyzer generators. Lexer specification. The RE/flex lexical analyzer generator accepts an extended syntax of Flex lexer specifications as input. The RE/flex specification syntax is more expressive than the traditional Flex lexer specification syntax and may include indentation anchors, word boundaries, lazy quantifiers (non-greedy, lazy repeats), and new actions such as codice_0 to retrieve Unicode wide-string matches. A lexer specification is of the form: Definitions Rules User Code The Definitions section includes declarations and customization options, followed by name-pattern pairs to define names for regular expression patterns. Named patterns may be referenced in other patterns by embracing them in codice_1 and codice_2. The following example defines two names for two patterns, where the second pattern codice_3 uses the previously named pattern codice_4: %top{ #include &lt;inttypes.h&gt; // strtol() %class{ public: int value; // yyFlexLexer class public member %init{ value = 0; // yyFlexLexer initializations %option flex digit [0-9] number {digit}+ The Rules section defines pattern-action pairs. The following example defines a rule to translate a number to the lexer class integer codice_5 member: \s // skip white space . throw *yytext; The User Code section typically defines C/C++ functions, for example a codice_6 program: int main() yyFlexLexer lexer; try while (lexer.yylex() != 0) std::cout « "number=" « lexer.value « std::endl; catch (int ch) std::cerr « "Error: unknown character code " « ch « std::endl; The codice_7 class is generated by RE/flex as instructed by the codice_8 directive in the lexer specification. The generated codice_9 source code contains the algorithm for lexical analysis, which is linked with the codice_10 library. Source code output. The generated algorithm for lexical analysis is based on the concept that any regular expression engine can in principle be used to tokenize input into tokens: given a set of n regular expression patterns formula_0 for formula_1, a regular expression of the form codice_11 with n alternations may be specified to match and tokenize the input. In this way, the group capture index i of a matching pattern formula_0 that is returned by the regular expression matcher identifies the pattern formula_0 that matched the input text partially and continuously after the previous match. This approach makes it possible for any regex library that supports group captures to be utilized as a matcher. 
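The idea that a group-capturing regex engine can act as a tokenizer is easy to demonstrate outside RE/flex itself. The following Python sketch (illustrative only, unrelated to RE/flex's generated C++ code) builds the alternation (p1)|(p2)|...|(pn) and uses the index of the capturing group that matched to select the corresponding token; the patterns here contain no inner groups, in line with the caveat discussed next.

```python
import re

# token patterns p1..pn, in priority order
patterns = [("NUMBER", r"[0-9]+"), ("ID", r"[A-Za-z_][A-Za-z0-9_]*"), ("SKIP", r"[ \t\n]+")]
master = re.compile("|".join("(%s)" % p for _, p in patterns))

def tokenize(text):
    pos = 0
    while pos < len(text):
        m = master.match(text, pos)
        if not m:
            raise ValueError("unknown character %r" % text[pos])  # the "default rule"
        kind = patterns[m.lastindex - 1][0]   # capture index i identifies pattern pi
        if kind != "SKIP":
            yield kind, m.group()
        pos = m.end()

print(list(tokenize("count 42")))
# [('ID', 'count'), ('NUMBER', '42')]
```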
However, note that all groupings of the form codice_12 in patterns must be converted to non-capturing groups of the form codice_13 to avoid any unwanted group capturing within sub-expressions. The following RE/flex-generated codice_7 class codice_15 method repeatedly invokes the matcher's codice_16 (continuous partial matching) operation to tokenize input: int yyFlexLexer::yylex() if (!has_matcher()) matcher("(p1)|(p2)|...|(pn)"); // new matcher engine for regex pattern (p1)|(p2)|...|(pn) while (true) switch (matcher().scan()) // scan and match next token, get capture index case 0: // no match if (... EOF reached ...) return 0; output(matcher().input()); // echo the current input character break; case 1: // pattern p1 matched ... // Action for pattern p1 break; case 2: // pattern p2 matched ... // Action for pattern p2 break; ... // and so on for patterns up to pn If none of the n patterns match and the end-of-file (EOF) is not reached, the so-called "default rule" is invoked. The default rule echo's the current input character and advances the scanner to the next character in the input. The regular expression pattern codice_11 is produced by RE/flex from a lexer specification with n rules of pattern-action pairs: %% p1 Action for pattern p1 p2 Action for pattern p2 pn Action for pattern pn From this specification, RE/flex generates the aforementioned codice_7 class with the codice_19 method that executes actions corresponding to the patterns matched in the input. The generated codice_7 class is used in a C++ application, such as a parser, to tokenize the input into the integer-valued tokens returned by the actions in the lexer specification. For example: std::ifstream ifs(filename, std::ios::in); yyFlexLexer lexer(ifs); int token; while ((token = lexer.yylex()) != 0) std::cout « "token = " « token « std::endl; ifs.close(); Note that codice_19 returns an integer value when an action executes codice_22. Otherwise, codice_19 does not return a value and continues scanning the input, which is often used by rules that ignore input such as comments. This example tokenizes a file. A lexical analyzer often serves as a tokenizer for a parser generated by a parser generator such as Bison. Compatibility. RE/flex is compatible with Flex specifications when codice_8 is used. This generates a codice_7 class with codice_19 method. RE/flex is also compatible with Bison using a range of RE/flex options for complete coverage of Bison options and features. By contrast to Flex, RE/flex scanners are thread-safe by default on work with reentrant Bison parsers. Unicode support. RE/flex supports Unicode regular expression patterns in lexer specifications and automatically tokenizes UTF-8, UTF-16, and UTF-32 input files. Code pages may be specified to tokenize input files encoded in ISO/IEC 8859 1 to 16, Windows-1250 to Windows-1258, CP-437, CP-850, CP-858, MacRoman, KOI-8, EBCDIC, and so on. Normalization to UTF-8 is automatically performed by internal incremental buffering for (partial) pattern matching with Unicode regular expression patterns. Indent, nodent, and dedent matching. RE/flex integrates indent and dedent matching directly in the regular expression syntax with new codice_27 and codice_28 anchors. These indentation anchors detect changes of line indentation in the input. This allows many practical scenarios to be covered to tokenize programming languages with indented block structures. 
For example, the following lexer specification detects and reports indentation changes: %% ^[ \t]+ std::cout « "| "; // nodent: text is aligned to current indent margin ^[ \t]*\i std::cout « "&gt; "; // indent: matched with \i ^[ \t]*\j std::cout « "&lt; "; // dedent: matched with \j \j std::cout « "&lt; "; // dedent: for each extra level dedented Lazy quantifiers. Lazy quantifiers may be associated with repeats in RE/flex regular expression patterns to simplify the expressions using non-greedy repeats, when applicable. Normally matching is "greedy", meaning that the longest pattern is matched. For example, the pattern codice_29 with the greedy codice_30 repeat matches codice_31, but also matches codice_32 because codice_33 matches any characters except newline and codice_32 is longer than codice_35. Using a lazy quantifier codice_36 for the lazy repeat codice_37, pattern codice_38 matches codice_35 but not codice_32. As a practical application of lazy quantifiers, consider matching C/C++ multiline comments of the form codice_41. The lexer specification pattern "/*"(.|\n)*?"*/" with lazy repeat codice_37 matches multiline comments. Without lazy repeats the pattern "/*"([^*]|(\*+[^*/]))*\*+"/" should be used (note that quotation of the form codice_43 is allowed in lexer specifications only, this construct is comparable to the codice_44 quotations supported by most regex libraries.) Other pattern matchers. Besides the built-in RE/flex POSIX regex pattern matcher, RE/flex also supports PCRE2, Boost.Regex and std::regex pattern matching libraries. PCRE2 and Boost.Regex offer a richer regular expression pattern syntax with Perl pattern matching semantics, but are slower due to their intrinsic NFA-based matching algorithm. Translation. Lex, Flex and RE/flex translate regular expressions to DFA, which are implemented in tables for run-time scanning. RE/flex differs from Lex and Flex in that the generated tables contain a list of opcode words executed by a virtual machine to perform pattern matching. In addition, a DFA implemented in code instead of opcode tables is generated with the codice_45 option. For example, the following direct-coded DFA for pattern codice_46 is generated with option codice_45: void reflex_code_INITIAL(reflex::Matcher&amp; m) int c0 = 0, c1 = 0; m.FSM_INIT(c1); S0: m.FSM_FIND(); c1 = m.FSM_CHAR(); if ('a' &lt;= c1 &amp;&amp; c1 &lt;= 'z') goto S5; if (c1 == '_') goto S5; if ('A' &lt;= c1 &amp;&amp; c1 &lt;= 'Z') goto S5; if ('0' &lt;= c1 &amp;&amp; c1 &lt;= '9') goto S5; return m.FSM_HALT(c1); S5: m.FSM_TAKE(1); c1 = m.FSM_CHAR(); if ('a' &lt;= c1 &amp;&amp; c1 &lt;= 'z') goto S5; if (c1 == '_') goto S5; if ('A' &lt;= c1 &amp;&amp; c1 &lt;= 'Z') goto S5; if ('0' &lt;= c1 &amp;&amp; c1 &lt;= '9') goto S5; return m.FSM_HALT(c1); A list of virtual machine opcode words for pattern codice_46 is generated with option codice_49: REFLEX_CODE_DECL reflex_code_INITIAL[11] = 0x617A0005, // 0: GOTO 5 ON 'a'-'z' 0x5F5F0005, // 1: GOTO 5 ON '_' 0x415A0005, // 2: GOTO 5 ON 'A'-'Z' 0x30390005, // 3: GOTO 5 ON '0'-'9' 0x00FFFFFF, // 4: HALT 0xFE000001, // 5: TAKE 1 0x617A0005, // 6: GOTO 5 ON 'a'-'z' 0x5F5F0005, // 7: GOTO 5 ON '_' 0x415A0005, // 8: GOTO 5 ON 'A'-'Z' 0x30390005, // 9: GOTO 5 ON '0'-'9' 0x00FFFFFF, // 10: HALT Debugging and profiling. The RE/flex built-in profiler can be used to measure the performance of the generated scanner automatically. The profiler instruments the scanner source code to collect run-time metrics. 
At run-time when the instrumented scanner terminates, the profiler reports the number of times a rule is matched and the cumulative time consumed by the matching rule. Profiling includes the time spent in the parser when the rule returns control to the parser. This allows for fine-tuning the performance of the generated scanners and parsers. Lexer rules that are hot spots, i.e. computationally expensive, are detected and can be optimized by the user in the lexer source code. Also debugging of the generated scanner is supported with Flex-compatible options. Debugging outputs annotated lexer rules during scanning. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p_i" }, { "math_id": 1, "text": "i = 1,\\ldots,n" } ]
https://en.wikipedia.org/wiki?curid=61239605
61241535
Fastran
Crack growth calculation program Fastran is a computer program for calculating the rate of fatigue crack growth by combining crack growth equations and a simulation of the plasticity at the crack tip. Fastran models accelerations and retardation and other variable amplitude loading effects in crack growth using a crack closure model. The program uses a "strip yield model" of the crack tip that was first proposed by D. S. Dugdale to calculate the size of the plastic zone ahead of a crack tip. A series of "elastic-perfectly plastic" strips (originally 30 strips were used) that model the region both ahead and behind the crack tip is used to keep track of the plasticity produced at the crack tip. As the crack grows, the strips are cut and leave a region of raised plastic material in the crack wake that prevents the complete closure of a crack. This profile of the crack is used to calculate the stress intensity factor level formula_0 at which the crack tip is fully open. The effective stress intensity factor range is then formula_1 which allows the rate of growth for the loading cycle to be obtained from the crack growth equation. The rate of crack growth is then calculated from formula_2 History. Fastran was written in the 1980s by James C. Newman while at NASA and is an acronym derived from "NASA FATIGUE CRACK GROWTH STRUCTURAL ANALYSIS". Crack closure was first observed by Wolf Elber as propping open a crack tip resulting in a reduction of the full stress intensity range or crack tip driving force. It was assumed this was due to plasticity at the crack tip preventing the fracture surfaces from fully closing. A similar program "CORPUS" was also developed around the same time by A. U. de Koning. FASTRAN is written in the Fortran programming language. Features. Geometry factors. The geometry factor formula_3 relates the far-field stresses to the region near the crack tip. Many standard geometry factors are supplied in the program. These scaling factors allow the calculation of the stress intensity factor from the applied loading sequence using formula_4 where formula_5 is the applied far field stress and formula_6 is the crack length. The loading sequence is given as a file of sequential turning points that represent the loading sequence. This in combination with a load factor is used to supply the far-field stress of the given geometry. The load sequence is converted into a series of individual load cycles by a method known as "rainflow on the fly" which is a modified form of the standard rainflow-counting algorithm. The closure model has also been used to explain the increase rate of growth seen with small cracks known as the "small crack effect". Crack growth equations. Fastran has a variety of crack growth equations built in along with piece wise linear equations that can be read from file. Theory. This model allows the calculation of the "stress ratio" formula_7 or "mean stress effect" that gives rise to the increased rate of crack growth at higher stress ratios. Experiments have shown the crack is typically open at formula_8. In addition the model is able to predict retardation due to overloads which increase the plastic material in the wake of the crack. It also explains the acceleration due to underloads where the crack growth rate increases following an underload which compresses the crack faces together and reduced the degree of interference lowering formula_0. 
The onset of plasticity is given by the "flow stress", whose value typically lies midway between the yield and ultimate stresses. The flow stress scaling parameter formula_9 is used to adjust the flow stress to the degree of restraint experienced at the crack tip. This value reflects the stress state at the crack tip and typically lies between a value of formula_10 for plane stress and formula_11 for plane strain. The parameter is also used as an adjustment variable to correct the rate of crack growth to match test data. Limitations. Plasticity will be greater in regions of plane stress, but Fastran only models the crack as a 2D cross section. Usage. Fastran has been used in the research community and for maintaining the safe life of aircraft such as the C-130 used by the USAF, RAF and RAAF. It forms a component of the crack growth program "Nasgro".
[ { "math_id": 0, "text": "K_\\text{op}" }, { "math_id": 1, "text": "\n\\Delta K_\\text{eff} = K_\\text{max} - \\text{max}(K_\\text{op}, K_\\text{min})\n" }, { "math_id": 2, "text": "\n{da \\over dN} = f(\\Delta K_\\text{eff})\n" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "\nK = \\beta \\sigma \\sqrt{\\pi a}\n" }, { "math_id": 5, "text": "\\sigma" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "R" }, { "math_id": 8, "text": "R>0.7" }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "\\pi" }, { "math_id": 11, "text": "2\\pi" } ]
https://en.wikipedia.org/wiki?curid=61241535
612479
Hydrogen line
Spectral line of hydrogen state transition in UHF radio frequencies The hydrogen line, 21 centimeter line, or H I line is a spectral line that is created by a change in the energy state of solitary, electrically neutral hydrogen atoms. It is produced by a spin-flip transition, which means the direction of the electron's spin is reversed relative to the spin of the proton. This is a quantum state change between the two hyperfine levels of the hydrogen 1 s ground state. The electromagnetic radiation producing this line has a frequency of (1.42 GHz), which is equivalent to a wavelength of in a vacuum. According to the Planck–Einstein relation "E" "hν", the photon emitted by this transition has an energy of []. The constant of proportionality, h, is known as the Planck constant. The hydrogen line frequency lies in the L band, which is located in the lower end of the microwave region of the electromagnetic spectrum. It is frequently observed in radio astronomy because those radio waves can penetrate the large clouds of interstellar cosmic dust that are opaque to visible light. The existence of this line was predicted by Dutch astronomer H. van de Hulst in 1944, then directly observed by E. M. Purcell and his student H. E. Ewen in 1951. Observations of the hydrogen line have been used to reveal the spiral shape of the Milky Way, to calculate the mass and dynamics of individual galaxies, and to test for changes to the fine-structure constant over time. It is of particular importance to cosmology because it can be used to study the early Universe. Due to its fundamental properties, this line is of interest in the search for extraterrestrial intelligence. This line is the theoretical basis of the hydrogen maser. Cause. An atom of neutral hydrogen consists of an electron bound to a proton. The lowest stationary energy state of the bound electron is called its ground state. Both the electron and the proton have intrinsic magnetic dipole moments ascribed to their spin, whose interaction results in a slight increase in energy when the spins are parallel, and a decrease when antiparallel. The fact that only parallel and antiparallel states are allowed is a result of the quantum mechanical discretization of the total angular momentum of the system. When the spins are parallel, the magnetic dipole moments are antiparallel (because the electron and proton have opposite charge), thus one would expect this configuration to actually have "lower energy" just as two magnets will align so that the north pole of one is closest to the south pole of the other. This logic fails here because the wave functions of the electron and the proton overlap; that is, the electron is not spatially displaced from the proton, but encompasses it. The magnetic dipole moments are therefore best thought of as tiny current loops. As parallel currents attract, the parallel magnetic dipole moments (i.e., antiparallel spins) have lower energy. In the ground state, the spin-flip transition between these aligned states has an energy difference of . When applied to the Planck relation, this gives: formula_0 where λ is the wavelength of an emitted photon, ν is its frequency, E is the photon energy, h is the Planck constant, and c is the speed of light. In a laboratory setting, the hydrogen line parameters have been more precisely measured as: "λ" = "ν" = in a vacuum. This transition is highly forbidden with an extremely small transition rate of , and a mean lifetime of the excited state of around 11 million years. 
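The numbers quoted above can be checked directly from the Planck–Einstein relation. The sketch below recomputes the wavelength and frequency of the line from the hyperfine splitting energy given in formula_0, using standard values for the Planck constant and the speed of light.

```python
# Recompute the 21 cm line from the hyperfine splitting energy, lambda = h*c/E.
h = 4.135667696e-15      # Planck constant in eV*s
c = 2.99792458e8         # speed of light in m/s
E = 5.87433e-6           # hydrogen ground-state hyperfine splitting in eV

wavelength = h * c / E   # metres
frequency = E / h        # hertz
print(f"{wavelength * 100:.3f} cm, {frequency / 1e6:.1f} MHz")
# ~21.106 cm and ~1420.4 MHz, matching the values quoted above
```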
Collisions of neutral hydrogen atoms with electrons or other atoms can help promote the emission of 21 cm photons. A spontaneous occurrence of the transition is unlikely to be seen in a laboratory on Earth, but it can be artificially induced through stimulated emission using a hydrogen maser. It is commonly observed in astronomical settings such as hydrogen clouds in our galaxy and others. Because of the uncertainty principle, its long lifetime gives the spectral line an extremely small natural width, so most broadening is due to Doppler shifts caused by bulk motion or nonzero temperature of the emitting regions. Discovery. During the 1930s, it was noticed that there was a radio "hiss" that varied on a daily cycle and appeared to be extraterrestrial in origin. After initial suggestions that this was due to the Sun, it was observed that the radio waves seemed to propagate from the centre of the Galaxy. These discoveries were published in 1940 and were noted by Jan Oort who knew that significant advances could be made in astronomy if there were emission lines in the radio part of the spectrum. He referred this to Hendrik van de Hulst who, in 1944, predicted that neutral hydrogen could produce radiation at a frequency of due to two closely spaced energy levels in the ground state of the hydrogen atom. The 21 cm line (1420.4 MHz) was first detected in 1951 by Ewen and Purcell at Harvard University, and published after their data was corroborated by Dutch astronomers Muller and Oort, and by Christiansen and Hindman in Australia. After 1952 the first maps of the neutral hydrogen in the Galaxy were made, and revealed for the first time the spiral structure of the Milky Way. Uses. In radio astronomy. The 21 cm spectral line appears within the radio spectrum (in the L band of the UHF band of the microwave window to be exact). Electromagnetic energy in this range can easily pass through the Earth's atmosphere and be observed from the Earth with little interference. The hydrogen line can readily penetrate clouds of interstellar cosmic dust that are opaque to visible light. Assuming that the hydrogen atoms are uniformly distributed throughout the galaxy, each line of sight through the galaxy will reveal a hydrogen line. The only difference between each of these lines is the Doppler shift that each of these lines has. Hence, by assuming circular motion, one can calculate the relative speed of each arm of our galaxy. The rotation curve of our galaxy has been calculated using the hydrogen line. It is then possible to use the plot of the rotation curve and the velocity to determine the distance to a certain point within the galaxy. However, a limitation of this method is that departures from circular motion are observed at various scales. Hydrogen line observations have been used indirectly to calculate the mass of galaxies, to put limits on any changes over time of the fine-structure constant, and to study the dynamics of individual galaxies. The magnetic field strength of interstellar space can be measured by observing the Zeeman effect on the 21-cm line; a task that was first accomplished by G. L. Verschuur in 1968. In theory, it may be possible to search for antihydrogen atoms by measuring the polarization of the 21-cm line in an external magnetic field. Deuterium has a similar hyperfine spectral line at 91.6 cm (327 MHz), and the relative strength of the 21 cm line to the 91.6 cm line can be used to measure the deuterium-to-hydrogen (D/H) ratio. 
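As a rough illustration of how Doppler shifts of the line map gas motions along a line of sight, the sketch below converts an observed line frequency into a radial velocity using the non-relativistic Doppler formula. The observed frequency is an invented example value, not a measurement.

```python
# Radial velocity of hydrogen gas from the Doppler shift of the 21 cm line.
c = 299_792.458          # speed of light in km/s
f_rest = 1420.405752     # MHz, rest frequency of the hydrogen line
f_obs = 1420.0           # MHz, hypothetical observed frequency

v_radial = c * (f_rest - f_obs) / f_rest   # positive = receding (redshifted)
print(f"{v_radial:.1f} km/s")              # ~85.6 km/s away from the observer
```

Repeating this for each velocity component seen along a line of sight, and assuming circular motion, gives the rotation-curve construction described above.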
One group in 2007 reported D/H ratio in the galactic anticenter to be 21 ± 7 parts per million. In cosmology. The line is of great interest in Big Bang cosmology because it is the only known way to probe the cosmological "dark ages" from recombination (when stable hydrogen atoms first formed) to reionization. Including the redshift, this line will be observed at frequencies from 200 MHz to about 15 MHz on Earth. It potentially has two applications. First, by mapping the intensity of redshifted 21 centimeter radiation it can, in principle, provide a very precise picture of the matter power spectrum in the period after recombination. Second, it can provide a picture of how the universe was re‑ionized, as neutral hydrogen which has been ionized by radiation from stars or quasars will appear as holes in the 21 cm background. However, 21 cm observations are very difficult to make. Ground-based experiments to observe the faint signal are plagued by interference from television transmitters and the ionosphere, so they must be made from very secluded sites with care taken to eliminate interference. Space based experiments, even on the far side of the Moon (where they would be sheltered from interference from terrestrial radio signals), have been proposed to compensate for this. Little is known about other foreground effects, such as synchrotron emission and free–free emission on the galaxy. Despite these problems, 21 cm observations, along with space-based gravitational wave observations, are generally viewed as the next great frontier in observational cosmology, after the cosmic microwave background polarization. Relevance to the search for non-human intelligent life. The Pioneer plaque, attached to the Pioneer 10 and Pioneer 11 spacecraft, portrays the hyperfine transition of neutral hydrogen and used the wavelength as a standard scale of measurement. For example, the height of the woman in the image is displayed as eight times 21 cm, or 168 cm. Similarly the frequency of the hydrogen spin-flip transition was used for a unit of time in a map to Earth included on the Pioneer plaques and also the Voyager 1 and Voyager 2 probes. On this map, the position of the Sun is portrayed relative to 14 pulsars whose rotation period circa 1977 is given as a multiple of the frequency of the hydrogen spin-flip transition. It is theorized by the plaque's creators that an advanced civilization would then be able to use the locations of these pulsars to locate the Solar System at the time the spacecraft were launched. The 21 cm hydrogen line is considered a favorable frequency by the SETI program in their search for signals from potential extraterrestrial civilizations. In 1959, Italian physicist Giuseppe Cocconi and American physicist Philip Morrison published "Searching for interstellar communications", a paper proposing the 21 cm hydrogen line and the potential of microwaves in the search for interstellar communications. According to George Basalla, the paper by Cocconi and Morrison "provided a reasonable theoretical basis" for the then-nascent SETI program. Similarly, proposed SETI use a frequency which is equal to either 0π × ≈ or 2π × ≈ Since π is an irrational number, such a frequency could not possibly be produced in a natural way as a harmonic, and would clearly signify its artificial origin. Such a signal would not be overwhelmed by the H I line itself, or by any of its harmonics. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Footnotes. 
References. Further reading. Cosmology.
[ { "math_id": 0, "text": "\\lambda = \\frac {1}{\\nu} \\cdot c = \\frac {h}{E} \\cdot c \\approx \\frac{\\; 4.135\\,67 \\cdot 10^{-15} \\ \\mathrm{eV}\\cdot\\text{s} \\;}{5.874\\,33 \\cdot 10^{-6}\\ \\mathrm{eV}}\\, \\cdot\\, 2.997\\,92 \\cdot 10^8 \\ \\mathrm{m} \\cdot \\mathrm{s}^{-1} \\approx 0.211\\,06\\ \\mathrm{m} = 21.106\\ \\mathrm{cm}\\; " } ]
https://en.wikipedia.org/wiki?curid=612479
6125
Carl Friedrich Gauss
German mathematician, astronomer, geodesist, and physicist (1777–1855) Johann Carl Friedrich Gauss ( ; ; 30 April 1777 – 23 February 1855) was a German mathematician, astronomer, geodesist, and physicist who contributed to many fields in mathematics and science. He was director of the Göttingen Observatory and professor of astronomy from 1807 until his death in 1855. While studying at the University of Göttingen, he propounded several mathematical theorems. Gauss completed his masterpieces "Disquisitiones Arithmeticae" and "Theoria motus corporum coelestium" as a private scholar. He gave the second and third complete proofs of the fundamental theorem of algebra, made contributions to number theory, and developed the theories of binary and ternary quadratic forms. Gauss was instrumental in the identification of Ceres as a dwarf planet. His work on the motion of planetoids disturbed by large planets led to the introduction of the Gaussian gravitational constant and the method of least squares, which he had discovered before Adrien-Marie Legendre published it. Gauss was in charge of the extensive geodetic survey of the Kingdom of Hanover together with an arc measurement project from 1820 to 1844; he was one of the founders of geophysics and formulated the fundamental principles of magnetism. Fruits of his practical work were the inventions of the heliotrope in 1821, a magnetometer in 1833 and – alongside Wilhelm Eduard Weber – the first electromagnetic telegraph in 1833. Gauss refused to publish incomplete work and left several works to be edited posthumously. He believed that the act of learning, not possession of knowledge, provided the greatest enjoyment. Gauss confessed to disliking teaching, but some of his students became influential mathematicians. Biography. Youth and education. Johann Carl Friedrich Gauss was born on 30 April 1777 in Brunswick (Braunschweig) in the Duchy of Brunswick-Wolfenbüttel (now part of Germany's federal state Lower Saxony), to a family of lower social status. His father Gebhard Dietrich Gauss worked in several jobs, as butcher, bricklayer, gardener, and as treasurer of a death-benefit fund. Gauss characterized his father as honourable and respected, but rough and dominating at home. He was experienced in writing and calculating, whereas his second wife Dorothea, Carl Friedrich's mother, was nearly illiterate. He had one elder brother from his father's first marriage. Gauss was a child prodigy in mathematics. When the elementary teachers noticed his intellectual abilities, they brought him to the attention of the Duke of Brunswick who sent him to the local "Collegium Carolinum", which he attended from 1792 to 1795 with Eberhard August Wilhelm von Zimmermann as one of his teachers. Thereafter the Duke granted him the resources for studies of mathematics, sciences, and classical languages at the University of Göttingen until 1798. His professor in mathematics was Abraham Gotthelf Kästner, whom Gauss called "the leading mathematician among poets, and the leading poet among mathematicians" because of his epigrams. Astronomy was taught by Karl Felix Seyffer, with whom Gauss stayed in correspondence after graduation; Olbers and Gauss mocked him in their correspondence. On the other hand, he thought highly of Georg Christoph Lichtenberg, his teacher of physics, and of Christian Gottlob Heyne, whose lectures in classics Gauss attended with pleasure. Fellow students of this time were Johann Friedrich Benzenberg, Farkas Bolyai, and Heinrich Wilhelm Brandes. 
He was likely a self-taught student in mathematics since he independently rediscovered several theorems. He solved a geometrical problem that had occupied mathematicians since the Ancient Greeks, when he determined in 1796 which regular polygons can be constructed by compass and straightedge. This discovery ultimately led Gauss to choose mathematics instead of philology as a career. Gauss's mathematical diary, a collection of short remarks about his results from the years 1796 until 1814, shows that many ideas for his mathematical magnum opus "Disquisitiones Arithmeticae" (1801) date from this time. Private scholar. Gauss graduated as a Doctor of Philosophy in 1799, not in Göttingen, as is sometimes stated, but at the Duke of Brunswick's special request from the University of Helmstedt, the only state university of the duchy. Johann Friedrich Pfaff assessed his doctoral thesis, and Gauss got the degree "in absentia" without further oral examination. The Duke then granted him the cost of living as a private scholar in Brunswick. Gauss subsequently refused calls from the Russian Academy of Sciences in St. Petersburg and Landshut University. Later, the Duke promised him the foundation of an observatory in Brunswick in 1804. Architect Peter Joseph Krahe made preliminary designs, but one of Napoleon's wars cancelled those plans: the Duke was killed in the Battle of Jena in 1806. The duchy was abolished in the following year, and Gauss's financial support stopped. When Gauss was calculating asteroid orbits in the first years of the century, he established contact with the astronomical community of Bremen and Lilienthal, especially Wilhelm Olbers, Karl Ludwig Harding, and Friedrich Wilhelm Bessel, as part of the informal group of astronomers known as the Celestial police. One of their aims was the discovery of further planets. They assembled data on asteroids and comets as a basis for Gauss's research on their orbits, which he later published in his astronomical magnum opus "Theoria motus corporum coelestium" (1809). Professor in Göttingen. In November 1807, Gauss followed a call to the University of Göttingen, then an institution of the newly founded Kingdom of Westphalia under Jérôme Bonaparte, as full professor and director of the astronomical observatory, and kept the chair until his death in 1855. He was soon confronted with the demand for two thousand francs from the Westphalian government as a war contribution, which he could not afford to pay. Both Olbers and Laplace wanted to help him with the payment, but Gauss refused their assistance. Finally, an anonymous person from Frankfurt, later discovered to be Prince-primate Dalberg, paid the sum. Gauss took on the directorate of the 60-year-old observatory, founded in 1748 by Prince-elector George II and built on a converted fortification tower, with usable, but partly out-of-date instruments. The construction of a new observatory had been approved by Prince-elector George III in principle since 1802, and the Westphalian government continued the planning, but Gauss could not move to his new place of work until September 1816. He got new up-to-date instruments, including two meridian circles from Repsold and Reichenbach, and a heliometer from Fraunhofer. The scientific activity of Gauss, besides pure mathematics, can be roughly divided into three periods: astronomy was the main focus in the first two decades of the 19th century, geodesy in the third decade, and physics, mainly magnetism, in the fourth decade. 
Gauss made no secret of his aversion to giving academic lectures. But from the start of his academic career at Göttingen, he continuously gave lectures until 1854. He often complained about the burdens of teaching, feeling that it was a waste of his time. On the other hand, he occasionally described some students as talented. Most of his lectures dealt with astronomy, geodesy, and applied mathematics, and only three lectures on subjects of pure mathematics. Some of Gauss's students went on to become renowned mathematicians, physicists, and astronomers: Moritz Cantor, Dedekind, Dirksen, Encke, Gould, Heine, Klinkerfues, Kupffer, Listing, Möbius, Nicolai, Riemann, Ritter, Schering, Scherk, Schumacher, von Staudt, Stern, Ursin; as geoscientists Sartorius von Waltershausen, and Wappäus. Gauss did not write any textbook and disliked the popularization of scientific matters. His only attempts at popularization were his works on the date of Easter (1800/1802) and the essay "Erdmagnetismus und Magnetometer" of 1836. Gauss published his papers and books exclusively in Latin or German. He wrote Latin in a classical style but used some customary modifications set by contemporary mathematicians. In his inaugural lecture at Göttingen University from 1808, Gauss claimed reliable observations and results attained only by a strong calculus as the sole tasks of astronomy. At university, he was accompanied by a staff of other lecturers in his disciplines, who completed the educational program; these included the mathematician Thibaut with his lectures, the physicist Mayer, known for his textbooks, his successor Weber since 1831, and in the observatory Harding, who took the main part of lectures in practical astronomy. When the observatory was completed, Gauss took his living accommodation in the western wing of the new observatory and Harding in the eastern one. They had once been on friendly terms, but over time they became alienated, possibly – as some biographers presume – because Gauss had wished the equal-ranked Harding to be no more than his assistant or observer. Gauss used the new meridian circles nearly exclusively, and kept them away from Harding, except for some very seldom joint observations. Brendel subdivides Gauss's astronomic activity chronologically into seven periods, of which the years since 1820 are taken as a "period of lower astronomical activity". The new, well-equipped observatory did not work as effectively as other ones; Gauss's astronomical research had the character of a one-man enterprise without a long-time observation program, and the university established a place for an assistant only after Harding died in 1834. Nevertheless, Gauss twice refused the opportunity to solve the problem by accepting offers from Berlin in 1810 and 1825 to become a full member of the Prussian Academy without burdening lecturing duties, as well as from Leipzig University in 1810 and from Vienna University in 1842, perhaps because of the family's difficult situation. Gauss's salary was raised from 1000 Reichsthaler in 1810 to 2400 Reichsthaler in 1824, and in his later years he was one of the best-paid professors of the university. When Gauss was asked for help by his colleague and friend Friedrich Wilhelm Bessel in 1810, who was in trouble at Königsberg University because of his lack of an academic title, Gauss provided a doctorate "honoris causa" for Bessel from the Philosophy Faculty of Göttingen in March 1811. 
Gauss gave another recommendation for an honorary degree for Sophie Germain but only shortly before her death, so she never received it. He also gave successful support to the mathematician Gotthold Eisenstein in Berlin. Gauss was loyal to the House of Hanover. After King William IV died in 1837, the new Hanoverian King Ernest Augustus annulled the 1833 constitution. Seven professors, later known as the "Göttingen Seven", protested against this, among them his friend and collaborator Wilhelm Weber and Gauss's son-in-law Heinrich Ewald. All of them were dismissed, and three of them were expelled, but Ewald and Weber could stay in Göttingen. Gauss was deeply affected by this quarrel but saw no possibility to help them. Gauss took part in academic administration: three times he was elected as dean of the Faculty of Philosophy. Being entrusted with the widow's pension fund of the university, he dealt with actuarial science and wrote a report on the strategy for stabilizing the benefits. He was appointed director of the Royal Academy of Sciences in Göttingen for nine years. Gauss remained mentally active into his old age, even while suffering from gout and general unhappiness. On 23 February 1855, he died of a heart attack in Göttingen; and was interred in the Albani Cemetery there. Heinrich Ewald, Gauss's son-in-law, and Wolfgang Sartorius von Waltershausen, Gauss's close friend and biographer, gave eulogies at his funeral. Gauss was a successful investor and accumulated considerable wealth with stocks and securities, finally a value of more than 150 thousand Thaler; after his death, about 18 thousand Thaler were found hidden in his rooms. Gauss's brain. The day after Gauss's death his brain was removed, preserved, and studied by Rudolf Wagner, who found its mass to be slightly above average, at . Wagner's son Hermann, a geographer, estimated the cerebral area to be in his doctoral thesis. In 2013, a neurobiologist at the Max Planck Institute for Biophysical Chemistry in Göttingen discovered that Gauss's brain had been mixed up soon after the first investigations, due to mislabelling, with that of the physician Conrad Heinrich Fuchs, who died in Göttingen a few months after Gauss. A further investigation showed no remarkable anomalies in the brains of both persons. Thus, all investigations on Gauss's brain until 1998, except the first ones of Rudolf and Hermann Wagner, actually refer to the brain of Fuchs. Family. Gauss married Johanna Osthoff on 9 October 1805 in St. Catherine's church in Brunswick. They had two sons and one daughter: Joseph (1806–1873), Wilhelmina (1808–1840), and Louis (1809–1810). Johanna died on 11 October 1809, one month after the birth of Louis, who himself died a few months later. Gauss chose the first names of his children in honour of Giuseppe Piazzi, Wilhelm Olbers, and Karl Ludwig Harding, the discoverers of the first asteroids. On 4 August 1810, the widower married Wilhelmine (Minna) Waldeck, a friend of his first wife, with whom he had three more children: Eugen (later Eugene) (1811–1896), Wilhelm (later William) (1813–1879), and Therese (1816–1864). Minna Gauss died on 12 September 1831 after being seriously ill for more than a decade. Therese then took over the household and cared for Gauss for the rest of his life; after her father's death, she married actor Constantin Staufenau. Her sister Wilhelmina married the orientalist Heinrich Ewald. Gauss's mother Dorothea lived in his house from 1817 until she died in 1839. 
The eldest son Joseph, while still a schoolboy, helped his father as an assistant during the survey campaign in the summer of 1821. After a short time at university, in 1824 Joseph joined the Hanoverian army and assisted in surveying again in 1829. In the 1830s he was responsible for the enlargement of the survey network to the western parts of the kingdom. With his geodetical qualifications, he left the service and engaged in the construction of the railway network as director of the Royal Hanoverian State Railways. In 1836 he studied the railroad system in the US for some months. Eugen left Göttingen in September 1830 and emigrated to the United States, where he joined the army for five years. He then worked for the American Fur Company in the Midwest. Later, he moved to Missouri and became a successful businessman. Wilhelm married a niece of the astronomer Bessel; he then moved to Missouri, started as a farmer and became wealthy in the shoe business in St. Louis in later years. Eugene and William have numerous descendants in America, but the Gauss descendants left in Germany all derive from Joseph, as the daughters had no children. Personality. Scholar. In the first two decades of the 19th century, Gauss was the only important mathematician in Germany, comparable to the leading French ones; his "Disquisitiones Arithmeticae" was the first mathematical book from Germany to be translated into the French language. Gauss was "in front of the new development" with documented research since 1799, his wealth of new ideas, and his rigour of demonstration. Whereas previous mathematicians like Leonhard Euler let the readers take part in their reasoning for new ideas, including certain erroneous deviations from the correct path, Gauss however introduced a new style of direct and complete explanation that did not attempt to show the reader the author's train of thought. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Gauss was the first to restore that "rigor" of demonstration which we admire in the ancients and which had been forced unduly into the background by the exclusive interest of the preceding period in "new" developments. But for himself, he propagated a quite different ideal, given in a letter to Farkas Bolyai as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment. When I have clarified and exhausted a subject, then I turn away from it, in order to go into darkness again. The posthumous papers, his scientific diary, and short glosses in his own textbooks show that he worked to a great extent in an empirical way. He was a lifelong busy and enthusiastic calculator, who made his calculations with extraordinary rapidity, mostly without precise controlling, but checked the results by masterly estimation. Nevertheless, his calculations were not always free from mistakes. He coped with the enormous workload by using skillful tools. Gauss used a lot of mathematical tables, examined their exactness, and constructed new tables on various matters for personal use. He developed new tools for effective calculation, for example the Gaussian elimination. It has been taken as a curious feature of his working style that he carried out calculations with a high degree of precision much more than required, and prepared tables with more decimal places than ever requested for practical purposes. 
Very likely, this method gave him a lot of material which he used in finding theorems in number theory. Gauss refused to publish work that he did not consider complete and above criticism. This perfectionism was in keeping with the motto of his personal seal ("Few, but Ripe"). Many colleagues encouraged him to publicize new ideas and sometimes rebuked him if he hesitated too long, in their opinion. Gauss defended himself, claiming that the initial discovery of ideas was easy, but preparing a presentable elaboration was a demanding matter for him, for either lack of time or "serenity of mind". Nevertheless, he published many short communications of urgent content in various journals, but left a considerable literary estate, too. Gauss referred to mathematics as "the queen of sciences" and arithmetics as "the queen of mathematics", and supposedly once espoused a belief in the necessity of immediately understanding Euler's identity as a benchmark pursuant to becoming a first-class mathematician. On certain occasions, Gauss claimed that the ideas of another scholar had already been in his possession previously. Thus his concept of priority as "the first to discover, not the first to publish" differed from that of his scientific contemporaries. In contrast to his perfectionism in presenting mathematical ideas, he was criticized for a negligent way of quoting. He justified himself with a very special view of correct quoting: if he gave references, then only in a quite complete way, with respect to the previous authors of importance, which no one should ignore; but quoting in this way needed knowledge of the history of science and more time than he wished to spend. Private man. Soon after Gauss's death, his friend Sartorius published the first biography (1856), written in a rather enthusiastic style. Sartorius saw him as a serene and forward-striving man with childlike modesty, but also of "iron character" with an unshakeable strength of mind. Apart from his closer circle, others regarded him as reserved and unapproachable "like an Olympian sitting enthroned on the summit of science". His close contemporaries agreed that Gauss was a man of difficult character. He often refused to accept compliments. His visitors were occasionally irritated by his grumpy behaviour, but a short time later his mood could change, and he would become a charming, open-minded host. Gauss abominated polemic natures; together with his colleague Hausmann he opposed to a call for Justus Liebig on a university chair in Göttingen, "because he was always involved in some polemic." Gauss's life was overshadowed by severe problems in his family. When his first wife Johanna suddenly died shortly after the birth of their third child, he revealed the grief in a last letter to his dead wife in the style of an ancient threnody, the most personal surviving document of Gauss. The situation worsened when tuberculosis ultimately destroyed the health of his second wife Minna over 13 years; both his daughters later suffered from the same disease. Gauss himself gave only slight hints of his distress: in a letter to Bessel dated December 1831 he described himself as "the victim of the worst domestic sufferings". By reason of his wife's illness, both younger sons were educated for some years in Celle, far from Göttingen. The military career of his elder son Joseph ended after more than two decades with the rank of a poorly paid first lieutenant, although he had acquired a considerable knowledge of geodesy. 
He needed financial support from his father even after he was married. The second son Eugen shared a good measure of his father's talent in computation and languages, but had a vivacious and sometimes rebellious character. He wanted to study philology, whereas Gauss wanted him to become a lawyer. Having run up debts and caused a scandal in public, Eugen suddenly left Göttingen under dramatic circumstances in September 1830 and emigrated via Bremen to the United States. He wasted the little money he had taken to start, after which his father refused further financial support. The youngest son Wilhelm wanted to qualify for agricultural administration, but had difficulties getting an appropriate education, and eventually emigrated as well. Only Gauss's youngest daughter Therese accompanied him in his last years of life. Collecting numerical data on very different things, useful or useless, became a habit in his later years, for example, the number of paths from his home to certain places in Göttingen, or the number of living days of persons; he congratulated Humboldt in December 1851 for having reached the same age as Isaac Newton at his death, calculated in days. Similar to his excellent knowledge of Latin he was also acquainted with modern languages. At the age of 62, he began to teach himself Russian, very likely to understand scientific writings from Russia, among them those of Lobachevsky on non-Euclidean geometry. Gauss read both classical and modern literature, and English and French works in the original languages. His favorite English author was Walter Scott, his favorite German Jean Paul. Gauss liked singing and went to concerts. He was a busy newspaper reader; in his last years, he used to visit an academic press salon of the university every noon. Gauss did not care much for philosophy, and mocked the "splitting hairs of the so-called metaphysicians", by which he meant proponents of the contemporary school of "Naturphilosophie". Gauss had an "aristocratic and through and through conservative nature", with little respect for people's intelligence and morals, following the motto "mundus vult decipi". He disliked Napoleon and his system, and all kinds of violence and revolution caused horror to him. Thus he condemned the methods of the Revolutions of 1848, though he agreed with some of their aims, such as the idea of a unified Germany. As far as the political system is concerned, he had a low estimation of the constitutional system; he criticized parliamentarians of his time for a lack of knowledge and logical errors. Some Gauss biographers have speculated on his religious beliefs. He sometimes said "God arithmetizes" and "I succeeded – not on account of my hard efforts, but by the grace of the Lord." Gauss was a member of the Lutheran church, like most of the population in northern Germany. It seems that he did not believe all dogmas or understand the Holy Bible quite literally. Sartorius mentioned Gauss's religious tolerance, and estimated his "insatiable thirst for truth" and his sense of justice as motivated by religious convictions. Scientific work. Algebra and number theory. Fundamental theorem of algebra. In his doctoral thesis from 1799, Gauss proved the fundamental theorem of algebra which states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. Mathematicians including Jean le Rond d'Alembert had produced false proofs before him, and Gauss's dissertation contains a critique of d'Alembert's work. 
He subsequently produced three other proofs, the last one in 1849 being generally rigorous. His attempts clarified the concept of complex numbers considerably along the way. "Disquisitiones Arithmeticae". In the preface to the "Disquisitiones", Gauss dates the beginning of his work on number theory to 1795. By studying the works of previous mathematicians like Fermat, Euler, Lagrange, and Legendre, he realized that these scholars had already found much of what he had discovered by himself. The "Disquisitiones Arithmeticae", written from 1798 onwards and published in 1801, consolidated number theory as a discipline and covered both elementary and algebraic number theory. Therein he introduces the triple bar symbol (≡) for congruence and uses it for a clean presentation of modular arithmetic. It deals with the unique factorization theorem and primitive roots modulo n. In the main chapters, Gauss presents the first two proofs of the law of quadratic reciprocity and develops the theories of binary and ternary quadratic forms. The "Disquisitiones" include the Gauss composition law for binary quadratic forms, as well as the enumeration of the number of representations of an integer as the sum of three squares. As an almost immediate corollary of his theorem on three squares, he proves the triangular case of the Fermat polygonal number theorem for "n" = 3. From several analytic results on class numbers that Gauss gives without proof towards the end of the fifth chapter, it appears that Gauss already knew the class number formula in 1801. In the last chapter, Gauss gives a proof of the constructibility of a regular heptadecagon (17-sided polygon) with straightedge and compass by reducing this geometrical problem to an algebraic one. He shows that a regular polygon is constructible if the number of its sides is either a power of 2 or the product of a power of 2 and any number of distinct Fermat primes. In the same chapter, he gives a result on the number of solutions of certain cubic polynomials with coefficients in finite fields, which amounts to counting integral points on an elliptic curve. An unfinished eighth chapter, consisting of work done during 1797–1799, was found among his posthumous papers. Further investigations. One of Gauss's first results was the empirically found conjecture of 1792 – later called the prime number theorem – giving an estimate of the number of prime numbers by means of the integral logarithm. When Olbers encouraged Gauss in 1816 to compete for a prize from the French Academy for a proof of Fermat's Last Theorem (FLT), he refused because he held the matter in low esteem. However, a short undated paper with proofs of FLT for the cases "n" = 3 and "n" = 5 was found among his posthumous papers. The particular case of "n" = 3 was proved much earlier by Leonhard Euler, but Gauss developed a more streamlined proof which made use of Eisenstein integers; though more general, the proof was simpler than in the real integers case. Gauss contributed to solving the Kepler conjecture in 1831 with the proof that the greatest packing density of spheres in three-dimensional space is achieved when the centers of the spheres form a cubic face-centered arrangement, when he reviewed a book of Ludwig August Seeber on the theory of reduction of positive ternary quadratic forms. Having noticed some gaps in Seeber's proof, he simplified many of his arguments, proved the central conjecture, and remarked that this theorem is equivalent to the Kepler conjecture for regular arrangements. 
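Gauss's constructibility criterion stated above lends itself to a direct check. The sketch below tests small side counts against the condition "a power of 2 times a product of distinct Fermat primes", using the five Fermat primes known today; it is a modern illustration of the theorem, not Gauss's own procedure.

```python
# A regular n-gon (n >= 3) is constructible with compass and straightedge iff
# n is a power of two times a product of distinct Fermat primes.
FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # the only Fermat primes known


def constructible(n: int) -> bool:
    while n % 2 == 0:            # strip the power-of-two factor
        n //= 2
    for p in FERMAT_PRIMES:      # each Fermat prime may divide n at most once
        if n % p == 0:
            n //= p
    return n == 1                # anything left means n fails the criterion


print([n for n in range(3, 30) if constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24]
```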
In two papers on biquadratic residues (1828, 1832) Gauss introduced the ring of Gaussian integers formula_0, showed that it is a unique factorization domain, and generalized some key arithmetic concepts, such as Fermat's little theorem and Gauss's lemma. The main objective of introducing this ring was to formulate the law of biquadratic reciprocity – as Gauss discovered, rings of complex integers are the natural setting for such higher reciprocity laws. In the second paper, he stated the general law of biquadratic reciprocity and proved several special cases of it. In an earlier publication from 1818 containing his fifth and sixth proofs of quadratic reciprocity, he claimed the techniques of these proofs (Gauss sums) can be applied to prove higher reciprocity laws. Analysis. One of Gauss's first discoveries was the notion of the arithmetic-geometric mean (AGM) of two positive real numbers. He discovered its relation to elliptic integrals in the years 1798–1799 through Landen's transformation, and a diary entry recorded the discovery of the connection of Gauss's constant to lemniscatic elliptic functions, a result that Gauss stated "will surely open an entirely new field of analysis". He also made early inroads into the more formal issues of the foundations of complex analysis, and from a letter to Bessel in 1811 it is clear that he knew the "fundamental theorem of complex analysis" – Cauchy's integral theorem – and understood the notion of complex residues when integrating around poles. Euler's pentagonal number theorem, together with other research on the AGM and lemniscatic functions, led him to a wealth of results on Jacobi theta functions, culminating in the discovery in 1808 of what was later called the Jacobi triple product identity, which includes Euler's theorem as a special case. His works show that he had known modular transformations of order 3, 5, and 7 for elliptic functions since 1808. Several mathematical fragments in his Nachlass indicate that he knew parts of the modern theory of modular forms. In his work on the multivalued AGM of two complex numbers, he discovered a deep connection between the infinitely many values of the AGM and its two "simplest values". In his unpublished writings he recognized and made a sketch of the key concept of a fundamental domain for the modular group. One of Gauss's sketches of this kind was a drawing of a tessellation of the unit disk by "equilateral" hyperbolic triangles with all angles equal to formula_1. An example of Gauss's insight in the field of analysis is the cryptic remark that the principles of circle division by compass and straightedge can also be applied to the division of the lemniscate curve, which inspired Abel's theorem on lemniscate division. Another example is his publication "Summatio quarundam serierum singularium" (1811) on the determination of the sign of the quadratic Gauss sum, in which he solved the main problem by introducing q-analogs of binomial coefficients and manipulating them by several original identities that seem to stem from his work on elliptic function theory; however, Gauss cast his argument in a formal way that does not reveal its origin in elliptic function theory, and only the later work of mathematicians such as Jacobi and Hermite has exposed the crux of his argument. In the "Disquisitiones generales circa series infinitam..." 
(1813), he provides the first systematic treatment of the general hypergeometric function formula_2, and shows that many of the functions known at the time are special cases of the hypergeometric function. This work is the first one with an exact inquiry of convergence of infinite series in the history of mathematics. Furthermore, it deals with infinite continued fractions arising as ratios of hypergeometric functions which are now called Gauss continued fractions. In 1823, Gauss won the prize of the Danish Society with an essay on conformal mappings, which contains several developments that pertain to the field of complex analysis. Gauss stated that angle-preserving mappings in the complex plane must be complex analytic functions, and used the later called Beltrami equation to prove the existence of isothermal coordinates on analytic surfaces. The essay concludes with examples of conformal mappings into a sphere and an ellipsoid of revolution. Numeric analysis. Gauss often deduced theorems inductively from numerical data he had collected empirically. As such, the use of efficient algorithms to facilitate calculations was vital to his research, and he made many contributions to numeric analysis, as the method of Gaussian quadrature published in 1816. In a private letter to Gerling from 1823, he described a solution of a 4X4 system of linear equations by using Gauss-Seidel method – an "indirect" iterative method for the solution of linear systems, and recommended it over the usual method of "direct elimination" for systems of more than two equations. Gauss invented an algorithm for calculating what is now called discrete Fourier transforms, when calculating the orbits of Pallas and Juno in 1805, 160 years before Cooley and Tukey found their similar Cooley–Tukey FFT algorithm. He developed it as a trigonometric interpolation method, but the paper "Theoria Interpolationis Methodo Nova Tractata" was published only posthumously in 1876, preceded by the first presentation by Joseph Fourier on the subject in 1807. Chronology. The first publication following the doctoral thesis dealt with the determination of the date of Easter (1800), an elementary matter of mathematics. Gauss aimed to present a most convenient algorithm for people without any knowledge of ecclesiastical or even astronomical chronology, and thus avoided the usually required terms of golden number, epact, solar cycle, domenical letter, and any religious connotations. Biographers speculated on the reason why Gauss dealt with this matter, but it is likely comprehensible by the historical background. The replacement of the Julian calendar by the Gregorian calendar had caused confusion in the Holy Roman Empire since the 16th century, and was not finished in Germany until 1700, when the difference of eleven days was deleted, but the difference in calculating the date of Easter remained between Protestant and Catholic territories. A further agreement of 1776 equalized the confessional way of counting; thus in the Protestant states like the Duchy of Brunswick the Easter of 1777, five weeks before Gauss's birth, was the first one calculated in the new manner. The public difficulties of replacement may be the historical background for the confusion on this matter in the Gauss family (see chapter: Anecdotes). For being connected with the Easter regulations, an essay on the date of Pesach followed soon in 1802. Astronomy. 
On 1 January 1801, Italian astronomer Giuseppe Piazzi discovered a new celestial object, presumed it to be the long searched planet between Mars and Jupiter according to the so-called Titius–Bode law, and named it Ceres. He could track it only for a short time until it disappeared behind the glare of the Sun. The mathematical tools of the time were not sufficient to extrapolate a position from the few data for its reappearance. Gauss tackled the problem and predicted a position for possible rediscovery in December 1801. This turned out to be accurate within a half-degree when Franz Xaver von Zach on 7 and 31 December at Gotha, and independently Heinrich Olbers on 1 and 2 January in Bremen, identified the object near the predicted position. Gauss's method leads to an equation of the eighth degree, of which one solution, the Earth's orbit, is known. The solution sought is then separated from the remaining six based on physical conditions. In this work, Gauss used comprehensive approximation methods which he created for that purpose. The discovery of Ceres led Gauss to the theory of the motion of planetoids disturbed by large planets, eventually published in 1809 as "Theoria motus corporum coelestium in sectionibus conicis solem ambientum". It introduced the Gaussian gravitational constant. Since the new asteroids had been discovered, Gauss occupied himself with the perturbations of their orbital elements. Firstly he examined Ceres with analytical methods similar to those of Laplace, but his favorite object was Pallas, because of its great eccentricity and orbital inclination, whereby Laplace's method did not work. Gauss used his own tools: the arithmetic–geometric mean, the hypergeometric function, and his method of interpolation. He found an orbital resonance with Jupiter in proportion 18:7 in 1812; Gauss gave this result as cipher, and gave the explicit meaning only in letters to Olbers and Bessel. After long years of work, he finished it in 1816 without a result that seemed sufficient to him. This marked the end of his activities in theoretical astronomy. One fruit of Gauss's research on Pallas perturbations was the "Determinatio Attractionis..." (1818) on a method of theoretical astronomy that later became known as the "elliptic ring method". It introduced an averaging conception in which a planet in orbit is replaced by a fictitious ring with mass density proportional to the time taking the planet to follow the corresponding orbital arcs. Gauss presents the method of evaluating the gravitational attraction of such an elliptic ring, which includes several steps; one of them involves a direct application of the arithmetic-geometric mean (AGM) algorithm to calculate an elliptic integral. While Gauss's contributions to theoretical astronomy came to an end, more practical activities in observational astronomy continued and occupied him during his entire career. Even early in 1799, Gauss dealt with the determination of longitude by use of the lunar parallax, for which he developed more convenient formulas than those were in common use. After appointment as director of observatory he attached importance to the fundamental astronomical constants in correspondence with Bessel. Gauss himself provided tables for nutation and aberration, the solar coordinates, and refraction. He made many contributions to spherical geometry, and in this context solved some practical problems about navigation by stars. 
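The use of the arithmetic–geometric mean to evaluate an elliptic integral, as in the elliptic ring method mentioned above, can be illustrated with Gauss's identity relating the AGM to the complete elliptic integral of the first kind. The sketch below is a modern restatement for illustration, not Gauss's own computation.

```python
import math


def agm(a: float, b: float, tol: float = 1e-15) -> float:
    """Gauss's arithmetic-geometric mean of two positive numbers."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a


def elliptic_k(k: float) -> float:
    """Complete elliptic integral of the first kind via Gauss's identity:
    K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))


print(elliptic_k(0.5))               # ~1.6858
print(1.0 / agm(1.0, math.sqrt(2)))  # Gauss's constant, ~0.8346
```

The AGM iteration converges quadratically, which is why it is such an efficient route to elliptic integrals.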
He published a great number of observations, mainly on minor planets and comets; his last observation was the solar eclipse of 28 July 1851. Theory of errors. Gauss likely used the method of least squares for calculating the orbit of Ceres to minimize the impact of measurement error. The method was published first by Adrien-Marie Legendre in 1805, but Gauss claimed in "Theoria motus" (1809) that he had been using it since 1794 or 1795. In the history of statistics, this disagreement is called the "priority dispute over the discovery of the method of least squares". Gauss proved that the method has the lowest sampling variance within the class of linear unbiased estimators under the assumption of normally distributed errors (Gauss–Markov theorem), in the two-part paper "Theoria combinationis observationum erroribus minimis obnoxiae" (1823). In the first paper he proved Gauss's inequality (a Chebyshev-type inequality) for unimodal distributions, and stated without proof another inequality for moments of the fourth order (a special case of the Gauss–Winckler inequality). He derived lower and upper bounds for the variance of the sample variance. In the second paper, Gauss described recursive least squares methods. His work on the theory of errors was extended in several directions by the geodesist Friedrich Robert Helmert to the Gauss-Helmert model. Gauss also contributed to problems in probability theory that are not directly concerned with the theory of errors. One example appears as a diary note where he tried to describe the asymptotic distribution of entries in the continued fraction expansion of a random number uniformly distributed in "(0,1)". He derived this distribution, now known as the Gauss-Kuzmin distribution, as a by-product of the discovery of the ergodicity of the Gauss map for continued fractions. Gauss's solution is the first-ever result in the metrical theory of continued fractions. Arc measurement and geodetic survey. Gauss had been busy with geodetic problems since 1799, when he helped Karl Ludwig von Lecoq with calculations during his survey in Westphalia. Beginning in 1804, he taught himself some geodetic practice with a sextant in Brunswick and Göttingen. From 1816, Gauss's former student Heinrich Christian Schumacher, then professor in Copenhagen, but living in Altona (Holstein) near Hamburg as head of an observatory, carried out a triangulation of the Jutland peninsula from Skagen in the north to Lauenburg in the south. This project was the basis for map production but also aimed at determining the geodetic arc between the terminal sites. Data from geodetic arcs were used to determine the dimensions of the Earth's geoid, and long arc distances brought more precise results. Schumacher asked Gauss to continue this work further to the south in the Kingdom of Hanover; Gauss agreed after a short time of hesitation. Finally, in May 1820, King George IV gave the order to Gauss. An arc measurement needs a precise astronomical determination of at least two points in the network. Gauss and Schumacher took advantage of the fact that the two observatories, in Göttingen and in the garden of Schumacher's house in Altona, lay nearly on the same longitude. The latitude was measured with both their instruments and a zenith sector by Ramsden that was transported to both observatories. Gauss and Schumacher had already determined some angles between Lüneburg, Hamburg, and Lauenburg for the geodetic connection in October 1818. 
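The least-squares adjustment described in the theory-of-errors passage above can be illustrated in miniature with the normal equations. The data in the sketch below are invented for the example and have nothing to do with Gauss's orbit or survey computations.

```python
import numpy as np

# Toy least squares via the normal equations A^T A x = A^T b:
# fit a straight line y = c0 + c1*t to noisy observations (invented data).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.column_stack([np.ones_like(t), t])     # design matrix
coeffs = np.linalg.solve(A.T @ A, A.T @ y)    # normal equations
residuals = y - A @ coeffs

print(coeffs)                   # roughly [1.04, 1.99]: intercept and slope
print(np.sum(residuals ** 2))   # minimised sum of squared errors
```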
During the summers from 1821 to 1825 Gauss directed the triangulation work personally, from Thuringia in the south to the river Elbe in the north. The triangle between Hoher Hagen, Großer Inselsberg in the Thuringian Forest, and Brocken in the Harz mountains was the largest one Gauss had ever measured, with a maximum size of . In the thinly populated Lüneburg Heath without significant natural summits or artificial buildings, he had difficulties finding suitable triangulation points; sometimes cutting lanes through the vegetation was necessary. For pointing signals, Gauss invented a new instrument with movable mirrors and a small telescope that reflects the sunbeams to the triangulation points, and named it "heliotrope". Another suitable construction for the same purpose was a sextant with an additional mirror which he named "vice heliotrope". Gauss was assisted by soldiers of the Hanoverian army, among them his eldest son Joseph. Gauss took part in the baseline measurement (Braak Base Line) of Schumacher in the village of Braak near Hamburg in 1820, and used the result for the evaluation of the Hanoverian triangulation. An additional result was a better value for the flattening of the approximating Earth ellipsoid. Gauss developed the universal transverse Mercator projection of the ellipsoidally shaped Earth (what he named "conform projection") for representing geodetic data in plane charts. When the arc measurement was finished, Gauss began the enlargement of the triangulation to the west to get a survey of the whole Kingdom of Hanover, authorized by a royal decree of 25 March 1828. The practical work was directed by three army officers, among them Lieutenant Joseph Gauss. The complete data evaluation lay in the hands of Gauss, who applied his mathematical inventions such as the method of least squares and the elimination method to it. The project was finished in 1844, and Gauss sent a final report of the project to the government; his method of projection was not published until 1866. In 1828, when studying differences in latitude, Gauss first defined a physical approximation for the figure of the Earth as the surface everywhere perpendicular to the direction of gravity; later his doctoral student Johann Benedict Listing called this the "geoid". Differential geometry. The geodetic survey of Hanover fueled Gauss's interest in differential geometry and topology, fields of mathematics dealing with curves and surfaces. This led him in 1828 to the publication of a memoir that marks the birth of modern differential geometry of surfaces, as it departed from the traditional ways of treating surfaces as Cartesian graphs of functions of two variables, and that initiated the exploration of surfaces from the "inner" point of view of a two-dimensional being constrained to move on it. As a result, the Theorema Egregium ("remarkable theorem") established a property of the notion of Gaussian curvature. Informally, the theorem says that the curvature of a surface can be determined entirely by measuring angles and distances on the surface, regardless of the embedding of the surface in three-dimensional or two-dimensional space. The Theorema Egregium leads to the abstraction of surfaces as doubly-extended manifolds; it clarifies the distinction between the intrinsic properties of the manifold (the metric) and its physical realization in ambient space. A consequence is the impossibility of an isometric transformation between surfaces of different Gaussian curvature. 
This means practically that a sphere or an ellipsoid cannot be transformed to a plane without distortion, which causes a fundamental problem in designing projections for geographical maps. A portion of this essay is dedicated to a profound study of geodesics. In particular, Gauss proves the local Gauss-Bonnet theorem on geodesic triangles, and generalizes Legendre's theorem on spherical triangles to geodesic triangles on arbitrary surfaces with continuous curvature; he found that the angles of a "sufficiently small" geodesic triangle deviate from those of a planar triangle of the same sides in a way that depends only on the values of the surface curvature at the vertices of the triangle, regardless of the behaviour of the surface in the triangle interior. Gauss's memoir from 1828 lacks the conception of geodesic curvature. However, in a previously unpublished manuscript, very likely written in 1822–1825, he introduced the term "side curvature" (German: "Seitenkrümmung") and proved its invariance under isometric transformations, a result that was later obtained by Ferdinand Minding and published by him in 1830. This Gauss paper contains the core of his lemma on total curvature, but also its generalization, found and proved by Pierre Ossian Bonnet in 1848 and known as the Gauss-Bonnet theorem. Non-Euclidean geometry. During Gauss's lifetime, a lively discussion on the parallel postulate in Euclidean geometry was going on. Numerous efforts were made to prove it within the framework of the Euclidean axioms, whereas some mathematicians discussed the possibility of geometrical systems without it. Gauss had thought about the basics of geometry since the 1790s, but in the 1810s he realized that a non-Euclidean geometry without the parallel postulate could solve the problem. In a letter to Franz Taurinus of 1824, he presented a short, comprehensible outline of what he named a "non-Euclidean geometry", but he strongly forbade Taurinus from making any use of it. The first publications on non-Euclidean geometry in the history of mathematics were authored by Nikolai Lobachevsky in 1829 and János Bolyai in 1832. In the following years, Gauss wrote his ideas on the topic but did not publish them, thus avoiding influencing the contemporary scientific discussion. Gauss commended the ideas of János Bolyai in a letter to his father and university friend Farkas Bolyai, claiming that these were congruent with his own thoughts of some decades earlier. However, it is not quite clear to what extent he preceded Lobachevsky and Bolyai, as the remarks in his letters are only vague and obscure. Sartorius first mentioned Gauss's work on non-Euclidean geometry in 1856, but only the edition of his posthumous papers in Volume VIII of the Collected Works (1900) showed Gauss's ideas on the matter, at a time when non-Euclidean geometry had already grown out of controversial discussion. Early topology. Gauss was also an early pioneer of topology, or "Geometria Situs" as it was called in his lifetime. The first proof of the fundamental theorem of algebra in 1799 contained an essentially topological argument; fifty years later, he further developed the topological argument in his fourth proof of this theorem. Another encounter with topological notions occurred to him in the course of his astronomical work in 1804, when he determined the limits of the region on the celestial sphere in which comets and asteroids might appear, and which he termed "Zodiacus". 
He discovered that if the Earth's and comet's orbits are linked, then for topological reasons the Zodiacus is the entire sphere. In 1848, in the context of the discovery of the asteroid 7 Iris, he published a further qualitative discussion of the Zodiacus. Gauss's letters of 1820–1830 show that he thought intensively about topics with close affinity to Geometria Situs, and that he gradually became conscious of the semantic difficulties in this field. Fragments from this period reveal that he tried to classify "tract figures", which are closed plane curves with a finite number of transverse self-intersections, and which may also be planar projections of knots. To do so he devised a symbolic scheme, the Gauss code, that in a sense captured the characteristic features of tract figures. In a fragment from 1833, Gauss defined the linking number of two space curves by a certain double integral, and in doing so provided for the first time an analytical formulation of a topological phenomenon. In the same note, he lamented the little progress made in Geometria Situs, and remarked that one of its central problems would be "to count the intertwinings of two closed or infinite curves". His notebooks from that period reveal that he was also thinking about other topological objects such as braids and tangles. Gauss's influence in later years on the emerging field of topology, which he held in high esteem, was exerted through occasional remarks and oral communications to Möbius and Listing. Minor mathematical accomplishments. Gauss applied the concept of complex numbers to solve well-known problems in a new, concise way. For example, in a short note from 1836 on geometric aspects of the ternary forms and their application to crystallography, he stated the fundamental theorem of axonometry, which tells how to represent a 3D cube on a 2D plane with complete accuracy, via complex numbers. He also described rotations of the sphere as the action of certain linear fractional transformations on the extended complex plane, and gave a proof of the geometric theorem that the altitudes of a triangle always meet in a single orthocenter. Gauss occupied himself with John Napier's "Pentagramma mirificum" – a certain spherical pentagram – for several decades; he approached it from various points of view, and gradually gained a full understanding of its geometric, algebraic, and analytic aspects. In particular, in 1843 he stated and proved several theorems connecting elliptic functions, Napier spherical pentagons, and Poncelet pentagons in the plane. Furthermore, he contributed a solution to the problem of constructing the largest-area ellipse inside a given quadrilateral, and discovered a surprising result about the computation of the area of pentagons. Magnetism and telegraphy. Geomagnetism. Gauss had been interested in magnetism since 1803. After Alexander von Humboldt visited Göttingen in 1826, both scientists began intensive research on geomagnetism, partly independently, partly in productive cooperation. In 1828, Gauss was Humboldt's guest during the conference of the Society of German Natural Scientists and Physicians in Berlin, where he became acquainted with the physicist Wilhelm Weber. When Weber was appointed to the chair of physics in Göttingen in 1831, as successor to Johann Tobias Mayer and on Gauss's recommendation, the two started a fruitful collaboration, leading to new knowledge of magnetism, including a representation of the unit of magnetism in terms of mass, length, and time. 
They founded the "Magnetic Association" (German: "Magnetischer Verein"), an international working group of several observatories, which supported measurements of Earth's magnetic field in many regions of the world, using uniform methods on agreed dates, in the years 1836 to 1841. In 1836, Humboldt suggested the establishment of a worldwide network of geomagnetic stations in the British dominions with a letter to the Duke of Sussex, then president of the Royal Society; he proposed that magnetic measurements should be taken under standardized conditions using his methods. Together with other instigators, this led to a global program known as the "Magnetical crusade" under the direction of Edward Sabine. The dates, times, and intervals of observations were determined in advance, and "Göttingen mean time" was used as the standard. 61 stations on all five continents participated in this global program. Gauss and Weber founded a series for the publication of the results; six volumes were published between 1837 and 1843. Weber's departure to Leipzig in 1843, as a late effect of the Göttingen Seven affair, marked the end of Magnetic Association activity. Following Humboldt's example, Gauss ordered a magnetic observatory to be built in the garden of the observatory, but the scientists differed over instrumental equipment; Gauss preferred stationary instruments, which he thought gave more precise results, whereas Humboldt was accustomed to movable instruments. Gauss was interested in the temporal and spatial variation of magnetic declination, inclination, and intensity, but differentiated Humboldt's concept of magnetic intensity into the components of "horizontal" and "vertical" intensity. Together with Weber, he developed methods of measuring the components of the intensity of the magnetic field, and constructed a suitable magnetometer to measure "absolute values" of the strength of the Earth's magnetic field, rather than relative values that depended on the apparatus. The precision of the magnetometer was about ten times higher than that of previous instruments. With this work, Gauss was the first to derive a non-mechanical quantity from basic mechanical quantities. Gauss set out a "General Theory of Terrestrial Magnetism" (1839), in which he believed he had described the nature of magnetic force; according to Felix Klein, this work is a presentation of observations by use of spherical harmonics rather than a physical theory. The theory predicted the existence of exactly two magnetic poles on the Earth, thus Hansteen's idea of four magnetic poles became obsolete, and the data allowed their location to be determined with rather good precision. Gauss influenced the beginning of geophysics in Russia, when Adolph Theodor Kupffer, one of his former students, founded a magnetic observatory in St. Petersburg, following the example of the observatory in Göttingen, as did Ivan Simonov in Kazan. Electromagnetism. The discoveries of Hans Christian Ørsted on electromagnetism and Michael Faraday on electromagnetic induction drew Gauss's attention to these matters. Gauss and Weber found rules for branched electric circuits, which were later rediscovered independently and first published by Gustav Kirchhoff, after whom they are named Kirchhoff's circuit laws, and made inquiries into electromagnetism. They constructed the first electromechanical telegraph in 1833, and Weber himself connected the observatory with the institute for physics in the town centre of Göttingen, but they did not pursue any further development of this invention for commercial purposes. 
Gauss's main theoretical interests in electromagnetism were reflected in his attempts to formulate quantitative laws governing electromagnetic induction. In notebooks from these years, he recorded several innovative formulations; he discovered the idea of the vector potential function (independently rediscovered by Franz Ernst Neumann in 1845), and in January 1835 he wrote down an "induction law" equivalent to Faraday's law, which stated that the electromotive force at a given point in space is equal to the instantaneous rate of change (with respect to time) of this function. Gauss tried to find a unifying law for long-distance effects of electrostatics, electrodynamics, electromagnetism, and induction, comparable to Newton's law of gravitation, but his attempt ended in a "tragic failure". Potential theory. Since Isaac Newton had shown theoretically that the Earth and rotating stars assume non-spherical shapes, the problem of attraction of ellipsoids gained importance in mathematical astronomy. In his first publication on potential theory, the "Theoria attractionis..." (1813), Gauss provided a closed-form expression for the gravitational attraction of a homogeneous triaxial ellipsoid at every point in space. In contrast to the previous research by Maclaurin, Laplace and Lagrange, Gauss's new solution treated the attraction more directly in the form of an elliptic integral. In the process, he also proved and applied some special cases of the so-called Gauss's theorem in vector analysis. In the "General theorems concerning the attractive and repulsive forces acting in reciprocal proportions of quadratic distances" (1840) Gauss laid the foundations of a theory of the magnetic potential, based on Lagrange, Laplace, and Poisson; it seems rather unlikely that he knew the previous works of George Green on this subject. However, Gauss was never able to give a physical explanation of magnetism, nor a theory of magnetism similar to Newton's work on gravitation that would have enabled scientists to predict geomagnetic effects in the future. Optics. Gauss's calculations enabled instrument maker Johann Georg Repsold in Hamburg to construct a new achromatic lens system in 1810. A main problem, among other difficulties, was the imprecise knowledge of the refractive index and dispersion of the glass types used. In a short article from 1817 Gauss dealt with the problem of the removal of chromatic aberration in double lenses, and computed the adjustments of the shape and coefficients of refraction required to minimize it. His work was noted by the optician Carl August von Steinheil, who in 1860 introduced the achromatic Steinheil doublet, partly based on Gauss's calculations. Many results in geometrical optics are scattered through Gauss's correspondence and handwritten notes. In the "Dioptrical Investigations" (1840), Gauss gave the first systematic analysis of the formation of images under a paraxial approximation (Gaussian optics). He characterized optical systems under a paraxial approximation only by their cardinal points, and he derived the Gaussian lens formula, applicable without restrictions with respect to the thickness of the lenses. Mechanics. Gauss's first work in mechanics concerned the earth's rotation. When his university friend Benzenberg carried out experiments to determine the deviation of falling masses from the perpendicular in 1802, what is today known as an effect of the Coriolis force, he asked Gauss for a theory-based calculation of the values for comparison with the experimental ones. 
Gauss elaborated a system of fundamental equations for the motion, and the results corresponded sufficiently with the data of Benzenberg, who added Gauss's considerations as an appendix to his book on falling experiments. After Foucault had demonstrated the earth's rotation by his pendulum experiment in public in 1851, Gerling asked Gauss for further explanations. This prompted Gauss to design a new demonstration apparatus with a much shorter pendulum than Foucault's. The oscillations were observed with a reading telescope, with a vertical scale and a mirror fastened to the pendulum. It is described in the Gauss–Gerling correspondence, and Weber made some experiments with this apparatus in 1853, but no data were published. Gauss's principle of least constraint of 1829 was established as a general concept to overcome the division of mechanics into statics and dynamics, combining D'Alembert's principle with Lagrange's principle of virtual work, and showing analogies to the method of least squares. Metrology. In 1828, Gauss was appointed head of a Board for weights and measures of the Kingdom of Hanover. He arranged for the creation of standards of length and weight. Gauss himself took care of the time-consuming measurements and gave detailed orders for the mechanical preparation. In his correspondence with Schumacher, who was also working on this matter, he described new ideas for scales of high precision. He submitted the final reports on the Hanoverian foot and pound to the government in 1841. This work gained more than regional importance through a law of 1836 that linked the Hanoverian measures with the English ones. Anecdotes. Several stories of his early genius have been reported. Carl Friedrich Gauss's mother had never recorded the date of his birth, remembering only that he had been born on a Wednesday, eight days before the Feast of the Ascension, which occurs 39 days after Easter. Gauss later solved this puzzle about his birthdate in the context of finding the date of Easter, deriving methods to compute the date in both past and future years. In his memorial on Gauss, Wolfgang Sartorius von Waltershausen tells a story about the three-year-old Gauss, who corrected a math error his father made. The most popular story, also told by Sartorius, tells of a school exercise: the teacher Büttner and his assistant Martin Bartels ordered students to add an arithmetic series. Out of about a hundred pupils, Gauss was the first to solve the problem correctly, by a significant margin. Although (or because) Sartorius gave no details, over time many versions of this story have been created, with more and more details regarding the nature of the series – the most frequent being the classical problem of adding together all the integers from 1 to 100 – and the circumstances in the classroom. Honours and awards. The first membership of a scientific society was conferred on Gauss in 1802 by the Russian Academy of Sciences. 
Further memberships (corresponding, foreign or full) were awarded by the Academy of Sciences in Göttingen (1802/1807), the French Academy of Sciences (1804/1820), the Royal Society of London (1804), the Royal Prussian Academy in Berlin (1810), the National Academy of Science in Verona (1810), the Royal Society of Edinburgh (1820), the Bavarian Academy of Sciences of Munich (1820), the Royal Danish Academy in Copenhagen (1821), the Royal Astronomical Society in London (1821), the Royal Swedish Academy of Sciences (1821), the American Academy of Arts and Sciences in Boston (1822), the Royal Bohemian Society of Sciences in Prague (1833), the Royal Academy of Science, Letters and Fine Arts of Belgium (1841/1845), the Royal Society of Sciences in Uppsala (1843), the Royal Irish Academy in Dublin (1843), the Royal Institute of the Netherlands (1845/1851), the Spanish Royal Academy of Sciences in Madrid (1850), the Russian Geographical Society (1851), the Imperial Academy of Sciences in Vienna (1848), the American Philosophical Society (1853), the Cambridge Philosophical Society, and the Royal Hollandish Society of Sciences in Haarlem. Both the University of Kazan and the Philosophy Faculty of the University of Prague appointed him an honorary member in 1848. Gauss received the Lalande Prize from the French Academy of Science in 1809 for the theory of planets and the means of determining their orbits from only three observations, the Danish Academy of Science prize in 1823 for his memoir on conformal projection, and the Copley Medal from the Royal Society in 1838 for "his inventions and mathematical researches in magnetism". Gauss was appointed Knight of the French Legion of Honour in 1837, and was admitted as one of the first members of the Prussian Order Pour le Mérite (Civil class) when it was established in 1842. He received the Order of the Crown of Westphalia (1810), the Danish Order of the Dannebrog (1817), the Hanoverian Royal Guelphic Order (1815), the Swedish Order of the Polar Star (1844), the Order of Henry the Lion (1849), and the Bavarian Maximilian Order for Science and Art (1853). The Kings of Hanover awarded him the honorary titles "Hofrath" (1816) and "Geheimer Hofrath" (1845). In 1849, on the occasion of his golden doctoral jubilee, he was granted honorary citizenship of both Brunswick and Göttingen. Soon after his death, a medal was issued by order of King George V of Hanover with the back inscription dedicated "to the Prince of Mathematicians". The "Gauss-Gesellschaft Göttingen" ("Göttingen Gauss Society") was founded in 1964 for research on the life and work of Carl Friedrich Gauss and related persons, and publishes the "Mitteilungen der Gauss-Gesellschaft" ("Communications of the Gauss Society"). Selected writings. Correspondence. The Göttingen Academy of Sciences and Humanities provides a complete collection of the known letters from and to Carl Friedrich Gauss that is accessible online. The literary estate is kept and made available by the Göttingen State and University Library. Written materials from Carl Friedrich Gauss and family members can also be found in the municipal archive of Brunswick. References. Notes. Citations. Sources. Further reading. External links.
[ { "math_id": 0, "text": "\\mathbb{Z}[i]" }, { "math_id": 1, "text": "\\pi/4" }, { "math_id": 2, "text": "F(\\alpha,\\beta,\\gamma,x)" } ]
https://en.wikipedia.org/wiki?curid=6125
61252437
C23 (C standard revision)
C programming language standard, 2024 revision C23 (formally ISO/IEC 9899:2024) is an open standard for the C programming language, which replaced C17 (standard ISO/IEC 9899:2018). It was started in 2016 informally as C2x, and was published in 2024. The most recent publicly available working draft of C23 was released on April 1, 2023. The first WG14 meeting for the C2x draft was held in October 2019; virtual remote meetings were held in 2020 due to the COVID-19 pandemic, and various teleconference meetings continued to occur through 2024. Features. Changes integrated into the latest working draft of C23 are listed below. Obsolete features. Some old obsolete features are either removed or deprecated in C23: * Remove trigraphs. * Remove K&R function definitions/declarations (with no information about the function arguments); a short illustration is given below. * Remove representations for signed integers other than two's complement. Two's complement signed integer representation is required. * The macros in codice_82 are obsolescent features. Compiler support. The GCC 9, Clang 9.0, and Pelles C 11.00 compilers implement an experimental compiler flag to support this standard. References. * WG14 Document Repository * WG14 Meetings - agenda and minutes * WG14 Charters: C2x Charter, C23 Charter, Interpreting the C23 Charter, C Standard Charter
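The K&R removal mentioned above can be illustrated with a minimal sketch. The following C fragment is not taken from the standard or from any WG14 document; the function names are invented, and the first definition is assumed to be compiled in a pre-C23 mode, since C23 rejects it:

#include <stdio.h>

/* K&R-style (pre-ANSI) definition: the parameter types are declared in a
   separate list, and a matching declaration "int add_old();" carries no
   information about the arguments. This form is removed in C23. */
int add_old(a, b)
int a;
int b;
{
    return a + b;
}

/* Prototype-style definition: the only form C23 accepts. */
int add_new(int a, int b)
{
    return a + b;
}

int main(void)
{
    printf("%d %d\n", add_old(2, 3), add_new(2, 3));
    return 0;
}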
[ { "math_id": 0, "text": "\\pi x" } ]
https://en.wikipedia.org/wiki?curid=61252437
61256222
Hajek projection
In statistics, the Hájek projection of a random variable formula_0 on a set of independent random vectors formula_1 is a particular measurable function of formula_1 that, loosely speaking, captures the variation of formula_0 in an optimal way. It is named after the Czech statistician Jaroslav Hájek. Definition. Given a random variable formula_0 and a set of independent random vectors formula_1, the Hájek projection formula_2 of formula_0 onto formula_3 is given by formula_4 References. 
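For illustration (a worked example added here, not part of the original article), let $X_1,\dots,X_n$ be i.i.d. real random variables with mean $\mu$ and take $T=\sum_{1\le i<j\le n}X_i X_j$. Then $\operatorname{E}(T)=\binom{n}{2}\mu^2$ and $\operatorname{E}(T\mid X_i)=(n-1)\mu X_i+\binom{n-1}{2}\mu^2$, so the definition gives

$\hat{T}=\binom{n}{2}\mu^2+(n-1)\mu\sum_{i=1}^n (X_i-\mu).$

The quadratic statistic is thus replaced by its best approximation by a sum of functions of the individual $X_i$, which is the sense in which the projection captures the variation of $T$ in an optimal way.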
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "X_1,\\dots,X_n" }, { "math_id": 2, "text": "\\hat{T}" }, { "math_id": 3, "text": "\\{X_1,\\dots,X_n\\}" }, { "math_id": 4, "text": "\\hat{T} = \\operatorname{E}(T) + \\sum_{i=1}^n \\left[ \\operatorname{E}(T\\mid X_i) - \\operatorname{E}(T)\\right] =\n\\sum_{i=1}^n \\operatorname{E}(T\\mid X_i) - (n-1)\\operatorname{E}(T)" }, { "math_id": 5, "text": "L^2" }, { "math_id": 6, "text": "\\sum_{i=1}^n g_i(X_i)" }, { "math_id": 7, "text": "g_i:\\mathbb{R}^d \\to \\mathbb{R} " }, { "math_id": 8, "text": "\\operatorname{E}(g_i^2(X_i))<\\infty " }, { "math_id": 9, "text": "i=1,\\dots,n" }, { "math_id": 10, "text": "\\operatorname{E} (\\hat{T}\\mid X_i)=\\operatorname{E}(T\\mid X_i)" }, { "math_id": 11, "text": "\\operatorname{E}(\\hat{T})=\\operatorname{E}(T)" }, { "math_id": 12, "text": "T_n=T_n(X_1,\\dots,X_n)" }, { "math_id": 13, "text": "\\hat{T}_n = \\hat{T}_n(X_1,\\dots,X_n)" }, { "math_id": 14, "text": "\\operatorname{Var}(T_n)/\\operatorname{Var}(\\hat{T}_n) \\to 1" }, { "math_id": 15, "text": "\\frac{T_n-\\operatorname{E}(T_n)}{\\sqrt{\\operatorname{Var}(T_n)}} - \\frac{\\hat{T}_n-\\operatorname{E}(\\hat{T}_n)}{\\sqrt{\\operatorname{Var}(\\hat{T}_n)}}" } ]
https://en.wikipedia.org/wiki?curid=61256222
61261
Lotus Seven
The Lotus Seven is a sports car produced by the British manufacturer Lotus Cars (initially called Lotus Engineering) between 1957 and 1973. The Seven is an open-wheel car with two seats and an open top. It was designed by Lotus founder Colin Chapman and has been considered the embodiment of the Lotus philosophy of performance through low weight and simplicity. The original model was highly successful with more than 2,500 cars sold, due to its attraction as a road legal car that could be used for clubman racing. After Lotus ended production of the Seven, Caterham bought the rights and today Caterham makes both kits and fully assembled cars based on the original design known as the Caterham 7. The Lotus Seven design has spawned a host of imitations on the kit car market, generally called "Sevens" or "Sevenesque" roadsters. History. Series 1. The Lotus Seven was launched in 1957 to replace the Mark VI as the entry-level Lotus model. The Seven name was left over from a model that had been abandoned by Lotus, which would have been a Riley-engined single-seater that Lotus intended to enter in Formula Two in 1952 or 1953. However, the car was completed on Chapman's chassis as a sports car by its backers and christened the Clairmonte Special. Externally similar to Chapman's earlier Lotus Mark VI, but with a different tubular frame similar to the Lotus Eleven, the Seven was powered by a Ford Sidevalve 1,172 cc inline-four engine. In addition to the Ford unit, both the BMC Series A and Coventry Climax FWA engines were available fitments. Under the Purchase Tax system of the time, cars supplied as a kit did not attract the tax surcharge that would apply if sold in assembled form. Tax rules specified that assembly instructions could not be included. This situation remained until 1973, and a large proportion of Sevens sold in the United Kingdom were delivered in kit form as a result. The Seven Series 1 was used both on the road and for club racing (the 750 Motor Club in the UK). Series 2. The Lotus Seven S2 followed in June 1960 and was supplemented by the Lotus Super Seven S2 from 1961. These were slightly more road-oriented than the Series 1, and received a somewhat simpler chassis. The Series 1's aluminium nosecone was changed to a fibreglass unit. Cycle fenders were originally standard, with clamshell units a standard fitment on the 1500, Super Seven, and America, or available as an option. While the 1172 cc Sidevalve unit remained available until 1962, the Series 2 typically used Ford Kent engines of 1,340 or 1,499 cc from the Ford Consul Classic. These were also available with Cosworth modifications; the Cosworth 1,340 cc "Super Seven" delivered and the later "Super Seven 1500" . Some Series 2 Sevens built during 1968 (oftentimes referred to as "Series 2½") were fitted with the later crossflow Kent engine of 1,599 cc. The Series 2 had problems with its Standard Companion estate car rear axle and differential, which were unable to cope with the high power and cornering forces of the Seven. This was later solved on the Series 3 by installing a Ford Cortina rear end. Production of the Series 2 ended in August 1968, after 1310 examples had been built. Series 3. The Seven S3 was released in 1968. As with late Series 2s, the S3 typically received the 1,599 cc crossflow Kent engine. First shown at Earl's Court in 1969, the Super Seven Twin Cam SS used the Lotus Twin Cam engine. Only 13 examples were built. 
While only manufactured by Lotus for around two years, the Series 3 was the model later revived by Caterham after they ran out of Series 4 kits some time in the first half of the 1970s. In modified form, the design continues to be produced until today (2023). Between 1970 and 1975, following a representation agreement, Lotus Argentina SA obtained the licence to manufacture the Lotus Seven in Argentina. This production reached approximately 51 units. These vehicles were not replicas but built under licence and branded as Lotuses. Series 4. In 1970, Lotus radically changed the shape of the car to create the slightly more conventional sized Series 4 (S4), with a squarer fibreglass shell replacing most of the aluminium bodywork. It also offered some luxuries as standard, such as an internal heater matrix. The S4 Seven could be supplied with 1298 or 1599 cc Kent engines or the twin cam. Until now, most Sevens in the UK had been sold in kit form in order to avoid paying purchase tax. However, once the UK joined the EEC on 1 January 1973, the VAT system was adopted instead so the tax advantage of the kit-built Lotus Seven came to an end. Accordingly, in 1973, Lotus decided to shed fully its "British tax system"-inspired kit car image. As part of this plan, it sold the rights to the Seven to its only remaining agents Caterham Cars in England and Steel Brothers Limited in New Zealand. Caterham ran out of the Lotus Series 4 kits in the early 1970s. When this occurred and in accordance with their agreement with Lotus, Caterham introduced its own brand version of the Series 3. They have been manufacturing the car ever since as the Caterham Seven. Steel Brothers Limited in Christchurch, New Zealand, assembled Lotus Seven Series 4s until March 1979 when the last of the 95 kits provided by Lotus was used up. Steel Brothers had a much wider range of factory options than the UK models with carpet, centre console glove-box, radio, window-washer and hardtop. Sold largely to competition enthusiasts, the NZ cars also had engine modifications, close-ratio gears, and adjustable suspension as factory options. As such, they were very successful in local racing. With officially licensed production stopping in 1979, the last Lotus badged Seven, a Series 4, was therefore produced in New Zealand. Steel Brothers Limited attempted to make a wider, modernised version of the Series 4, the Lotus Super 907, using the twin cam Lotus 907 engine. In the spring of 1978 it was announced that this was to be sold in the United States - but the American importer had no funds and the project came to naught. The single finished Super 907 was moved from New Zealand to the United States in 2010 to undergo a full restoration. Performance. Road test. A car with a tuned Ford 1172 cc engine and close-ratio gearbox was tested by the British magazine "The Motor" in 1958. It was found to have a top speed of , could accelerate from 0- in 6.2 seconds and had a fuel consumption of . The test car cost £1,157 including taxes of £386. They commented that car could be bought in component form and then it would have cost £399 for the parts from Lotus, £100 for the Ford engine and gearbox and £27 for the BMC rear axle. Top speed. A Seven's top speed greatly depends upon the body configuration, engine power and gearing. Early models with low-powered engines had difficulty exceeding , although a race-prepared Seven was clocked at whilst driven by Brausch Niemann through a speed-trap at the 1962 Natal Grand Prix. 
In addition, clamshell style wings tend to create drag and generate lift at higher speeds. Cycle wings help alleviate this tendency, and low-height Brooklands aeroscreens or the lighter Perspex variants that can replace the windscreen help improve top end speed. Sevens do suffer from front end lift at high speed – the nose creates more lift than downforce at speeds over around , although retro-fitted "winglets" may counter this. Low speed acceleration. Nearly all Sevens, due to their extremely light weight (around 10 cwt / 500 kg), have excellent acceleration, especially up to , depending on power. The original late 1950s Sevens could beat most contemporary saloon cars, and by the early 1960s, with improved Ford-Cosworth engines, could take on most high-performance sports cars, with 0–60 mph times in the low 7-second range. Braking. The less powerful early models had drum brakes all around, in common with most road cars of the time. Later models had front disc brakes. Physics favours small cars in braking, and Sevens have excellent stopping distances. Handling. The highest part of the car is about three feet (900 mm) from the road, and it has a cloth top and side curtains with plastic back and side windows. The supports for the top and the windshield frame are aluminium. The lower chassis tubes are five inches (127 mm) from the road, while the wet-sump, bell housing, and one chassis tube are lower, meaning the centre of gravity is very low. The front/rear weight distribution is nearly equal, and the lack of a boot and the small petrol tank ensure that it remains fairly constant. It is, however, more front-heavy than more modern high-performance cars. Suspension. In the original Seven, the front lower A-arm (or "wishbone") of the double wishbone suspension is traditional, but for the purpose of reducing weight, the upper suspension integrated an anti-roll (anti-sway) bar into a horizontal suspension arm. This approach formed a pseudo-wishbone which was semi-independent in nature. It worked well with early cross-ply tyres, but with later radials, the configuration seriously affected its adjustability. For the rear suspension, Lotus originally used a live axle (or solid axle). This approach was very cost-effective since most production saloon cars up to the 1980s used these components. A mixture of Ford, Standard Motor Company and Austin components was used. One disadvantage of live axles is higher unsprung weight, affecting handling and ride on rough surfaces. Aerodynamics. In general, cars with non-optimised aerodynamics tend to be free of adverse aerodynamic effects on handling, but the front wheel arches of all but the Series 1 cause lift at high speeds. Like the good straight-line performance, the car's nimble handling is confined to a limited speed range, but this is not usually important in a car intended for public roads. While the car's frontal area is small, the Lotus Seven has a drag coefficient (formula_0) among the highest of any known production car, ranging from 0.65 to 0.75, depending on the bodywork. Additionally, the clamshell front wings develop lift. This is accentuated by the slight natural lift caused by the rotating wheels. Consequently, Sevens have exhibited understeer at high speeds. Steering. The rack and pinion steering provides a minimum of play and friction. Frame rigidity. It is a stressed skin construction, in which the flat aluminium body panels, and especially the floor, stiffen and effectively triangulate the largely rectangular steel tubular frame structure. 
This gives a rigid frame with few tubes and very little body weight that does not contribute to frame stiffness. The flat panels avoid difficulties in shaping aluminium sheet into smooth compound curves. On the downside, it does not allow attractive curves or streamlining. Mechanical details. Engines. Originally equipped with the Ford Sidevalve engine, the Series 2 received the new Ford Kent engine. The original "Super Seven" received versions of the Kent unit with Cosworth modifications. Later, the Kent engine was updated to the crossflow design; this 1.6-litre engine was the most commonly installed one in the Series 3 as well as the Series 4. A limited number of earlier cars received Coventry Climax FWA engines, while the later cars were offered with the Lotus-Ford Twin Cam engine. Frame and body. The Lotus Seven was designed with racing in mind, and lightness was of primary concern to Chapman. Like racing cars of the time, it was therefore built around a multi-tube space frame with high sides to allow a stiffer frame (longer lever arm). The Series 2 and later road versions had simpler frames than the more race-oriented Series 1. The car had a front-mounted engine driving the rear wheels (a layout similar to most cars of the day), and its very lightweight steel spaceframe was covered with stressed aluminium panel bodywork. The body panels were mainly flat to avoid the expense of more elaborate curved bodywork, and the simple cloth-lined plastic doors were hinged from the windscreen. The nose-cone and wheel arches were originally aluminium parts, but these were replaced in the later S2 and S3 models with painted or self-coloured fibreglass. Weight. Early Lotus Sevens weighed around 1,100 lb (10 cwt / 500 kg). Although the weight crept upward as production progressed, it remained remarkably low for a production car of over a litre displacement. Suspension. The front was by "A" arms and coil springs, with an anti-roll bar serving as the front half of the top A-arm. The rear had trailing arms, a triangular centre locating member, and a solid rear axle. Literature. The Lotus Seven has spawned many books, test reports, and articles, many of which are still in print. Replicas. Because of the Seven's relatively simple design, over 160 companies have offered replicas or Seven-type cars over the years. Many have been challenged over the years by the UK rights-holder, Caterham. Such cars are often referred to as "sevenesque" or simply a "seven" or "se7en". Sometimes they are also called clubmans or "locost". Notes. 
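As a rough illustration of what the drag figures quoted in the Aerodynamics section imply (this calculation is not from the original text, and the air density and frontal area are assumed values), aerodynamic drag force is $F_d = \tfrac{1}{2}\rho v^2 C_d A$. Taking $\rho \approx 1.2\ \mathrm{kg/m^3}$, $C_d \approx 0.7$, an assumed frontal area $A \approx 1.3\ \mathrm{m^2}$ and $v = 45\ \mathrm{m/s}$ (about 100 mph) gives $F_d \approx 0.5 \times 1.2 \times 45^2 \times 0.7 \times 1.3 \approx 1.1\ \mathrm{kN}$, and the power needed just to overcome drag is $P = F_d v \approx 50\ \mathrm{kW}$ (roughly 67 hp). This is why, despite its very low weight, a Seven's top speed responds only slowly to additional engine power.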
[ { "math_id": 0, "text": "\\scriptstyle C_\\mathrm d\\," } ]
https://en.wikipedia.org/wiki?curid=61261
61270051
Tetris (NES video game)
1989 video game Tetris, also known as classic "Tetris", is a puzzle video game developed and published by Nintendo for the Nintendo Entertainment System (NES). Based on "Tetris" (1985) by Alexey Pajitnov, it was released after a legal battle between Nintendo and Atari Games, who had previously released a "Tetris" port under an invalid license. Bullet-Proof Software had previously released "Tetris" for the Family Computer in December 1988, while Nintendo had released "Tetris" for the Game Boy earlier in 1989. The game will be re-released for the first time on Nintendo Switch Online in winter 2024. This "Tetris" port is infamous for being designed to end by becoming too fast to play, after a certain number of lines are cleared. Score must be accumulated through efficient play, rather than pure endurance, before the game ends. These circumstances have led to its use as an esports game. Although the highest game speed was intended to be unplayably difficult, it was shown to be manageable with novel controller grips developed in the 2020s. Gameplay. This version of "Tetris" has two modes of play: A-Type and B-Type. In A-Type play, the goal is to achieve the highest score. As lines are cleared, the level advances and increases the speed of the falling blocks. In B-Type play, the board starts with randomized obstacle blocks at the bottom of the field, and the goal is to clear 25 lines. The level remains constant, and players choose the height of the obstacle beforehand. During play, the tetrominoes are chosen randomly. This leaves the possibility of extended periods with no long bar pieces, which are essential because Tetrises are worth more than clearing the equivalent amount of lines in singles, doubles, or triples. The next piece to fall is shown in a preview window next to the playfield. In a side panel, the game tracks how many of each tetromino has appeared in the game so far. In A-Type, the level advances for every 10 lines cleared. Each successive level increases the points scored by line clears and changes the colors of the game pieces. All levels from 1 to 10 increase the game speed. After level 10, the game speed only increases on levels 13, 16, 19, and 29, at which point the speed no longer increases. On level 29, pieces fall at 1 grid cell every frame, which is too fast for almost all players, and it is thus called the "kill screen". The developers of the game never intended anyone to play past the kill screen, as the game does not properly display the level numbers past 29, but with modern speed techniques, skilled players can play past level 29. When starting a game, players can select a starting level from 0 to 9, but if the A button is held on the controller when selecting a level, 10 additional levels are added, raising the starting options to 0 to 19. When starting on a later level, the level is not supposed to advance until as many lines have been cleared as it would have taken to advance from level 0 to the starting level. Due to a bug, the levels will begin advancing earlier than intended when starting on level 10 or higher. At the end of an A-Type game, a substantial score yields an animated ending of a rocket launch in front of Saint Basil's Cathedral. The size of the rocket depends on the score, ranging from a bottle rocket to the "Buran" spaceplane. In the best ending, a UFO appears on the launch pad and the cathedral lifts off. After a high-level B-Type game, various Nintendo characters perform in front of the cathedral. 
The definition of "beating the game" has changed over time with the development of novel controller methods designed for high-level play. After clearing around 1550 lines, the game is at risk of crashing due to inefficient multiplication operations. Crashing the game in this way is popularly considered "beating the game", a feat first achieved on 21 December 2023 by 13-year-old Willis Gibson, known by his online alias "Blue Scuti". Scoring. The score received by each line clear is dependent on the level. Each type of clear, being a single, double, triple, or Tetris, has a base value which is multiplied by the number 1 higher than the current level. For any level formula_0, a single will give formula_1 points, a double will give formula_2 points, a triple will give formula_3 points, and a Tetris will give formula_4 points. The game also awards points for holding the down key to make pieces fall faster, awarding 1 point for every grid cell that a piece is continuously soft dropped. Unlike line clears, this does not scale by level. This scoring convention makes scoring Tetrises much more efficient than scoring an equivalent amount of lines through smaller line clears. At level 0, a Tetris awards 300 points per line cleared, a triple awards 100 points per line cleared, a double awards 50 points per line cleared, and a single awards 40 points per line cleared. For example, a single in level 0 is worth 40 points and a Tetris worth 1,200 points. A single in level 9 is worth 400 points and a Tetris worth 12,000 points, and a single in level 29 is worth 1,200 points (same as a Tetris on level 0) and a Tetris on level 29 is worth 36,000 points. High-level play. Speed techniques. One of the most limiting factors in NES "Tetris" is the speed at which a tetromino can be moved left and right. On the NTSC version, when a movement key is pressed, the piece will instantly move one grid cell, stop for 16 frames due to delayed auto-shift, before moving again once every 6 frames, or 10 times per second. At higher levels, waiting for this delay is not feasible because the pieces fall too fast. A technique known as "hypertapping" is used to circumvent this delay. When hypertapping, horizontal tetromino speed is maximized by rapidly tapping the D-pad more than 10 times per second. The technique involves flexing the bicep until it tremors, so that the high-speed tremor taps the thumb on the D-pad. Thor Aackerlund was the first hypertapper, but the technique was very rare until it was popularized by Joseph Saelee in 2018. Jacob Sweet of "The New Yorker" described hypertapping as "turning [the] thumb into a jackhammer." In 2020, the "rolling" technique was developed by competitive NES "Tetris" player Chris "Cheez_fish" Martinez. When rolling, a stationary finger is placed on the D-pad, while the other hand's fingers are drummed across the back of the controller, pushing the buttons up into the stationary hand. To reduce friction, a glove may be worn on the drumming hand. This technique is both much faster and less physically straining than hypertapping – it allows pieces to be shifted horizontally up to 30 times per second, enabling play far past level 29. Since 2021, numerous world records have been achieved using the rolling technique, and rolling is used in tournaments such as the Classic Tetris World Championship. History of major achievements. In 2009, Harry Hong became the first independently verified person to achieve 999,999 points on an unmodified cartridge, known as a maxout score. 
Earlier plausible but unverified max-out scores were claimed by Thor Aackerlund c. 1990 and Jonas Neubauer c. 2002. The game pieces reach their maximum falling speed on level 29 on the NTSC version and 19 on the PAL version, when the speed suddenly doubles. According to "The New Yorker", level 29 "seems intentionally impossible—a quick way for developers to end the game." Because of this "soft wall", efficient play was required to accumulate points before level 29 ended the game. In the 2000s, level 29 came to be known as the game's "kill screen" – though this label was found to be a misnomer when the level was passed in the 2010s. Progression beyond level 29. Aackerlund, a hypertapper, first demonstrated that level 29 could be beaten in 2011; he is shown reaching level 30 in the documentary film "". From level 30, the game's level counter stops working correctly, further suggesting that the developers did not believe level 29 could be surpassed. Level 31 was first reached in 2018 by 15-year-old hypertapper Joseph Saelee who eventually cleared four more consecutive levels until level 35, and by 2020 hypertappers had gone as far as level 38. Kyle Orland of "Ars Technica" explains that because of the rolling technique introduced by Martinez in 2020, "players were getting good enough to effectively play indefinitely on the same 'Level 29' speed that had been considered an effective kill screen just a few years earlier." By 2024, the highest level reached was 235. The final matchup in the Classic Tetris World Championship in 2022 resulted in competitors Eric Tolt and Justin Yu both reaching two million points in a game and levels 73 and 69, respectively. To prevent extremely prolonged games, the CTWC has modified their competition cartridges in 2023 to include a "super killscreen" at level 39, where pieces reach the bottom of the well in only a sixth of a second - two blocks per frame. Technical problems at very high levels. In 2014, computer researcher Mike Birken published an analysis of "Tetris"'s game code, including details on unexpected behaviors that occur at very high levels. An integer overflow bug is first encountered at level 138, where color palettes would be loaded from unrelated areas of memory, creating unusual and unintended game piece colors. In particular, levels 146 and 148, nicknamed "dusk" and "charcoal" by players, feature black game pieces that are extremely difficult to see against the black background, hindering further progression. Additionally, the score-counting code could crash the game after about 1550 lines are cleared, corresponding to level 155. In December 2023, 13-year-old roller Willis Gibson from Stillwater, Oklahoma, was the first to complete the "charcoal" level 148. He continued playing and reached a game crash at level 157. Because "Tetris" had been considered unwinnable (due to games necessarily ending with "topping out"), Gibson is credited with being the first person to "beat the game" since its release in 1989. In a statement, Tetris Company CEO Maya Rogers congratulated Gibson for his "feat that defies all preconceived limits" of "Tetris". Co-founders Alexey Pajitnov and Henk Rogers met Gibson in January 2024, calling his playthrough an "amazing, amazing achievement." If it is possible to avoid the conditions which crash "Tetris", completing level 255 would overflow the level counter back to level 0. 
Before this level can be reached, players must contend with another bug first encountered at 2190 lines, where an integer underflow causes the level counter to erroneously not increment. The next level is only reached after clearing an additional 810 lines. Competition. The 1990 Nintendo World Championships were based on A-Type "Tetris", "Super Mario Bros.", and "Rad Racer". In each round, contestants were given a total of six minutes to score as much as possible across all three games. As the "Tetris" score was multiplied 25 times in the final tally, the prevailing strategy was to rush through the other two games to spend all available time in "Tetris". Since 2010, the NES version of "Tetris" has been featured in the annual Classic Tetris World Championship (CTWC), which consists of a one-on-one competition to score the most points. Specialized cartridges give both competitors the option to use the same piece sequence. Since 2017, the tournament Classic Tetris Monthly (CTM) has run monthly with the same one-on-one format as the CTWC. The CTM rules are more relaxed than those of the CTWC, allowing the usage of emulators and third-party hardware. In both the CTWC and CTM, there is a cap at level 39, either by stopping play once level 39 is reached, or by a mod which implements a "Super Killscreen", doubling the drop speed at level 39. Development. Licensing. By 1989, about six companies claimed rights to create and distribute the "Tetris" software for home computers, game consoles, and handheld systems. ELORG, the Soviet bureau that held the ultimate copyright, claimed that none of the companies were legally entitled to produce an arcade version, and signed those rights over to Atari Games. Non-Japanese console and handheld rights were signed over to Nintendo. "Tetris" was shown at the January 1988 Consumer Electronics Show in Las Vegas, where it was picked up by Dutch-born American games publisher Henk Rogers, then based in Japan. This eventually led to an agreement brokered with Nintendo where "Tetris" became a launch game for Game Boy and bundled with every system. Implementation. Pajitnov is credited for the "original concept, design and program" but was not directly involved in developing this version. The NES does not have hardware support for generating random numbers. A pseudo-random number generator was implemented with a 16-bit linear-feedback shift register. The algorithm produces a close, but slightly uneven distribution of the seven types of game pieces; for every 224 pieces, there is one fewer long bar piece than would be expected from an even distribution. The game's code includes an unfinished and inaccessible two-player versus mode, which sends rows of garbage blocks (with one opening) to the bottom of the opponent's board when lines are cleared. This feature may have been scrapped due to a rushed development schedule, or to promote sales of the Game Link Cable which enables a two-player mode in Nintendo's Game Boy "Tetris". Music. The soundtrack was written by Nintendo composer Hirokazu Tanaka, who also scored the Game Boy version. Focusing on , the soundtrack features arrangements of "The Dance of the Sugar Plum Fairy" from Pyotr Ilyich Tchaikovsky's ballet "The Nutcracker" and the overture from Georges Bizet's opera "Carmen". The former replaces the arrangement of "Korobeiniki", present in the Game Boy version, which has become strongly associated with "Tetris". Release and reception. "Tetris" was marketed extensively for the 1989 Christmas season. 
Television advertising used the slogan "You've been Tetris-ized!", referring to the Tetris effect. The tagline "From Russia with fun!" appears on the game's cover, referencing "From Russia, with Love" by Ian Fleming. In its first six months of release by 1990, Nintendo's NES version of "Tetris" had sales of 1.5 million copies totaling $ (equivalent to $ in 2023), surpassing Spectrum HoloByte's versions for personal computers at 150,000 copies for $ (equivalent to $ in 2023) in the previous two years since 1988. As of 2004[ [update]], 8 million copies of the NES version were sold worldwide. In 1991, "Tetris" was included as a pack-in game with some European NES consoles. Unlike the Game Boy version, the NES release was not made available for purchase on Nintendo's Virtual Console. Michael Suck of "Aktueller Software Markt" considered the game a success, praising the adjustable starting level and music. Suck considered the graphics to be adequate, noting that they do not overwhelm the senses. IGN noted that "almost everyone" regarded Nintendo's "Tetris" as inferior to Atari's "Tetris", which was pulled from shelves due to licensing issues. "Computer Entertainer" recommended Nintendo's "Tetris" only to consumers who had not played Atari's version, which it says has superior graphics, gameplay and options – further calling its removal from stores "unfortunate for players" of puzzle games. "Electronic Gaming Monthly" called Atari's version "more playable and in-depth" than Nintendo's. Legacy. "Tetris &amp; Dr. Mario" (1994) features an enhanced remake of "Tetris". "Tetris Effect: Connected" (2020) includes a game mode that simulates the rules and visuals of "Tetris" for the NES. The events that lead to Nintendo acquiring the license to publish a "Tetris" game for consoles are explained in BBC's TV documentary "" (2004), as well as a dramatic retelling in "Tetris" (2023). Since 2018, Nintendo's "Tetris" has experienced a resurgence in popularity with a younger audience. In 2020, more people attained a max-out score than from 1990 to 2019 combined. Use in research. In 2018, 11 classic "Tetris" experts were instructed to play with the next piece preview window disabled ("no next box"). Their average score was found to drop dramatically, from 465,371 in control games to 6,457 with no next box. The author notes that even though one participant went on to become that year's world champion, no player was recorded scoring a Tetris during any of the no next box games. In a 2023 study, 160 people were recorded playing classic "Tetris". The recordings suggested that novice players blink less than usual while playing "Tetris", whereas experienced players remained closer to their normal blinking rate. The study concludes that a person's "Tetris" ability can be assessed by their blink rate during the first minute of play. In contrast, seven-time world champion Jonas Neubauer manually suppressed his blink reflex while playing, leading to health concerns and his regular use of eye drops. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
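The scoring rules quoted in the Scoring section above can be summarised in a few lines of C. The sketch below is written for illustration only; it is not the game's actual 6502 code, and the identifiers are invented:

/* Base values from the Scoring section: single 40, double 100,
   triple 300, Tetris 1200; each is multiplied by (level + 1). */
static const long base_points[5] = { 0, 40, 100, 300, 1200 };

long line_clear_score(int lines_cleared, int level)
{
    if (lines_cleared < 1 || lines_cleared > 4)
        return 0;                              /* no lines, no points */
    return base_points[lines_cleared] * (long)(level + 1);
}
/* Examples matching the text: line_clear_score(1, 0) == 40,
   line_clear_score(4, 0) == 1200, line_clear_score(4, 29) == 36000. */

The Implementation section notes that piece selection relies on a 16-bit linear-feedback shift register because the NES has no hardware random number generator. The step function below shows the general idea of such a register; the tap positions are a common textbook maximal-length choice and are not claimed to be the taps the game's code actually uses:

#include <stdint.h>

/* One step of a right-shifting 16-bit Fibonacci LFSR: XOR a few tap bits
   and feed the result back in at bit 15. The state must be seeded with a
   nonzero value, since zero is a fixed point. */
uint16_t lfsr_step(uint16_t state)
{
    uint16_t bit = (uint16_t)(((state >> 0) ^ (state >> 2) ^
                               (state >> 3) ^ (state >> 5)) & 1u);
    return (uint16_t)((state >> 1) | (bit << 15));
}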
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "40(n+1) " }, { "math_id": 2, "text": "100(n+1)" }, { "math_id": 3, "text": "300(n+1)" }, { "math_id": 4, "text": "1200(n+1)" } ]
https://en.wikipedia.org/wiki?curid=61270051
61275
Cathodoluminescence
Photon emission under the impact of an electron beam Cathodoluminescence is an optical and electromagnetic phenomenon in which electrons impacting on a luminescent material, such as a phosphor, cause the emission of photons which may have wavelengths in the visible spectrum. A familiar example is the generation of light by an electron beam scanning the phosphor-coated inner surface of the screen of a television that uses a cathode ray tube. Cathodoluminescence is the inverse of the photoelectric effect, in which electron emission is induced by irradiation with photons. Origin. Luminescence in a semiconductor results when an electron in the conduction band recombines with a hole in the valence band. The energy difference (band gap) of this transition can be emitted in the form of a photon. The energy (color) of the photon, and the probability that a photon and not a phonon will be emitted, depend on the material, its purity, and the presence of defects. First, the electron has to be excited from the valence band into the conduction band. In cathodoluminescence, this occurs as the result of a high-energy electron beam impinging onto a semiconductor. However, these primary electrons carry far too much energy to directly excite electrons. Instead, the inelastic scattering of the primary electrons in the crystal leads to the emission of secondary electrons, Auger electrons and X-rays, which in turn can scatter as well. Such a cascade of scattering events leads to up to 10³ secondary electrons per incident electron. These secondary electrons can excite valence electrons into the conduction band when they have a kinetic energy of about three times the band gap energy formula_0 of the material. From there, the electron recombines with a hole in the valence band and creates a photon. The excess energy is transferred to phonons and thus heats the lattice. One of the advantages of excitation with an electron beam is that the band gap energy of materials that are investigated is not limited by the energy of the incident light as in the case of photoluminescence. Therefore, in cathodoluminescence, the "semiconductor" examined can, in fact, be almost any non-metallic material. In terms of band structure, classical semiconductors, insulators, ceramics, gemstones, minerals, and glasses can be treated the same way. Microscopy. In geology, mineralogy, materials science and semiconductor engineering, a scanning electron microscope (SEM) fitted with a cathodoluminescence detector, or an optical cathodoluminescence microscope, may be used to examine internal structures of semiconductors, rocks, ceramics, glass, etc. in order to get information on the composition, growth and quality of the material. Optical cathodoluminescence microscope. A cathodoluminescence (CL) microscope combines a regular (light optical) microscope with a cathode-ray tube. It is designed to image the luminescence characteristics of polished thin sections of solids irradiated by an electron beam. Using a cathodoluminescence microscope, structures within crystals or fabrics can be made visible that cannot be seen under normal lighting conditions. Thus, for example, valuable information on the growth of minerals can be obtained. CL-microscopy is used in geology, mineralogy and materials science for the investigation of rocks, minerals, volcanic ash, glass, ceramic, concrete, fly ash, etc. CL color and intensity are dependent on the characteristics of the sample and on the working conditions of the electron gun. 
Here, acceleration voltage and beam current of the electron beam are of major importance. Today, two types of CL microscopes are in use. One works with a "cold cathode" generating an electron beam by a corona discharge tube, the other produces a beam using a "hot cathode". Cold-cathode CL microscopes are the simplest and most economical type. Unlike other electron bombardment techniques like electron microscopy, cold cathodoluminescence microscopy provides positive ions along with the electrons, which neutralize surface charge buildup and eliminate the need for conductive coatings to be applied to the specimens. The "hot cathode" type generates an electron beam by an electron gun with a tungsten filament. The advantage of a hot cathode is the precisely controllable high beam intensity, which allows the emission of light to be stimulated even in weakly luminescing materials (e.g. quartz – see picture). To prevent charging of the sample, the surface must be coated with a conductive layer of gold or carbon. This is usually done by a sputter deposition device or a carbon coater. Cathodoluminescence from a scanning electron microscope. In scanning electron microscopes a focused beam of electrons impinges on a sample and induces it to emit light that is collected by an optical system, such as an elliptical mirror. From there, a fiber optic will transfer the light out of the microscope where it is separated into its component wavelengths by a monochromator and is then detected with a photomultiplier tube. By scanning the microscope's beam in an X-Y pattern and measuring the light emitted with the beam at each point, a map of the optical activity of the specimen can be obtained (cathodoluminescence imaging). Alternatively, by measuring the wavelength dependence for a fixed point or a certain area, the spectral characteristics can be recorded (cathodoluminescence spectroscopy). Furthermore, if the photomultiplier tube is replaced with a CCD camera, an entire spectrum can be measured at each point of a map (hyperspectral imaging). Moreover, the optical properties of an object can be correlated to structural properties observed with the electron microscope. The primary advantage of the electron microscope based technique is its spatial resolution. In a scanning electron microscope, the attainable resolution is on the order of a few tens of nanometers, while in a (scanning) transmission electron microscope (TEM), nanometer-sized features can be resolved. Additionally, it is possible to perform nanosecond- to picosecond-level time-resolved measurements if the electron beam can be "chopped" into nano- or pico-second pulses by a beam-blanker or with a pulsed electron source. These advanced techniques are useful for examining low-dimensional semiconductor structures, such as quantum wells or quantum dots. While an electron microscope with a cathodoluminescence detector provides high magnification, an optical cathodoluminescence microscope benefits from its ability to show actual visible color features directly through the eyepiece. More recently developed systems try to combine both an optical and an electron microscope to take advantage of both these techniques. Extended applications. Although direct bandgap semiconductors such as GaAs or GaN are most easily examined by these techniques, indirect semiconductors such as silicon also emit weak cathodoluminescence, and can be examined as well. 
In particular, the luminescence of dislocated silicon is different from that of intrinsic silicon, and can be used to map defects in integrated circuits. Recently, cathodoluminescence performed in electron microscopes is also being used to study surface plasmon resonances in metallic nanoparticles. Surface plasmons in metal nanoparticles can absorb and emit light, though the process is different from that in semiconductors. Similarly, cathodoluminescence has been exploited as a probe to map the local density of states of planar dielectric photonic crystals and nanostructured photonic materials.
[ { "math_id": 0, "text": "(E_{kin}\\approx 3 E_g)" } ]
https://en.wikipedia.org/wiki?curid=61275
61287985
Assembly line feeding problem
The assembly line feeding problem (abbr. ALFP) describes a problem in operations management concerned with finding the optimal way of feeding parts to assembly stations. For this, various cost elements may be taken into account and every part is assigned to a policy, i.e., a way of feeding parts to an assembly line. The most common policies are line stocking, boxed supply, sequencing, stationary kitting, and traveling kitting. These policies differ with respect to the way parts are brought to the line as well as in the way parts are handled before they are brought to the line. E.g., in line stocking, parts are brought to the line directly in the way they are stored in the warehouse. In the other policies, quantities are reduced (boxed supply) and different part variants are sorted in the order of demand (sequencing, stationary, and traveling kitting). History. The problem was formally introduced by Bozer and McGinnis in 1992 by means of a descriptive cost model. Since then, many contributions have been made in both quantitative and qualitative manners. E.g., a more qualitative contribution is made by Hua and Johnson, investigating important aspects of the problem, whereas more recent contributions focus rather on quantitative aspects and use mathematical optimization to solve this assignment problem to optimality. Mathematical problem statement. formula_0 This model minimizes the costs formula_1 when assigning all parts (index: i) to a feeding policy (index: p) at all stations (index: s), i.e. setting formula_2, if there is a demand for a part at a station (formula_3). Using a certain policy at a station (formula_4) incurs a fixed cost formula_5, and a further cost formula_6 is incurred when a policy is used at any station at all (formula_7). All assembly line feeding problems of this type have been proven to be NP-hard.
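The cost structure above can be illustrated with a small brute-force sketch. The instance below (parts, stations, policies, demands and all cost numbers) is entirely hypothetical and only meant to show how the variable, station-policy and policy costs interact; realistic instances are solved with integer programming, since the problem is NP-hard.

```python
from itertools import product

# Hypothetical toy instance of the assembly line feeding problem: every
# part-station pair with demand is assigned one feeding policy, and the
# cheapest complete assignment is found by enumeration.
parts = ["i1", "i2", "i3"]
stations = ["s1", "s2"]
policies = ["line stocking", "boxed supply", "kitting"]

# Pairs (part, station) with positive demand (lambda_is > 0).
demand_pairs = [("i1", "s1"), ("i2", "s1"), ("i2", "s2"), ("i3", "s2")]

# Invented costs: c^v_isp (variable), c^f_sp (policy used at a station),
# c^f_p (policy used at any station at all).
c_var = {(i, s, p): 1 + (parts.index(i) + stations.index(s) + policies.index(p)) % 4
         for i in parts for s in stations for p in policies}
c_station = {(s, p): 3 for s in stations for p in policies}
c_policy = {p: 5 for p in policies}

best_cost, best_assign = float("inf"), None
for choice in product(policies, repeat=len(demand_pairs)):
    assign = dict(zip(demand_pairs, choice))                      # chi_isp = 1
    used_sp = {(s, p) for (i, s), p in assign.items()}            # psi_sp = 1
    used_p = set(assign.values())                                 # Omega_p = 1
    cost = (sum(c_var[i, s, p] for (i, s), p in assign.items())
            + sum(c_station[sp] for sp in used_sp)
            + sum(c_policy[p] for p in used_p))
    if cost < best_cost:
        best_cost, best_assign = cost, assign

print("minimum total cost:", best_cost)
for (i, s), p in sorted(best_assign.items()):
    print(f"  part {i} at station {s} -> {p}")
```

Because the fixed costs are shared whenever several parts use the same policy at the same station, the cheapest solution tends to reuse a small number of policies rather than picking the cheapest variable cost for every pair independently.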
[ { "math_id": 0, "text": "\n\\begin{align}\n\\text{minimize:}\\\\\nC &= \\sum_{i \\in I} \\sum_{s \\in S} \\sum_{p \\in P} \\chi_{isp} \\cdot c_{isp}^v +\\sum_{s \\in S}\\sum_{p \\in P} \\psi_{sp} \\cdot c_{sp}^f + \\sum_{p \\in P} \\Omega_{p} \\cdot c_{p}^f\\\\\n\\text{subject to:}\\\\\n\\sum_{p \\in P} \\chi_{isp} &= min\\{1,\\lambda_{is}\\} &\\forall &i \\in I~\\forall s \\in S \\\\\n\\underset{i \\in I}{\\operatorname{max}}~\\{\\chi_{isp}\\} &\\leq \\psi_{sp} &\\forall &s \\in S~\\forall p \\in P \\\\\n\\underset{i \\in I, s \\in S}{\\operatorname{max}}~\\{\\chi_{isp}\\} &\\leq \\Omega_{p} &\\forall &p \\in P \\\\\n\\chi_{isp} &\\in \\{0,1\\} &\\forall &i \\in I~\\forall s \\in S~\\forall p \\in P \\\\\n\\psi_{sp} &\\in \\{0,1\\} &\\forall &s \\in S~\\forall p \\in P \\\\\n\\Omega_{p} &\\in \\{0,1\\} &\\forall &p \\in P\n\\end{align}\n" }, { "math_id": 1, "text": "c_{isp}" }, { "math_id": 2, "text": "\\chi_{isp}=1" }, { "math_id": 3, "text": "\\lambda_{is}>0" }, { "math_id": 4, "text": "\\psi_{sp}=1" }, { "math_id": 5, "text": "c_{sp}" }, { "math_id": 6, "text": "c_{p}" }, { "math_id": 7, "text": "\\Omega_{p}=1" } ]
https://en.wikipedia.org/wiki?curid=61287985
6129269
Transport of structure
Property of structural isomorphism In mathematics, particularly in universal algebra and category theory, transport of structure refers to the process whereby a mathematical object acquires a new structure and its canonical definitions, as a result of being isomorphic to (or otherwise identified with) another object with a pre-existing structure. Definitions by transport of structure are regarded as canonical. Since mathematical structures are often defined in reference to an underlying space, many examples of transport of structure involve spaces and mappings between them. For example, if "formula_0" and "formula_1" are vector spaces with formula_2 being an inner product on formula_1, such that there is an isomorphism formula_3 from "formula_0" to "formula_1", then one can define an inner product formula_4 on "formula_0" by the following rule: formula_5 Although the equation makes sense even when formula_3 is not an isomorphism, it only defines an inner product on "formula_0" when formula_3 is, since otherwise it will cause formula_6 to be degenerate. The idea is that formula_3 allows one to consider "formula_0" and "formula_1" as "the same" vector space, and by following this analogy, then one can transport an inner product from one space to the other. A more elaborated example comes from differential topology, in which the notion of smooth manifold is involved: if formula_7 is such a manifold, and if "formula_8" is any topological space which is homeomorphic to "formula_7", then one can consider "formula_8" as a smooth manifold as well. That is, given a homeomorphism formula_9, one can define coordinate charts on "formula_8" by "pulling back" coordinate charts on "formula_7" through formula_3. Recall that a coordinate chart on formula_7 is an open set "formula_10" together with an injective map formula_11 for some natural number "formula_12"; to get such a chart on "formula_8", one uses the following rules: formula_13 and formula_14. Furthermore, it is required that the charts cover "formula_7" (the fact that the transported charts cover "formula_8" follows immediately from the fact that formula_3 is a bijection). Since "formula_7" is a "smooth" manifold, if "U" and "V", with their maps formula_11 and formula_15, are two charts on "formula_7", then the composition, the "transition map" formula_16 (a self-map of formula_17) is smooth. To verify this for the transported charts on "formula_8", notice that formula_18, and therefore formula_19, and formula_20. Thus the transition map for formula_21 and formula_22 is the same as that for "formula_10" and "formula_0", hence smooth. That is, "formula_8" is a smooth manifold via transport of structure. This is a special case of transport of structures in general. The second example also illustrates why "transport of structure" is not always desirable. Namely, one can take "formula_7" to be the plane, and "formula_8" to be an infinite one-sided cone. By "flattening" the cone, a homeomorphism of "formula_8" and "formula_7" can be obtained, and therefore the structure of a smooth manifold on "formula_8", but the cone is not "naturally" a smooth manifold. That is, one can consider "formula_8" as a subspace of 3-space, in which context it is not smooth at the cone point. A more surprising example is that of exotic spheres, discovered by Milnor, which states that there are exactly 28 smooth manifolds which are homeomorphic but "not" diffeomorphic to formula_23, the 7-dimensional sphere in 8-space. 
Thus, transport of structure is most productive when there exists a canonical isomorphism between the two objects.
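A minimal numerical sketch of the first example is given below: the matrix phi stands in for the isomorphism, both spaces are taken to be R² as sets, and the target carries the standard dot product. These specific choices are illustrative assumptions, not data from the text above.

```python
import numpy as np

# Transport an inner product along a linear isomorphism phi: V -> W.
phi = np.array([[2.0, 1.0],
                [0.0, 1.0]])            # an isomorphism (det = 2, nonzero)

def transported_inner(v1, v2):
    # [v1, v2] := ( phi(v1), phi(v2) ), the pulled-back inner product on V
    return np.dot(phi @ v1, phi @ v2)

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
print(transported_inner(v1, v1))        # 4.0 : the squared length of phi(e1)
print(transported_inner(v1, v2))        # 2.0 : e1 and e2 need not stay orthogonal

# Equivalently, the transported inner product is the bilinear form with Gram
# matrix phi^T phi, which is positive definite exactly because phi is injective.
G = phi.T @ phi
print(np.allclose(transported_inner(v1, v2), v1 @ G @ v2))   # True
```

If phi were singular, the Gram matrix phi^T phi would only be positive semi-definite, which is the degeneracy mentioned in the article.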
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "W" }, { "math_id": 2, "text": "(\\cdot,\\cdot)" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "[\\cdot, \\cdot]" }, { "math_id": 5, "text": "[v_1, v_2] = (\\phi(v_1), \\phi(v_2))" }, { "math_id": 6, "text": "[\\cdot,\\cdot]" }, { "math_id": 7, "text": "M" }, { "math_id": 8, "text": "X" }, { "math_id": 9, "text": "\\phi \\colon X \\to M" }, { "math_id": 10, "text": "U" }, { "math_id": 11, "text": "c \\colon U \\to \\mathbb{R}^n" }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": "U' = \\phi^{-1}(U)" }, { "math_id": 14, "text": "c' = c \\circ \\phi" }, { "math_id": 15, "text": "d \\colon V \\to \\mathbb{R}^n" }, { "math_id": 16, "text": "d \\circ c^{-1} \\colon c(U \\cap V) \\to \\mathbb{R}^n" }, { "math_id": 17, "text": "\\mathbb{R}^n" }, { "math_id": 18, "text": "\\phi^{-1}(U) \\cap \\phi^{-1}(V) = \\phi^{-1}(U \\cap V)" }, { "math_id": 19, "text": "c'(U' \\cap V') = (c \\circ \\phi)(\\phi^{-1}(U \\cap V)) = c(U \\cap V)" }, { "math_id": 20, "text": "d' \\circ (c')^{-1} = (d \\circ \\phi) \\circ (c \\circ \\phi)^{-1} = d \\circ (\\phi \\circ \\phi^{-1}) \\circ c^{-1} = d \\circ c^{-1}" }, { "math_id": 21, "text": "U'" }, { "math_id": 22, "text": "V'" }, { "math_id": 23, "text": "S^7" } ]
https://en.wikipedia.org/wiki?curid=6129269
61292787
Counting Bloom filter
A counting Bloom filter is a probabilistic data structure that is used to test whether the number of occurrences of a given element in a sequence exceeds a given threshold. As a generalized form of the Bloom filter, false positive matches are possible, but false negatives are not – in other words, a query returns either "possibly greater than or equal to the threshold" or "definitely smaller than the threshold". Algorithm description. Most of the parameters are defined in the same way as for the Bloom filter, such as "m" and "k". Here "m" is the number of counters in the counting Bloom filter, a generalization of the "m" bits in the Bloom filter. An "empty counting Bloom filter" is an array of "m" counters, all set to 0. Similar to the Bloom filter, there must also be "k" different hash functions defined, each of which maps or hashes some set element to one of the "m" counter array positions, generating a uniform random distribution. As in the Bloom filter, "k" is a constant, much smaller than "m", and "m" is proportional to the number of elements to be added. The main generalization of the Bloom filter is in adding an element. To "add" an element, feed it to each of the "k" hash functions to get "k" array positions and "increment" the counters by 1 at all these positions. To "query" for an element with a threshold "θ" (test whether the count of an element is smaller than "θ"), feed it to each of the "k" hash functions to get "k" counter positions. If any of the counters at these positions is less than "θ", the count of the element is definitely less than "θ" – if it were greater than or equal, then all the corresponding counters would have been greater than or equal to "θ". If all are greater than or equal to "θ", then either the count really is greater than or equal to "θ", or the counters have by chance been incremented to values greater than or equal to "θ". If all counters are greater than or equal to "θ" even though the true count is smaller than "θ", this circumstance is defined as a false positive. As in the Bloom filter, the false positive rate should be minimized. For the hashing problem and the advantages of the data structure, see Bloom filter. A counting Bloom filter is essentially the same data structure as the count–min sketch, but the two are used differently. Potential for false negatives. Several implementations of counting Bloom filters allow for deletion, by decrementing each of the "k" counters for a given input. This introduces the possibility of false negatives during a query if the deleted input has not previously been inserted into the filter. Guo "et al." present the problem in great detail, and provide heuristics for the parameters "m", "k", and "n" which minimize the probability of false negatives. Probability of false positives. The same assumption as in the Bloom filter – that the hash functions make insertions uniformly random – is also made here. Among the "m" pots, "kn" balls are inserted randomly. So the probability that one of the counters in the counting Bloom filter holds the count "l" is formula_0, where "b" is the binomial distribution. A counting Bloom filter determines that an element's count is greater than or equal to "θ" when the corresponding "k" counters are all greater than or equal to "θ". Therefore, the probability that the counting Bloom filter determines an element's count to be greater than or equal to "θ" is formula_1. This differs from the formal definition of a false positive in a counting Bloom filter. However, following the assumption in the Bloom filter, the above probability is defined as the false positive rate of the counting Bloom filter. If "θ" = 1, the equation becomes the false positive rate of the Bloom filter. Optimal number of hash functions. 
For large but fixed "n" and "m", the false positive rate decreases as "k" increases from 1 up to a point formula_2, and increases again as "k" grows from formula_2 towards positive infinity. Kim et al. (2019) show numerical values of formula_2 within formula_3. For formula_4 they suggest using the floor or ceiling of formula_5.
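The add and threshold-query operations can be sketched in a few lines. The double-hashing construction below (two SHA-256 digests combined as h1 + i·h2) is just one convenient way to obtain "k" hash functions and is an illustrative choice, as are the parameter values.

```python
import hashlib

class CountingBloomFilter:
    """Minimal sketch of a counting Bloom filter with a threshold query."""

    def __init__(self, m, k):
        self.m = m                  # number of counters
        self.k = k                  # number of hash functions
        self.counters = [0] * m

    def _positions(self, item):
        data = str(item).encode()
        h1 = int.from_bytes(hashlib.sha256(b"seed1" + data).digest(), "big")
        h2 = int.from_bytes(hashlib.sha256(b"seed2" + data).digest(), "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        # Increment the k counters selected by the hash functions.
        for pos in self._positions(item):
            self.counters[pos] += 1

    def at_least(self, item, theta):
        # False only when the true count is definitely below theta;
        # True means "possibly >= theta" (false positives are possible).
        return all(self.counters[pos] >= theta for pos in self._positions(item))

cbf = CountingBloomFilter(m=1000, k=4)
for _ in range(3):
    cbf.add("apple")
cbf.add("banana")
print(cbf.at_least("apple", 3))    # True  (true count is 3)
print(cbf.at_least("banana", 3))   # almost certainly False
print(cbf.at_least("cherry", 1))   # almost certainly False (never added)
```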
[ { "math_id": 0, "text": "b(l, kn, \\frac{1}{m}) = {kn \\choose l}(\\frac{1}{m})^l (1 - \\frac{1}{m})^{kn-l}" }, { "math_id": 1, "text": "p_{fp}(\\theta, k, n, m) = (1 - \\sum\\limits_{l < \\theta} b(l, kn, \\frac{1}{m}))^k" }, { "math_id": 2, "text": "k_{opt}" }, { "math_id": 3, "text": "1\\leq\\theta\\leq30" }, { "math_id": 4, "text": "\\theta > 30" }, { "math_id": 5, "text": "\\frac{m}{n} (0.2037\\theta + 0.9176)" } ]
https://en.wikipedia.org/wiki?curid=61292787
61293808
Interval predictor model
In regression analysis, an interval predictor model (IPM) is an approach to regression where bounds on the function to be approximated are obtained. This differs from other techniques in machine learning, where usually one wishes to estimate point values or an entire probability distribution. Interval Predictor Models are sometimes referred to as a nonparametric regression technique, because a potentially infinite set of functions is contained by the IPM, and no specific distribution is implied for the regressed variables. Multiple-input multiple-output IPMs for multi-point data commonly used to represent functions have recently been developed. These IPMs prescribe the parameters of the model as a path-connected, semi-algebraic set using sliced-normal or sliced-exponential distributions. A key advantage of this approach is its ability to characterize complex parameter dependencies to varying fidelity levels. This practice enables the analyst to adjust the desired level of conservatism in the prediction. As a consequence of the theory of scenario optimization, in many cases rigorous predictions can be made regarding the performance of the model at test time. Hence an interval predictor model can be seen as a guaranteed bound on quantile regression. Interval predictor models can also be seen as a way to prescribe the support of random predictor models, of which a Gaussian process is a specific case. Convex interval predictor models. Typically the interval predictor model is created by specifying a parametric function, which is usually chosen to be the product of a parameter vector and a basis. Usually the basis is made up of polynomial features, although a radial basis is sometimes used. Then a convex set is assigned to the parameter vector, and the size of the convex set is minimized such that every possible data point can be predicted by one possible value of the parameters. Ellipsoidal parameter sets were used by Campi (2009), which yields a convex optimization program to train the IPM. Crespo (2016) proposed the use of a hyperrectangular parameter set, which results in a convenient, linear form for the bounds of the IPM. Hence the IPM can be trained with a linear optimization program: formula_0 where the training data examples are formula_1 and formula_2, and the Interval Predictor Model bounds formula_3 and formula_4 are parameterised by the parameter vector formula_5. The reliability of such an IPM is obtained by noting that for a convex IPM the number of support constraints is less than the dimensionality of the trainable parameters, and hence the scenario approach can be applied. Lacerda (2017) demonstrated that this approach can be extended to situations where the training data is interval valued rather than point valued. Non-convex interval predictor models. In Campi (2015) a non-convex theory of scenario optimization was proposed. This involves measuring the number of support constraints, formula_6, for the Interval Predictor Model after training and hence making predictions about the reliability of the model. This enables non-convex IPMs to be created, such as a single layer neural network. Campi (2015) demonstrates an algorithm where the scenario optimization program is solved only formula_6 times, which can determine the reliability of the model at test time without a prior evaluation on a validation set. This is achieved by solving the optimisation program formula_7 where the interval predictor model center line formula_8, and the model width formula_9. 
This results in an IPM which makes predictions with homoscedastic uncertainty. Sadeghi (2019) demonstrates that the non-convex scenario approach from Campi (2015) can be extended to train deeper neural networks which predict intervals with heteroscedastic uncertainty on datasets with imprecision. This is achieved by proposing generalizations to the max-error loss function given by formula_10, which is equivalent to solving the optimisation program proposed by Campi (2015). Applications. Initially, scenario optimization was applied to robust control problems. Crespo (2015) and (2021) applied Interval Predictor Models to the design of space radiation shielding and to system identification. In Patelli (2017), Faes (2019), and Crespo (2018), Interval Predictor models were applied to the structural reliability analysis problem. Brandt (2017) applies interval predictor models to fatigue damage estimation of offshore wind turbine jacket substructures. Garatti (2019) proved that Chebyshev layers (i.e., the minimax layers around functions fitted by linear formula_11-regression) belong to a particular class of Interval Predictor Models, for which the reliability is invariant with respect to the distribution of the data. Software implementations. OpenCOSSAN provides a Matlab implementation of the work of Crespo (2015).
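A small sketch of training a convex IPM as a linear program is given below. The toy data, the quadratic basis and the use of SciPy's linprog are all illustrative assumptions; the program simply minimises the average band width subject to every training point lying between the lower and upper bounds, in the spirit of the optimization program shown above rather than as a reproduction of any specific published formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 0.5 * x**2 + 0.3 * x + 0.1 * rng.standard_normal(40)     # toy data

Phi = np.column_stack([np.ones_like(x), x, x**2])             # basis phi(x)
n, d = Phi.shape

# Decision vector z = [p_lower (d entries), p_upper (d entries)];
# objective = mean over the data of (upper bound - lower bound).
c = np.concatenate([-Phi.mean(axis=0), Phi.mean(axis=0)])

# Containment constraints: Phi @ p_lower <= y  and  Phi @ p_upper >= y.
A_ub = np.block([[Phi, np.zeros((n, d))],
                 [np.zeros((n, d)), -Phi]])
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * d))
p_lo, p_up = res.x[:d], res.x[d:]
print("average band width:", (Phi @ p_up - Phi @ p_lo).mean())
```

Every training point is enclosed by the fitted band, so the number of active (support) constraints stays small, which is what the scenario-theory reliability statements exploit.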
[ { "math_id": 0, "text": "\n\\operatorname{arg\\,min}_p \\left\\{\\mathbb{E}_x(\\bar{y}_p(x) - \\underline{y}_p(x)) : \\bar{y}_p(x^{(i)}) > y^{(i)} > \\underline{y}_p(x^{(i)}), i=1,\\ldots,N \\right\\}\n" }, { "math_id": 1, "text": " y^{(i)}" }, { "math_id": 2, "text": " x^{(i)}" }, { "math_id": 3, "text": " \\underline{y}_p(x)" }, { "math_id": 4, "text": "\\overline{y}_p(x) " }, { "math_id": 5, "text": " p " }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "\n\\operatorname{arg\\,min}_p \\left\\{h : |\\hat{y}_p(x^{(i)}) - y^{(i)}| < h, i=1,\\ldots,N\\right\\},\n" }, { "math_id": 8, "text": " \\hat{y}_p(x) = (\\overline{y}_p(x) + \\underline{y}_p(x)) \\times 1/2" }, { "math_id": 9, "text": " h = (\\overline{y}_p(x) - \\underline{y}_p(x)) \\times 1/2 " }, { "math_id": 10, "text": "\n\\mathcal{L}_{\\text{max-error}} = \\max_i |y^{(i)}-\\hat{y}_p(x^{(i)})|,\n" }, { "math_id": 11, "text": "\\ell_\\infty" } ]
https://en.wikipedia.org/wiki?curid=61293808
6129563
Point-to-point Lee model
Radio propagation model The Lee model for the point-to-point mode is a radio propagation model that operates around 900 MHz. Built as one of the model's two modes, it includes an adjustment factor that can be tuned to make the model more flexible for different regions of propagation. Applicable to/under conditions. This model is suitable for use with data collected in a specific area for point-to-point links. Coverage. Frequency: 900 MHz band Mathematical formulation. The model. The Lee model for the point-to-point mode is formally expressed as: formula_0 where, "L" = The median path loss. Unit: decibel (dB). "L"0 = The reference path loss along 1 km. Unit: decibel (dB). formula_1 = The slope of the path loss curve. Unit: decibels per decade. "d" = The distance over which the path loss is to be calculated. Unit: kilometer (km). "F"A = Adjustment factor "H"ET = Effective height of terrain. Unit: meter (m). Calculation of reference path loss. The reference path loss is usually computed along a 1 km or 1 mi link. Any other suitable length of path can be chosen based on the applications. formula_2 where, "G"B = Base station antenna gain. Unit: decibel with respect to isotropic antenna (dBi). formula_3 = Wavelength. Unit: meter (m). "G"M = Mobile station antenna gain. Unit: decibel with respect to isotropic antenna (dBi). Calculation of adjustment factors. The adjustment factor is calculated as: formula_4 where, "F"BH = Base station antenna height correction factor "F"BG = Base station antenna gain correction factor "F"MH = Mobile station antenna height correction factor "F"MG = Mobile station antenna gain correction factor "F"F = Frequency correction factor The base station antenna height correction factor. formula_5 where, "h"B = Base station antenna height. Unit: meter. The base station antenna gain correction factor. formula_6 where, "G"B = Base station antenna gain. Unit: decibel with respect to half-wave dipole (dBd). The mobile station antenna height correction factor. formula_7 where, "h"M = Mobile station antenna height. Unit: meter. The mobile antenna gain correction factor. formula_8 where, "G"M = Mobile station antenna gain. Unit: decibel with respect to half wave dipole antenna (dBd). The frequency correction factor. formula_9 where, "f" = Frequency. Unit: megahertz (MHz). Effective terrain slope calculation. This is computed in the following way:
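A short sketch of the correction factors and the median path-loss expression above is given below. All numerical inputs (gains, heights, reference loss and slope) are invented for illustration, the logarithms are taken as base-10, the distance term is read as the slope γ multiplying log d, and the effective-terrain-slope step, which is not reproduced in the text above, is not implemented.

```python
import math

def adjustment_factor(h_base_m, g_base_dbd, h_mobile_m, g_mobile_dbd, f_mhz, n=2.5):
    # Correction factors F1..F5 exactly as displayed in the article's formulas.
    f1 = (h_base_m / 30.48) ** 2                                    # base antenna height
    f2 = g_base_dbd / 4                                             # base antenna gain
    f3 = (h_mobile_m / 3) if h_mobile_m <= 3 else (h_mobile_m / 3) ** 2
    f4 = g_mobile_dbd                                                # mobile antenna gain
    f5 = (f_mhz / 900) ** (-n)                                       # frequency, 2 < n < 3
    return f1 * f2 * f3 * f4 * f5

def lee_path_loss(d_km, L0_db, gamma_db_per_decade, F_A, H_ET_m=30.0):
    # Median path loss L = L0 + gamma*log10(d) - 10*(log10(F_A) - 2*log10(H_ET/30)).
    return (L0_db + gamma_db_per_decade * math.log10(d_km)
            - 10 * (math.log10(F_A) - 2 * math.log10(H_ET_m / 30.0)))

F_A = adjustment_factor(h_base_m=30.48, g_base_dbd=6, h_mobile_m=3,
                        g_mobile_dbd=1, f_mhz=900)
print("F_A =", F_A)                                                  # 1.5 with these toy numbers
print("L   =", round(lee_path_loss(d_km=5, L0_db=110,
                                   gamma_db_per_decade=38.4, F_A=F_A), 1), "dB")
```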
[ { "math_id": 0, "text": "L = L_0 + \\gamma g \\log d - 10 \\left(\\log {F_A} - 2 \\log \\left(\\frac{H_{ET}}{30}\\right)\\right)" }, { "math_id": 1, "text": "\\gamma\\;" }, { "math_id": 2, "text": "L_0 = G_B + G_M + 20 \\left( \\log \\lambda - \\log d\\right) - 22" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "F_A = F_{BH} F_{BG} F_{MH} F_{MG} F_{F} " }, { "math_id": 5, "text": "F_1 = \\left(\\frac{h_B} {30.48}\\right)^2" }, { "math_id": 6, "text": "F_2 = \\frac{G_B}{4}" }, { "math_id": 7, "text": "F_3 = \\begin{cases} \\frac{h_M}{3} \\qquad \\text{ if, } h_M \\le 3 \\\\ (\\frac{h_M}{3})^2 \\quad \\text{ if, } h_M > 3 \\end{cases}" }, { "math_id": 8, "text": "F_4 = G_M" }, { "math_id": 9, "text": "F_5 = \\left(\\frac{f}{900}\\right)^{-n} \\text{ for } 2< n <3" } ]
https://en.wikipedia.org/wiki?curid=6129563
6129627
Womersley number
Dimensionless expression of the pulsatile flow frequency in relation to viscous effects The Womersley number (formula_0 or formula_1) is a dimensionless number in biofluid mechanics and biofluid dynamics. It is a dimensionless expression of the pulsatile flow frequency in relation to viscous effects. It is named after John R. Womersley (1907–1958) for his work with blood flow in arteries. The Womersley number is important in keeping dynamic similarity when scaling an experiment. An example of this is scaling up the vascular system for experimental study. The Womersley number is also important in determining the thickness of the boundary layer to see if entrance effects can be ignored. The square root of this number is also referred to as the Stokes number, formula_2, due to the pioneering work done by Sir George Stokes on Stokes' second problem. Derivation. The Womersley number, usually denoted formula_0, is defined by the relation formula_3 where formula_4 is an appropriate length scale (for example the radius of a pipe), formula_5 is the angular frequency of the oscillations, and formula_6, formula_7, formula_8 are the kinematic viscosity, density, and dynamic viscosity of the fluid, respectively. The Womersley number is normally written in the powerless form formula_9 In the cardiovascular system, the pulsation frequency, density, and dynamic viscosity are constant; however, the characteristic length, which in the case of blood flow is the vessel diameter, changes by three orders of magnitude (OoM) between the aorta and fine capillaries. The Womersley number thus changes due to the variations in vessel size across the vasculature system. The Womersley number of human blood flow can be estimated as follows: formula_10 Below is a list of estimated Womersley numbers in different human blood vessels: It can also be written in terms of the dimensionless Reynolds number (Re) and Strouhal number (St): formula_11 The Womersley number arises in the solution of the linearized Navier–Stokes equations for oscillatory flow (presumed to be laminar and incompressible) in a tube. It expresses the ratio of the transient or oscillatory inertia force to the shear force. When formula_0 is small (1 or less), it means the frequency of pulsations is sufficiently low that a parabolic velocity profile has time to develop during each cycle, and the flow will be very nearly in phase with the pressure gradient, and will be given to a good approximation by Poiseuille's law, using the instantaneous pressure gradient. When formula_0 is large (10 or more), it means the frequency of pulsations is sufficiently large that the velocity profile is relatively flat or plug-like, and the mean flow lags the pressure gradient by about 90 degrees. Along with the Reynolds number, the Womersley number governs dynamic similarity. The boundary layer thickness formula_12 that is associated with the transient acceleration is inversely related to the Womersley number. This can be seen by recognizing the Stokes number as the square root of the Womersley number. formula_13 where formula_4 is a characteristic length. Biofluid mechanics. In a flow distribution network that progresses from a large tube to many small tubes (e.g. a blood vessel network), the frequency, density, and dynamic viscosity are (usually) the same throughout the network, but the tube radii change. Therefore, the Womersley number is large in large vessels and small in small vessels. 
As the vessel diameter decreases with each division, the Womersley number soon becomes quite small. The Womersley numbers tend to 1 at the level of the terminal arteries. In the arterioles, capillaries, and venules the Womersley numbers are less than one. In these regions the inertia force becomes less important and the flow is determined by the balance of viscous stresses and the pressure gradient. This is called microcirculation. Some typical values for the Womersley number in the cardiovascular system for a canine at a heart rate of 2 Hz are: It has been argued that universal biological scaling laws (power-law relationships that describe variation of quantities such as metabolic rate, lifespan, length, etc., with body mass) are a consequence of the need for energy minimization, the fractal nature of vascular networks, and the crossover from high to low Womersley number flow as one progresses from large to small vessels.
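A quick evaluation of the defining relation is shown below. The blood property values and vessel radii are typical literature figures chosen for illustration, not values taken from this article.

```python
import math

def womersley(radius_m, freq_hz, density_kg_m3, dyn_viscosity_pa_s):
    # alpha = L * sqrt(omega * rho / mu), with the radius as the length scale L
    omega = 2 * math.pi * freq_hz
    return radius_m * math.sqrt(omega * density_kg_m3 / dyn_viscosity_pa_s)

rho = 1060.0      # blood density, kg/m^3 (assumed typical value)
mu = 3.5e-3       # blood dynamic viscosity, Pa*s (assumed typical value)
f = 1.0           # heart rate of about 60 beats per minute

print("aorta (r ~ 1 cm):     ", round(womersley(0.010, f, rho, mu), 1))   # ~ 13.8, plug-like flow
print("arteriole (r ~ 15 um):", round(womersley(15e-6, f, rho, mu), 4))   # << 1, quasi-steady flow
```

The three-orders-of-magnitude change in radius translates directly into the swing from large to small Womersley numbers described above.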
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\text{Wo}" }, { "math_id": 2, "text": "\\text{Stk}=\\sqrt{\\text{Wo}}" }, { "math_id": 3, "text": "\\alpha^2 = \\frac{\\text{transient inertial force}}{\\text{viscous force}} = \\frac{ \\rho \\omega U}{\\mu U L^{-2} } = \\frac{ \\omega L^{2} }{\\mu \\rho^{-1} } = \\frac{ \\omega L^{2} }{\\nu} \\, ," }, { "math_id": 4, "text": "L" }, { "math_id": 5, "text": "\\omega" }, { "math_id": 6, "text": "\\nu" }, { "math_id": 7, "text": "\\rho" }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": "\\alpha = L \\left( \\frac{\\omega \\rho}{\\mu} \\right)^\\frac{1}{2} \\, ." }, { "math_id": 10, "text": "\\alpha = L \\left( \\frac{\\omega \\rho}{\\mu} \\right)^\\frac{1}{2} \\, ." }, { "math_id": 11, "text": "\\alpha = \\left( 2\\pi\\, \\mathrm{Re} \\, \\mathrm{St} \\right)^{1/2}\\, ." }, { "math_id": 12, "text": "\\delta" }, { "math_id": 13, "text": "\\delta = \\left( L/\\alpha \\right)= \\left( \\frac{L}{\\sqrt{\\mathrm{Wo}}}\\right), " } ]
https://en.wikipedia.org/wiki?curid=6129627
61298138
Swap test
Technique for comparing quantum states The swap test is a procedure in quantum computation that is used to check how much two quantum states differ, appearing first in the work of Barenco et al. and later rediscovered by Harry Buhrman, Richard Cleve, John Watrous, and Ronald de Wolf. It appears commonly in quantum machine learning, and is a circuit used for proofs-of-concept in implementations of quantum computers. Formally, the swap test takes two input states formula_0 and formula_1 and outputs a Bernoulli random variable that is 1 with probability formula_2 (where the expressions here use bra–ket notation). This allows one to, for example, estimate the squared inner product between the two states, formula_3, to formula_4 additive error by taking the average over formula_5 runs of the swap test. This requires formula_5 copies of the input states. The squared inner product roughly measures "overlap" between the two states, and can be used in linear-algebraic applications, including clustering quantum states. Explanation of the circuit. Consider two states: formula_0 and formula_1. The state of the system at the beginning of the protocol is formula_6. After the Hadamard gate, the state of the system is formula_7. The controlled SWAP gate transforms the state into formula_8. The second Hadamard gate results in formula_9 The measurement gate on the first qubit yields 0 with probability formula_10. If formula_11 and formula_12 are orthogonal formula_13, then the probability that 0 is measured is formula_14. If the states are equal formula_15, then the probability that 0 is measured is 1. In general, for formula_16 trials of the swap test using formula_16 copies of formula_0 and formula_16 copies of formula_1, the fraction of measurements that are zero is formula_17, so by taking formula_18, one can get arbitrary precision of this value. Below is the pseudocode for estimating the value of formula_19 using "P" copies of formula_1 and formula_0:

Inputs: "P" copies each of the "n"-qubit quantum states formula_1 and formula_0
Output: an estimate of formula_19

for "j" ranging from 1 to "P":
    initialize an ancilla qubit "A" in state formula_20
    apply a Hadamard gate to the ancilla qubit "A"
    for "i" ranging from 1 to "n":
        apply CSWAP to formula_21 and formula_22 (the "i"th qubit of the "j"th copy of formula_1 and formula_0), with "A" as the control qubit
    apply a Hadamard gate to the ancilla qubit "A"
    measure "A" in the formula_23 basis and record the measurement "M""j" as either a 0 or 1
compute formula_24
return formula_25 as the estimate of formula_19
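The circuit can be checked with a small statevector simulation. The sketch below hard-codes single-qubit input states (so the whole system is three qubits) and uses plain NumPy matrices for the Hadamard and controlled-SWAP gates; the particular input vectors are arbitrary test values.

```python
import numpy as np

# Qubit ordering: (ancilla, phi, psi), so the basis index is 4*a + 2*p + s.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I4 = np.eye(4)

SWAP = np.array([[1, 0, 0, 0],                 # two-qubit SWAP acting on (phi, psi)
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])
CSWAP = np.kron(P0, I4) + np.kron(P1, SWAP)    # ancilla controls the swap

def swap_test_p0(phi, psi):
    """Probability of measuring the ancilla in |0> after the swap-test circuit."""
    state = np.kron(np.array([1, 0]), np.kron(phi, psi))   # |0>|phi>|psi>
    state = np.kron(H, I4) @ state        # Hadamard on the ancilla
    state = CSWAP @ state                 # controlled swap of the two registers
    state = np.kron(H, I4) @ state        # second Hadamard on the ancilla
    return np.sum(np.abs(state[:4]) ** 2)  # ancilla = 0 amplitudes are the first half

phi = np.array([1, 1]) / np.sqrt(2)        # arbitrary normalized test states
psi = np.array([1, 0])

p0 = swap_test_p0(phi, psi)
overlap_sq = abs(np.vdot(psi, phi)) ** 2
print(p0, 0.5 + 0.5 * overlap_sq)          # both print 0.75 for these inputs
```

Repeating the measurement and averaging, as in the pseudocode above, turns this probability into an estimate of the squared inner product.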
[ { "math_id": 0, "text": "|\\phi\\rangle" }, { "math_id": 1, "text": "|\\psi\\rangle" }, { "math_id": 2, "text": "\\textstyle\\frac{1}{2} - \\frac{1}{2} {|\\langle \\psi | \\phi\\rangle|}^2" }, { "math_id": 3, "text": "{|\\langle \\psi | \\phi\\rangle|}^2" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "O(\\textstyle\\frac{1}{\\varepsilon^2})" }, { "math_id": 6, "text": "|0, \\phi, \\psi\\rangle" }, { "math_id": 7, "text": " \\frac{1}{\\sqrt{2}}(|0, \\phi, \\psi\\rangle + |1, \\phi, \\psi\\rangle)" }, { "math_id": 8, "text": " \\frac{1}{\\sqrt{2}}(|0, \\phi, \\psi\\rangle + |1, \\psi, \\phi\\rangle)" }, { "math_id": 9, "text": "\n\\frac{1}{2}(|0, \\phi, \\psi\\rangle + |1, \\phi, \\psi\\rangle + |0, \\psi, \\phi\\rangle - |1, \\psi, \\phi\\rangle) = \\frac{1}{2}|0\\rangle(|\\phi, \\psi\\rangle + |\\psi, \\phi\\rangle) + \\frac{1}{2}|1\\rangle(|\\phi, \\psi\\rangle - |\\psi, \\phi\\rangle)\n" }, { "math_id": 10, "text": "\nP(\\text{First qubit} = 0) = \\frac{1}{2} \\Big( \\langle \\phi | \\langle \\psi | + \\langle \\psi | \\langle \\phi | \\Big) \\frac{1}{2} \\Big( | \\phi \\rangle | \\psi \\rangle + | \\psi \\rangle | \\phi \\rangle \\Big) = \\frac{1}{2} + \\frac{1}{2} {|\\langle \\psi | \\phi\\rangle|}^2\n" }, { "math_id": 11, "text": "\\psi" }, { "math_id": 12, "text": "\\phi" }, { "math_id": 13, "text": "({|\\langle \\psi | \\phi\\rangle|}^2 = 0)" }, { "math_id": 14, "text": "\\frac{1}{2}" }, { "math_id": 15, "text": "({|\\langle \\psi | \\phi\\rangle|}^2 = 1)" }, { "math_id": 16, "text": "P" }, { "math_id": 17, "text": "1 - \\textstyle\\frac{1}{P} \\textstyle\\sum_{i = 1}^{P} M_i" }, { "math_id": 18, "text": "P \\rightarrow \\infty" }, { "math_id": 19, "text": "|\\langle \\psi | \\phi \\rangle |^2" }, { "math_id": 20, "text": "|0\\rangle" }, { "math_id": 21, "text": "\\psi_i" }, { "math_id": 22, "text": "\\phi_i" }, { "math_id": 23, "text": "Z" }, { "math_id": 24, "text": "s = 1 - \\textstyle\\frac{2}{P} \\textstyle\\sum_{i = 1}^{P} M_i" }, { "math_id": 25, "text": "s" } ]
https://en.wikipedia.org/wiki?curid=61298138
6129873
Schouten tensor
In Riemannian geometry the Schouten tensor is a second-order tensor introduced by Jan Arnoldus Schouten defined for "n" ≥ 3 by: formula_0 where Ric is the Ricci tensor (defined by contracting the first and third indices of the Riemann tensor), "R" is the scalar curvature, "g" is the Riemannian metric, formula_1 is the trace of "P" and "n" is the dimension of the manifold. The Weyl tensor equals the Riemann curvature tensor minus the Kulkarni–Nomizu product of the Schouten tensor with the metric. In an index notation formula_2 The Schouten tensor often appears in conformal geometry because of its relatively simple conformal transformation law formula_3 where formula_4
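As a quick sanity check of the defining relation, the sketch below evaluates P, J and Ric for the round n-sphere in an orthonormal frame (so g is the identity matrix). The curvature values of the round sphere used here are standard facts assumed for the example rather than data taken from the text above.

```python
import numpy as np

n, r = 4, 2.0
g = np.eye(n)
Ric = (n - 1) / r**2 * g              # Ricci tensor of the round n-sphere of radius r
R = n * (n - 1) / r**2                # scalar curvature

P = (Ric - R / (2 * (n - 1)) * g) / (n - 2)   # Schouten tensor
J = np.trace(P @ np.linalg.inv(g))            # trace of P

print(np.allclose(P, g / (2 * r**2)))          # True: P = g / (2 r^2) for the sphere
print(np.allclose(Ric, (n - 2) * P + J * g))   # True: the defining relation holds
print(np.isclose(J, R / (2 * (n - 1))))        # True: J = R / (2(n-1))
```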
[ { "math_id": 0, "text": "P=\\frac{1}{n - 2} \\left(\\mathrm{Ric} -\\frac{ R}{2 (n-1)} g\\right)\\, \\Leftrightarrow \\mathrm{Ric}=(n-2) P + J g \\, ," }, { "math_id": 1, "text": "J=\\frac{1}{2(n-1)}R" }, { "math_id": 2, "text": "R_{ijkl}=W_{ijkl}+g_{ik} P_{jl}-g_{jk} P_{il}-g_{il} P_{jk}+g_{jl} P_{ik}\\, . " }, { "math_id": 3, "text": "g_{ij}\\mapsto \\Omega^2 g_{ij} \\Rightarrow P_{ij}\\mapsto P_{ij}-\\nabla_i \\Upsilon_j + \\Upsilon_i \\Upsilon_j -\\frac12 \\Upsilon_k \\Upsilon^k g_{ij}\\, , " }, { "math_id": 4, "text": " \\Upsilon_i := \\Omega^{-1} \\partial_i \\Omega\\, . " } ]
https://en.wikipedia.org/wiki?curid=6129873
6129957
Weyl–Schouten theorem
On when a Riemannian manifold of dimension ≥ 3 is conformally flat In the mathematical field of differential geometry, the existence of isothermal coordinates for a (pseudo-)Riemannian metric is often of interest. In the case of a metric on a two-dimensional space, the existence of isothermal coordinates is unconditional. For higher-dimensional spaces, the Weyl–Schouten theorem (named after Hermann Weyl and Jan Arnoldus Schouten) characterizes the existence of isothermal coordinates by certain equations to be satisfied by the Riemann curvature tensor of the metric. Existence of isothermal coordinates is also called conformal flatness, although some authors refer to it instead as "local conformal flatness"; for those authors, conformal flatness refers to a more restrictive condition. Theorem. In terms of the Riemann curvature tensor, the Ricci tensor, and the scalar curvature, the Weyl tensor of a pseudo-Riemannian metric g of dimension n is given by formula_0 The Schouten tensor is defined via the Ricci and scalar curvatures by formula_1 As can be calculated by the Bianchi identities, these satisfy the relation that formula_2 The Weyl–Schouten theorem says that a pseudo-Riemannian manifold of dimension "n" ≥ 4 is conformally flat if and only if its Weyl tensor is zero, and that a manifold of dimension "n" = 3 is conformally flat if and only if its Schouten tensor is a Codazzi tensor. As known prior to the work of Weyl and Schouten, in the case "n" = 2, every manifold is conformally flat. In all cases, the theorem and its proof are entirely local, so the topology of the manifold is irrelevant. There are varying conventions for the meaning of conformal flatness; the meaning as taken here is sometimes instead called "local conformal flatness". Sketch of proof. The "only if" direction is a direct computation based on how the Weyl and Schouten tensors are modified by a conformal change of metric. The "if" direction requires more work. Consider the following equation for a 1-form "ω": formula_3 Let "F""ω","g" denote the tensor on the right-hand side. The Frobenius theorem states that the above equation is locally solvable if and only if formula_4 is symmetric in i and k for any 1-form "ω". A direct cancellation of terms shows that this is the case if and only if formula_5 for any 1-form "ω". If "n" = 3 then the left-hand side is zero since the Weyl tensor of any three-dimensional metric is zero; the right-hand side is zero whenever the Schouten tensor is a Codazzi tensor. If "n" ≥ 4 then the left-hand side is zero whenever the Weyl tensor is zero; the right-hand side is also then zero due to the identity given above which relates the Weyl tensor to the Schouten tensor. As such, under the given curvature and dimension conditions, there always exists a locally defined 1-form "ω" solving the given equation. From the symmetry of the right-hand side, it follows that "ω" must be a closed form. The Poincaré lemma then implies that there is a real-valued function u with "ω" = "du". Due to the formula for the Ricci curvature under a conformal change of metric, the (locally defined) pseudo-Riemannian metric e^"u""g" is Ricci-flat. If "n" = 3 then every Ricci-flat metric is flat, and if "n" ≥ 4 then every Ricci-flat and Weyl-flat metric is flat.
[ { "math_id": 0, "text": "W_{ijkl}=R_{ijkl}-\\frac{R_{ik}g_{jl}-R_{il}g_{jk}+R_{jl}g_{ik}-R_{jk}g_{il}}{n-2}+\\frac{R}{(n-1)(n-2)}(g_{jl}g_{ik}-g_{jk}g_{ik})." }, { "math_id": 1, "text": "S_{ij}=\\frac{2}{n-2}R_{ij}-\\frac{Rg_{ij}}{(n-2)(n-1)}." }, { "math_id": 2, "text": "\\nabla^jW_{ijkl}=\\frac{n-3}{2}(\\nabla_kS_{il}-\\nabla_lS_{ik})." }, { "math_id": 3, "text": "\\nabla_i\\omega_j=\\frac{1}{2}\\omega_i\\omega_j-\\frac{1}{4}g^{pq}\\omega_p\\omega_qg_{ij}-S_{ij}" }, { "math_id": 4, "text": "\\partial_k\\Gamma_{ij}^p\\omega_p+\\Gamma_{ij}^pF_{kp}^{\\omega,g}+\\frac{1}{2}F_{ki}^{\\omega,g}\\omega_j+\\frac{1}{2}\\omega_iF_{kj}^{\\omega,g}-\\frac{1}{4}\\partial_kg^{pq}\\omega_p\\omega_qg_{ij}-\\frac{1}{2}g^{pq}\\omega_pF_{kq}^{\\omega,g}g_{ij}-\\frac{1}{4}g^{pq}\\omega_p\\omega_q\\partial_kg_{ij}-\\partial_kS_{ij}" }, { "math_id": 5, "text": "{W_{kij}}^p\\omega_p=\\nabla_kS_{ij}-\\nabla_iS_{jk}" } ]
https://en.wikipedia.org/wiki?curid=6129957
61304144
Coherent Raman scattering microscopy
Multi-photon microscopy technique Coherent Raman scattering (CRS) microscopy is a multi-photon microscopy technique based on Raman-active vibrational modes of molecules. The two major techniques in CRS microscopy are stimulated Raman scattering (SRS) and coherent anti-Stokes Raman scattering (CARS). SRS and CARS were theoretically predicted and experimentally realized in the 1960s. In 1982 the first CARS microscope was demonstrated. In 1999, CARS microscopy using a collinear geometry and a high numerical aperture objective was developed in Xiaoliang Sunney Xie's lab at Harvard University. This advancement made the technique more compatible with modern laser scanning microscopes. Since then, CRS's popularity in biomedical research has grown. CRS is mainly used to image lipids, proteins, and other bio-molecules in live or fixed cells or tissues without labeling or staining. CRS can also be used to image samples labeled with Raman tags, which can avoid interference from other molecules and normally allows for stronger CRS signals than would normally be obtained for common biomolecules. CRS also finds application in other fields, such as materials science and environmental science. Background. Coherent Raman scattering is based on Raman scattering (or spontaneous Raman scattering). In spontaneous Raman, only one monochromatic excitation laser is used. Spontaneous Raman scattering's signal intensity grows linearly with the average power of a continuous-wave pump laser. In CRS, two lasers are used to excite specific vibrational modes of the molecules to be imaged. The laser with a higher photon energy is normally called the pump laser and the laser with a lower photon energy is called the Stokes laser. In order to produce a signal, their photon energy difference must match the energy of a vibrational mode: formula_0, where formula_1. CRS is a nonlinear optical process, where the signal level is normally a function of the product of the powers of the pump and Stokes lasers. Therefore, most CRS microscopy experiments are performed with pulsed lasers, whose higher peak power improves the signal levels of CRS significantly. Coherent anti-Stokes Raman scattering (CARS) microscopy. In CARS, anti-Stokes photons (higher in energy, shorter wavelength than the pump) are detected as signals. formula_2 In CARS microscopy, there are normally two ways to detect the newly generated photons. One is called forward-detected CARS, the other epi-detected CARS. In forward-detected CARS, the generated CARS photons together with the pump and Stokes lasers go through the sample. The pump and Stokes lasers are completely blocked by a high optical density (OD) notch filter. The CARS photons are then detected by a photomultiplier tube (PMT) or a CCD camera. In epi-detected CARS, back-scattered CARS photons are redirected by a dichroic mirror or polarizing beam splitter. After high OD filters are used to block back-scattered pump and Stokes lasers, the newly generated photons are detected by a PMT. 
The signal intensity of CARS has the following relationship with the pump and Stokes laser intensities formula_3, the number of molecules formula_4 in the focus of the lasers and the third order Raman susceptibility formula_5 of the molecule: formula_6 The signal-to-noise ratio (SNR), which is a more important characteristic in imaging experiments, depends on the square root of the number of CARS photons generated, which is given below: formula_7 There are other non-linear optical processes that also generate photons at the anti-Stokes wavelength. Those signals are normally called the non-resonant (NR) four-wave-mixing (FWM) background in CARS microscopy. This background can interfere with the CARS signal either constructively or destructively. However, the problem can be partially circumvented by subtracting the on- and off-resonance images or by using mathematical methods to retrieve background-free images. Stimulated Raman scattering (SRS) microscopy. In SRS, the intensity of the energy transfer from the pump wavelength to the Stokes laser wavelength is measured as a signal. There are two ways to measure SRS signals: one is to measure the increase of power in the Stokes laser, which is called stimulated Raman gain (SRG). The other is to measure the decrease of power in the pump laser, which is called stimulated Raman loss (SRL). Since the change of power is on the order of 10⁻³ to 10⁻⁶ compared with the original power of the pump and Stokes lasers, a modulation transfer scheme is normally employed to extract the SRS signals. The SRS signal depends on the pump and Stokes laser powers in the following way: formula_8 Shot noise limited detection can be achieved if electronic noise from the detectors is reduced well below the optical noise and the lasers are shot noise limited at the detection frequency (modulation frequency). In the shot noise limited case, the signal-to-noise ratio (SNR) of SRS is formula_9 formula_10 The signal of SRS is free from the non-resonant background which plagues CARS microscopy, although a much smaller non-resonant background from other optical processes (e.g. cross-phase modulation, multi-color multi-photon absorption) may exist. SRS can be detected in either the forward or the epi direction. In forward-detected SRS, the modulated laser is blocked by a high OD notch filter and the other laser is measured by a photodiode. Modulation transferred from the modulated laser to the originally unmodulated laser is normally extracted by a lock-in amplifier from the output of the photodiode. In epi-detected SRS, there are normally two methods to detect the SRS signal. One method is to detect the back-scattered light in front of the objective by a photodiode with a hole at the center. The other method is similar to epi-detected CARS microscopy, where the back-scattered light goes through the objective and is deflected to the side of the light path, normally with the combination of a polarizing beam splitter and a quarter wave-plate. The Stokes (or pump) laser is then detected after filtering out the pump (or Stokes) laser. Two-color, multi-color, and hyper-spectral CRS microscopy. One pair of laser wavelengths only gives access to a single vibrational frequency. Imaging samples at different wavenumbers can provide a more specific and quantitative chemical mapping of the sample. This can be achieved by imaging at different wavenumbers one after another. 
This operation always involves some type of tuning: tuning of one of the lasers' wavelengths, tuning of a spectral filtering device, or tuning of the time delay between the pump and Stokes lasers in the case of spectral-focusing CRS. Another way of performing multi-color CRS is to use one picosecond laser with a narrow spectral bandwidth (<1 nm) as the pump or Stokes and the other laser with a broad spectral bandwidth. In this case, the spectrum of the transmitted broadband laser can be spread by a grating and measured by an array of detectors. Spectral-focusing CRS. CRS normally uses lasers with a narrow spectral bandwidth (< 1 nm) to maintain a good spectral resolution of ~15 cm⁻¹. Lasers with sub-1 nm bandwidth are picosecond lasers. In spectral-focusing CRS, femtosecond pump and Stokes lasers are equally linearly chirped into picosecond lasers. The effective bandwidth becomes smaller and therefore high spectral resolution can be achieved this way with femtosecond lasers, which normally have a broad bandwidth. The wavenumber tuning of spectral-focusing CRS can be achieved both by changing the center wavelength of the lasers and by changing the delay between the pump and Stokes lasers. Applications. Coherent Raman histology. One of the major applications for CRS is label-free histology, which is also called coherent Raman histology, or sometimes stimulated Raman histology. In CRH, CRS images are obtained at lipid and protein channels and, after some image processing, an image similar to H&E staining can be obtained. Different from H&E staining, CRH can be done on live and fresh tissue and does not need fixation or staining. Cell metabolism. The metabolism of small molecules like glucose, cholesterol, and drugs is studied with CRS in live cells. CRS provides a way to measure molecular distribution and quantities with relatively high throughput. Myelin imaging. Myelin is rich in lipid. CRS is routinely used to image myelin in live or fixed tissues to study neurodegenerative diseases or other neural disorders. Pharmaceutical research. The functions of drugs can be studied by CRS too. For example, the anti-leukemia drug imatinib has been studied with SRS in leukemia cell lines. The study revealed the possible mechanism of its metabolism in cells and provided insight into ways to improve drug effectiveness. Raman tags. Even though CRS allows label-free imaging, Raman tags can also be used to boost the signal for specific targets. For example, deuterated molecules are used to shift the Raman signal to a band where the interference from other molecules is absent. Specially engineered molecules containing isotopes can be used as Raman tags to achieve super-multiplexing multi-color imaging with SRS. Comparison to confocal Raman microscopy. Confocal Raman microscopy normally uses continuous wave lasers to provide a spontaneous Raman spectrum over a broad wavenumber range for each point in an image. It takes a long time to scan the whole sample, since each pixel requires seconds for data acquisition. The whole imaging process is long and therefore it is more suitable for samples that do not move. CRS, on the other hand, measures signals at a single wavenumber but allows for fast scanning. If more spectral information is needed, multi-color or hyperspectral CRS can be used, and the scanning speed or data quality will be compromised accordingly. Comparison between SRS and CARS. In CRS microscopy, we can regard SRS and CARS as two aspects of the same process. 
The CARS signal is always mixed with the non-resonant four-wave mixing background and has a quadratic dependence on the concentration of the chemicals being imaged. SRS has a much smaller background and depends linearly on the concentration of the chemical being imaged. Therefore, SRS is more suitable for quantitative imaging than CARS. On the instrument side, SRS requires modulation and demodulation (e.g. a lock-in amplifier or resonant detector). For multi-channel imaging, SRS requires multichannel demodulation while CARS only needs a PMT array or a CCD. Therefore, the instrumentation required is more complicated for SRS than for CARS. On the sensitivity side, SRS and CARS normally provide similar sensitivities. Their differences are mainly due to detection methods. In CARS microscopy, PMTs, APDs or CCDs are used as detectors to detect photons generated in the CARS process. PMTs are most commonly used due to their large detection area and high speed. In SRS microscopy, photodiodes are normally used to measure laser beam intensities. Because of such differences, the applications of CARS and SRS are also different. PMTs normally have relatively low quantum efficiency compared with photodiodes. This negatively impacts the SNR of CARS microscopy. PMTs also have reduced sensitivity for lasers with wavelengths longer than 650 nm. Therefore, with the commonly used laser system for CRS (the Ti-sapphire laser), CARS is mainly used to image in the high-wavenumber region (2800–3400 cm⁻¹). The SNR of CARS microscopy is normally poor for fingerprint imaging (400–1800 cm⁻¹). SRS microscopy mainly uses silicon photodiodes as detectors. Si photodiodes have much higher quantum efficiency than PMTs, which is one of the reasons that the SNR of SRS can be better than that of CARS in many cases. Si photodiodes also suffer reduced sensitivity when the laser wavelength is longer than 850 nm. However, the sensitivity is still relatively high and allows for imaging in the fingerprint region (400–1800 cm⁻¹).
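The stated concentration dependence can be made concrete with a small numerical illustration; the prefactors and laser intensities below are arbitrary normalisation constants, and only the scaling with the number of molecules N is the point.

```python
import numpy as np

N = np.array([1, 2, 5, 10, 100])          # relative molecular concentration
I_pump, I_stokes, chi3 = 1.0, 1.0, 1.0    # arbitrary units

# CARS scales as (N * chi3)^2 * I_pump^2 * I_stokes; SRS scales as N * chi3 * I_pump * I_stokes.
I_cars = (N * chi3) ** 2 * I_pump**2 * I_stokes
I_srs = N * chi3 * I_pump * I_stokes

for n_mol, cars, srs in zip(N, I_cars, I_srs):
    print(f"N={n_mol:>3}  CARS={cars:>8.0f}  SRS={srs:>5.0f}")
# Halving the concentration cuts the SRS signal by 2x but the CARS signal by 4x,
# which is why SRS is preferred for quantitative concentration mapping.
```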
[ { "math_id": 0, "text": "E_{Raman}=E_{pump}-E_{Stokes}" }, { "math_id": 1, "text": "E_{Raman}=\\hbar \\Omega,\\ E_{pump}= \\hbar \\omega_{pump},\\ E_{Stokes}=\\hbar \\omega_{Stokes}" }, { "math_id": 2, "text": "E_{CARS}=E_{pump}+\\Omega=2\\times E_{pump}-E_{Stokes}" }, { "math_id": 3, "text": "I_{pump}, I_{Stokes}" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "\\chi ^{(3)} _{Raman}" }, { "math_id": 6, "text": "I_{CARS}\\propto (N \\chi ^{(3)} _{Raman})^2 I^2_{pump} I_{Stokes}" }, { "math_id": 7, "text": "\nSNR_{CARS}\\propto N \\chi ^{(3)} _{Raman} I_{pump} \\sqrt{I_{Stokes}}" }, { "math_id": 8, "text": "I_{SRS}\\propto N\\chi^{(3)}_{Raman} I_{pump}I_{Stokes}" }, { "math_id": 9, "text": "SNR_{SRL}\\propto N\\chi^{(3)}_{Raman} \\sqrt{I_{pump}}I_{Stokes}" }, { "math_id": 10, "text": "SNR_{SRG}\\propto N\\chi^{(3)}_{Raman} I_{pump}\\sqrt{I_{Stokes}}" } ]
https://en.wikipedia.org/wiki?curid=61304144
61310341
Totient summatory function
Arithmetic function In number theory, the totient summatory function formula_0 is a summatory function of Euler's totient function defined by: formula_1 It is the number of coprime integer pairs {p, q}, 1 ≤ p ≤ q ≤ n. The first few values are 0, 1, 2, 4, 6, 10, 12, 18, 22, 28, 32 (sequence in the OEIS). Values for powers of 10 at (sequence in the OEIS). Properties. Applying Möbius inversion to the totient function, we obtain formula_2 Φ("n") has the asymptotic expansion formula_3 where ζ(2) is the Riemann zeta function evaluated at 2. The summatory function of the reciprocal totient. The summatory function of the reciprocal of the totient is defined as formula_4 Edmund Landau showed in 1900 that this function has the asymptotic behavior formula_5 where γ is the Euler–Mascheroni constant, formula_6 and formula_7 The constant "A" = 1.943596... is sometimes known as Landau's totient constant. The sum formula_8 is convergent and equal to: formula_9 In this case, the product over the primes on the right side is a constant known as the totient summatory constant, and its value is: formula_10
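A short sieve-based sketch can reproduce the values above and illustrate the leading asymptotic term; the cutoffs used below are arbitrary.

```python
from math import pi

def totient_summatory(n):
    # Sieve Euler's totient phi(k) for k <= n, then accumulate the partial sums Phi(k).
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                      # p is prime, still untouched by the sieve
            for multiple in range(p, n + 1, p):
                phi[multiple] -= phi[multiple] // p
    partial_sums, total = [], 0
    for k in range(1, n + 1):
        total += phi[k]
        partial_sums.append(total)
    return partial_sums

print(totient_summatory(10))                 # [1, 2, 4, 6, 10, 12, 18, 22, 28, 32]

n = 10**5
Phi_n = totient_summatory(n)[-1]
print(Phi_n, 3 * n**2 / pi**2)               # Phi(n) vs. the leading term n^2/(2*zeta(2))
# The two agree up to a small relative error, consistent with the O(n log n) correction.
```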
[ { "math_id": 0, "text": "\\Phi(n)" }, { "math_id": 1, "text": "\\Phi(n) := \\sum_{k=1}^n \\varphi(k), \\quad n\\in \\mathbf{N}" }, { "math_id": 2, "text": "\\Phi(n) = \\sum_{k=1}^n k\\sum _{d\\mid k} \\frac {\\mu (d)}{d} = \\frac{1}{2} \\sum _{k=1}^n \\mu(k) \\left\\lfloor \\frac {n}{k} \\right\\rfloor \\left(1 + \\left\\lfloor \\frac {n}{k} \\right\\rfloor \\right)" }, { "math_id": 3, "text": "\\Phi(n) \\sim \\frac{1}{2\\zeta(2)}n^{2}+O\\left( n\\log n \\right )," }, { "math_id": 4, "text": "S(n) := \\sum _{k=1}^{n}{\\frac {1}{\\varphi (k)}}" }, { "math_id": 5, "text": "S(n) \\sim A (\\gamma+\\log n)+ B +O\\left(\\frac{\\log n} n\\right)" }, { "math_id": 6, "text": "A = \\sum_{k=1}^\\infty \\frac{\\mu (k)^2}{k \\varphi(k)} = \\frac{\\zeta(2)\\zeta(3)}{\\zeta(6)} = \\prod_p \\left(1+\\frac 1 {p(p-1)} \\right)" }, { "math_id": 7, "text": "B = \\sum_{k=1}^{\\infty} \\frac{\\mu (k)^2\\log k}{k \\,\\varphi(k)} = A \\, \\prod _{p}\\left(\\frac {\\log p}{p^2-p+1}\\right)." }, { "math_id": 8, "text": "\\textstyle \\sum _{k=1}^\\infty\\frac 1 {k\\varphi (k)}" }, { "math_id": 9, "text": "\\sum _{k=1}^\\infty \\frac 1 {k\\varphi (k)} = \\zeta(2) \\prod_p \\left(1 + \\frac 1 {p^2(p-1)}\\right) =2.20386\\ldots " }, { "math_id": 10, "text": "\\prod_p \\left(1+\\frac 1 {p^2(p-1)} \\right) = 1.339784\\ldots" } ]
https://en.wikipedia.org/wiki?curid=61310341
61316
Galois theory
Mathematical connection between field theory and group theory In mathematics, Galois theory, originally introduced by Évariste Galois, provides a connection between field theory and group theory. This connection, the fundamental theorem of Galois theory, allows reducing certain problems in field theory to group theory, which makes them simpler and easier to understand. Galois introduced the subject for studying roots of polynomials. This allowed him to characterize the polynomial equations that are solvable by radicals in terms of properties of the permutation group of their roots—an equation is "solvable by radicals" if its roots may be expressed by a formula involving only integers, nth roots, and the four basic arithmetic operations. This widely generalizes the Abel–Ruffini theorem, which asserts that a general polynomial of degree at least five cannot be solved by radicals. Galois theory has been used to solve classic problems including showing that two problems of antiquity cannot be solved as they were stated (doubling the cube and trisecting the angle), and characterizing the regular polygons that are constructible (this characterization was previously given by Gauss but without the proof that the list of constructible polygons was complete; all known proofs that this characterization is complete require Galois theory). Galois' work was published by Joseph Liouville fourteen years after his death. The theory took longer to become popular among mathematicians and to be well understood. Galois theory has been generalized to Galois connections and Grothendieck's Galois theory. Application to classical problems. The birth and development of Galois theory was caused by the following question, which was one of the main open mathematical questions until the beginning of the 19th century: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Does there exist a formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)? The Abel–Ruffini theorem provides a counterexample proving that there are polynomial equations for which such a formula cannot exist. Galois' theory provides a much more complete answer to this question, by explaining why it "is" possible to solve some equations, including all those of degree four or lower, in the above manner, and why it is not possible for most equations of degree five or higher. Furthermore, it provides a means of determining whether a particular equation can be solved that is both conceptually clear and easily expressed as an algorithm. Galois' theory also gives a clear insight into questions concerning problems in compass and straightedge construction. It gives an elegant characterization of the ratios of lengths that can be constructed with this method. Using this, it becomes relatively easy to answer such classical problems of geometry as which regular polygons are constructible, and why it is not possible to trisect every angle using a compass and straightedge. History. Pre-history. Galois' theory originated in the study of symmetric functions – the coefficients of a monic polynomial are (up to sign) the elementary symmetric polynomials in the roots. For instance, ("x" – "a")("x" – "b") = "x"2 – ("a" + "b")"x" + "ab", where 1, "a" + "b" and "ab" are the elementary polynomials of degree 0, 1 and 2 in two variables. This was first formalized by the 16th-century French mathematician François Viète, in Viète's formulas, for the case of positive real roots. 
In the opinion of the 18th-century British mathematician Charles Hutton, the expression of coefficients of a polynomial in terms of the roots (not only for positive roots) was first understood by the 17th-century French mathematician Albert Girard; Hutton writes: ...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation. In this vein, the discriminant is a symmetric function in the roots that reflects properties of the roots – it is zero if and only if the polynomial has a multiple root, and for quadratic and cubic polynomials it is positive if and only if all roots are real and distinct, and negative if and only if there is a pair of distinct complex conjugate roots. See Discriminant:Nature of the roots for details. The cubic was first partly solved by the 15–16th-century Italian mathematician Scipione del Ferro, who did not however publish his results; this method, though, only solved one type of cubic equation. This solution was then rediscovered independently in 1535 by Niccolò Fontana Tartaglia, who shared it with Gerolamo Cardano, asking him to not publish it. Cardano then extended this to numerous other cases, using similar arguments; see more details at Cardano's method. After the discovery of del Ferro's work, he felt that Tartaglia's method was no longer secret, and thus he published his solution in his 1545 "Ars Magna." His student Lodovico Ferrari solved the quartic polynomial; his solution was also included in "Ars Magna." In this book, however, Cardano did not provide a "general formula" for the solution of a cubic equation, as he had neither complex numbers at his disposal, nor the algebraic notation to be able to describe a general cubic equation. With the benefit of modern notation and complex numbers, the formulae in this book do work in the general case, but Cardano did not know this. It was Rafael Bombelli who managed to understand how to work with complex numbers in order to solve all forms of cubic equation. A further step was the 1770 paper "Réflexions sur la résolution algébrique des équations" by the French-Italian mathematician Joseph Louis Lagrange, in his method of Lagrange resolvents, where he analyzed Cardano's and Ferrari's solution of cubics and quartics by considering them in terms of "permutations" of the roots, which yielded an auxiliary polynomial of lower degree, providing a unified understanding of the solutions and laying the groundwork for group theory and Galois' theory. Crucially, however, he did not consider "composition" of permutations. Lagrange's method did not extend to quintic equations or higher, because the resolvent had higher degree. The quintic was almost proven to have no general solutions by radicals by Paolo Ruffini in 1799, whose key insight was to use permutation "groups", not just a single permutation. His solution contained a gap, which Cauchy considered minor, though this was not patched until the work of the Norwegian mathematician Niels Henrik Abel, who published a proof in 1824, thus establishing the Abel–Ruffini theorem. 
While Ruffini and Abel established that the "general" quintic could not be solved, some "particular" quintics can be solved, such as "x"5 − 1 = 0, and the precise criterion by which a "given" quintic or higher polynomial could be determined to be solvable or not was given by Évariste Galois, who showed that whether a polynomial was solvable or not was equivalent to whether or not the permutation group of its roots – in modern terms, its Galois group – had a certain structure – in modern terms, whether or not it was a solvable group. This group was always solvable for polynomials of degree four or less, but not always so for polynomials of degree five and greater, which explains why there is no general solution in higher degrees. Galois' writings. In 1830 Galois (at the age of 18) submitted to the Paris Academy of Sciences a memoir on his theory of solvability by radicals; Galois' paper was ultimately rejected in 1831 as being too sketchy and for giving a condition in terms of the roots of the equation instead of its coefficients. Galois then died in a duel in 1832, and his paper, "Mémoire sur les conditions de résolubilité des équations par radicaux", remained unpublished until 1846 when it was published by Joseph Liouville accompanied by some of his own explanations. Prior to this publication, Liouville announced Galois' result to the Academy in a speech he gave on 4 July 1843. According to Allan Clark, Galois's characterization "dramatically supersedes the work of Abel and Ruffini." Aftermath. Galois' theory was notoriously difficult for his contemporaries to understand, especially to the level where they could expand on it. For example, in his 1846 commentary, Liouville completely missed the group-theoretic core of Galois' method. Joseph Alfred Serret, who attended some of Liouville's talks, included Galois' theory in the 1866 third edition of his textbook "Cours d'algèbre supérieure". Serret's pupil, Camille Jordan, had an even better understanding reflected in his 1870 book "Traité des substitutions et des équations algébriques". Outside France, Galois' theory remained more obscure for a longer period. In Britain, Cayley failed to grasp its depth and popular British algebra textbooks did not even mention Galois' theory until well after the turn of the century. In Germany, Kronecker's writings focused more on Abel's result. Dedekind wrote little about Galois' theory, but lectured on it at Göttingen in 1858, showing a very good understanding. Eugen Netto's books of the 1880s, based on Jordan's "Traité", made Galois theory accessible to a wider German and American audience as did Heinrich Martin Weber's 1895 algebra textbook. Permutation group approach. Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say "A" and "B", that "A"2 + 5"B"3 = 7. The central idea of Galois' theory is to consider permutations (or rearrangements) of the roots such that "any" algebraic equation satisfied by the roots is "still satisfied" after the roots have been permuted. Originally, the theory had been developed for algebraic equations whose coefficients are rational numbers. It extends naturally to equations with coefficients in any field, but this will not be considered in the simple examples below. These permutations together form a permutation group, also called the Galois group of the polynomial, which is explicitly described in the following examples. Quadratic equation. 
Consider the quadratic equation formula_0 By using the quadratic formula, we find that the two roots are formula_1 Examples of algebraic equations satisfied by "A" and "B" include formula_2 and formula_3 If we exchange "A" and "B" in either of the last two equations we obtain another true statement. For example, the equation "A" + "B" = 4 becomes "B" + "A" = 4. It is more generally true that this holds for "every" possible algebraic relation between "A" and "B" such that all coefficients are rational; that is, in any such relation, swapping "A" and "B" yields another true relation. This results from the theory of symmetric polynomials, which, in this case, may be replaced by formula manipulations involving the binomial theorem. One might object that "A" and "B" are related by the algebraic equation "A" − "B" − 2√3 = 0, which does not remain true when "A" and "B" are exchanged. However, this relation is not considered here, because it has the coefficient −2√3, which is not rational. We conclude that the Galois group of the polynomial "x"2 − 4"x" + 1 consists of two permutations: the identity permutation which leaves "A" and "B" untouched, and the transposition permutation which exchanges "A" and "B". As all groups with two elements are isomorphic, this Galois group is isomorphic to the multiplicative group {1, −1}. A similar discussion applies to any quadratic polynomial "ax"2 + "bx" + "c", where "a", "b" and "c" are rational numbers. If the polynomial has rational roots, for example "x"2 − 4"x" + 4 = ("x" − 2)2, or "x"2 − 3"x" + 2 = ("x" − 2)("x" − 1), then the Galois group is trivial; that is, it contains only the identity permutation. In this example, if "A" = 2 and "B" = 1 then "A" − "B" = 1 is no longer true when "A" and "B" are swapped. Quartic equation. Consider the polynomial formula_4 Completing the square in an unusual way, it can also be written as formula_5 By applying the quadratic formula to each factor, one sees that the four roots are formula_6 Among the 24 possible permutations of these four roots, four are particularly simple, those consisting in the sign change of 0, 1, or 2 square roots. They form a group that is isomorphic to the Klein four-group. Galois theory implies that, since the polynomial is irreducible, the Galois group has at least four elements. For proving that the Galois group consists of these four permutations, it thus suffices to show that every element of the Galois group is determined by the image of A, which can be shown as follows. The members of the Galois group must preserve any algebraic equation with rational coefficients involving "A", "B", "C" and "D". Among these equations, we have: formula_7 It follows that, if "φ" is a permutation that belongs to the Galois group, we must have: formula_8 This implies that the permutation is well defined by the image of "A", and that the Galois group has 4 elements, which are: ("A", "B", "C", "D") → ("A", "B", "C", "D") (identity) ("A", "B", "C", "D") → ("B", "A", "D", "C") (change of sign of formula_9) ("A", "B", "C", "D") → ("C", "D", "A", "B") (change of sign of formula_10) ("A", "B", "C", "D") → ("D", "C", "B", "A") (change of sign of both square roots) This implies that the Galois group is isomorphic to the Klein four-group. Modern approach by field theory. In the modern approach, one starts with a field extension "L"/"K" (read ""L" over "K""), and examines the group of automorphisms of "L" that fix "K". See the article on Galois groups for further explanation and examples. The connection between the two approaches is as follows. 
The coefficients of the polynomial in question should be chosen from the base field "K". The top field "L" should be the field obtained by adjoining the roots of the polynomial in question to the base field "K". Any permutation of the roots which respects algebraic equations as described above gives rise to an automorphism of "L"/"K", and vice versa. In the first example above, we were studying the extension Q(√3)/Q, where Q is the field of rational numbers, and Q(√3) is the field obtained from Q by adjoining √3. In the second example, we were studying the extension Q("A","B","C","D")/Q. There are several advantages to the modern approach over the permutation group approach. Solvable groups and solution by radicals. The notion of a solvable group in group theory allows one to determine whether a polynomial is solvable in radicals, depending on whether its Galois group has the property of solvability. In essence, each field extension "L"/"K" corresponds to a factor group in a composition series of the Galois group. If a factor group in the composition series is cyclic of order "n", and if in the corresponding field extension "L"/"K" the field "K" already contains a primitive "n"th root of unity, then it is a radical extension and the elements of "L" can then be expressed using the "n"th root of some element of "K". If all the factor groups in its composition series are cyclic, the Galois group is called "solvable", and all of the elements of the corresponding field can be found by repeatedly taking roots, products, and sums of elements from the base field (usually Q). One of the great triumphs of Galois Theory was the proof that for every "n" &gt; 4, there exist polynomials of degree "n" which are not solvable by radicals (this was proven independently, using a similar method, by Niels Henrik Abel a few years before, and is the Abel–Ruffini theorem), and a systematic way for testing whether a specific polynomial is solvable by radicals. The Abel–Ruffini theorem results from the fact that for "n" &gt; 4 the symmetric group "S""n" contains a simple, noncyclic, normal subgroup, namely the alternating group "A""n". A non-solvable quintic example. Van der Waerden cites the polynomial "f"("x") = "x"5 − "x" − 1. By the rational root theorem, this has no rational zeroes. Neither does it have linear factors modulo 2 or 3. The Galois group of "f"("x") modulo 2 is cyclic of order 6, because "f"("x") modulo 2 factors into polynomials of degrees 2 and 3, ("x"2 + "x" + 1)("x"3 + "x"2 + 1). "f"("x") modulo 3 has no linear or quadratic factor, and hence is irreducible. Thus its modulo 3 Galois group contains an element of order 5. It is known that a Galois group modulo a prime is isomorphic to a subgroup of the Galois group over the rationals. A permutation group on 5 objects with elements of orders 6 and 5 must be the symmetric group "S"5, which is therefore the Galois group of "f"("x"). This is one of the simplest examples of a non-solvable quintic polynomial. According to Serge Lang, Emil Artin was fond of this example. Inverse Galois problem. The "inverse Galois problem" is to find a field extension with a given Galois group. As long as one does not also specify the ground field, the problem is not very difficult, and all finite groups do occur as Galois groups. For showing this, one may proceed as follows. Choose a field "K" and a finite group "G". Cayley's theorem says that "G" is (up to isomorphism) a subgroup of the symmetric group "S" on the elements of "G". 
Choose indeterminates {"x""α"}, one for each element "α" of "G", and adjoin them to "K" to get the field "F" = "K"({"x""α"}). Contained within "F" is the field "L" of symmetric rational functions in the {"x""α"}. The Galois group of "F"/"L" is "S", by a basic result of Emil Artin. "G" acts on "F" by restriction of action of "S". If the fixed field of this action is "M", then, by the fundamental theorem of Galois theory, the Galois group of "F"/"M" is "G". On the other hand, it is an open problem whether every finite group is the Galois group of a field extension of the field Q of the rational numbers. Igor Shafarevich proved that every solvable finite group is the Galois group of some extension of Q. Various people have solved the inverse Galois problem for selected non-Abelian simple groups. Existence of solutions has been shown for all but possibly one (Mathieu group "M"23) of the 26 sporadic simple groups. There is even a polynomial with integral coefficients whose Galois group is the Monster group. Inseparable extensions. In the form mentioned above, including in particular the fundamental theorem of Galois theory, the theory only considers Galois extensions, which are in particular separable. General field extensions can be split into a separable, followed by a purely inseparable field extension. For a purely inseparable extension "F" / "K", there is a Galois theory where the Galois group is replaced by the vector space of derivations, formula_11, i.e., "K"-linear endomorphisms of "F" satisfying the Leibniz rule. In this correspondence, an intermediate field "E" is assigned formula_12. Conversely, a subspace formula_13 satisfying appropriate further conditions is mapped to formula_14. Under the assumption formula_15, Jacobson showed that this establishes a one-to-one correspondence. The condition imposed by Jacobson has since been removed by giving a correspondence using notions of derived algebraic geometry. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x^2 - 4x + 1 = 0. " }, { "math_id": 1, "text": "\\begin{align}\nA &= 2 + \\sqrt{3},\\\\\nB &= 2 - \\sqrt{3}.\n\\end{align}" }, { "math_id": 2, "text": "A + B = 4, " }, { "math_id": 3, "text": "AB = 1. " }, { "math_id": 4, "text": "x^4 - 10x^2 + 1." }, { "math_id": 5, "text": "(x^2-1)^2-8x^2 = (x^2-1-2x\\sqrt2 )(x^2-1+2x\\sqrt 2)." }, { "math_id": 6, "text": "\\begin{align}\nA &= \\sqrt{2} + \\sqrt{3},\\\\\nB &= \\sqrt{2} - \\sqrt{3},\\\\\nC &= -\\sqrt{2} + \\sqrt{3},\\\\\nD &= -\\sqrt{2} - \\sqrt{3}.\n\\end{align}" }, { "math_id": 7, "text": "\\begin{align}\nAB&=-1 \\\\ AC&=1 \\\\ A+D&=0\n\\end{align}" }, { "math_id": 8, "text": "\\begin{align}\n\\varphi(B)&=\\frac{-1}{\\varphi(A)}, \\\\ \\varphi(C)&=\\frac{1}{\\varphi(A)}, \\\\ \\varphi(D)&=-\\varphi(A).\n\\end{align}" }, { "math_id": 9, "text": "\\sqrt3" }, { "math_id": 10, "text": "\\sqrt2" }, { "math_id": 11, "text": "Der_K(F, F)" }, { "math_id": 12, "text": "Der_E(F, F) \\subset Der_K(F, F)" }, { "math_id": 13, "text": "V \\subset Der_K(F, F)" }, { "math_id": 14, "text": "\\{x \\in F, f(x)=0\\ \\forall f \\in V\\}" }, { "math_id": 15, "text": "F^p \\subset K" } ]
https://en.wikipedia.org/wiki?curid=61316
61323423
Hawkes process
A self-exciting counting process In probability theory and statistics, a Hawkes process, named after Alan G. Hawkes, is a kind of self-exciting point process. It has arrivals at times formula_0 where the infinitesimal probability of an arrival during the time interval formula_1 is formula_2 The function formula_3 is the intensity of an underlying Poisson process. The first arrival occurs at time formula_4 and immediately after that, the intensity becomes formula_5, and at the time formula_6 of the second arrival the intensity jumps to formula_7 and so on. During the time interval formula_8, the process is the sum of formula_9 independent processes with intensities formula_10 The arrivals in the process whose intensity is formula_11 are the "daughters" of the arrival at time formula_12 The integral formula_13 is the average number of daughters of each arrival and is called the "branching ratio". Thus viewing some arrivals as descendants of earlier arrivals, we have a Galton–Watson branching process. The number of such descendants is finite with probability 1 if the branching ratio is 1 or less. If the branching ratio is more than 1, then each arrival has a positive probability of having infinitely many descendants. Applications. Hawkes processes are used for statistical modeling of events in mathematical finance, epidemiology, earthquake seismology, and other fields in which a random event exhibits self-exciting behavior.
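The self-exciting intensity described above lends itself to simulation by Ogata's thinning method. The sketch below is a minimal illustration assuming a constant background intensity μ and an exponential kernel φ(t) = α·exp(−βt); these particular choices, the parameter values and the function name are illustrative assumptions, not taken from the article.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate arrival times of a Hawkes process on [0, horizon] by Ogata's
    thinning method, with constant background intensity mu and exponential
    excitation kernel phi(t) = alpha * exp(-beta * t).
    The branching ratio is alpha / beta (kept below 1 for stability)."""
    rng = random.Random(seed)
    arrivals = []
    t = 0.0
    while t < horizon:
        # Between arrivals the intensity only decays, so its value at the
        # current time is a valid upper bound until the next arrival.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in arrivals)
        t += rng.expovariate(lam_bar)          # propose the next candidate time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in arrivals)
        if rng.random() * lam_bar <= lam_t:    # accept with probability lam_t / lam_bar
            arrivals.append(t)
    return arrivals

times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
print(len(times), times[:5])
```

With a branching ratio α/β below 1, the simulated clusters of "daughter" arrivals stay finite, matching the Galton–Watson interpretation given above.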
[ { "math_id": 0, "text": " 0 < t_1 < t_2 < t_3 < \\cdots " }, { "math_id": 1, "text": " [t,t+dt) " }, { "math_id": 2, "text": " \\lambda_t \\, dt = \\left( \\mu(t) + \\sum_{t_i\\,:\\, t_i\\,<\\,t} \\phi(t-t_i) \\right) \\, dt. " }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": " t_1" }, { "math_id": 5, "text": " \\mu(t) + \\phi(t-t_1) " }, { "math_id": 6, "text": " t_2" }, { "math_id": 7, "text": " \\mu(t) + \\phi(t-t_1) + \\phi(t-t_2) " }, { "math_id": 8, "text": " (t_k, t_{k+1}) " }, { "math_id": 9, "text": " k+1" }, { "math_id": 10, "text": " \\mu(t), \\phi(t-t_1), \\ldots, \\phi(t-t_k). " }, { "math_id": 11, "text": " \\phi(t-t_k) " }, { "math_id": 12, "text": " t_k." }, { "math_id": 13, "text": " \\int_0^\\infty \\phi(t)\\,dt" } ]
https://en.wikipedia.org/wiki?curid=61323423
61325885
PURB (cryptography)
In cryptography, a padded uniform random blob or PURB is a discipline for encrypted data formats designed to minimize unintended information leakage either from its encryption format metadata or from its total length. Properties of PURBs. When properly created, a PURB's content is indistinguishable from a uniform random bit string to any observer without a relevant decryption key. A PURB therefore leaks "no" information through headers or other cleartext metadata associated with the encrypted data format. This leakage minimization "hygiene" practice contrasts with traditional encrypted data formats such as Pretty Good Privacy, which include cleartext metadata encoding information such as the application that created the data, the data format version, the number of recipients the data is encrypted for, the identities or public keys of the recipients, and the ciphers or suites that were used to encrypt the data. While such encryption metadata was considered non-sensitive when these encrypted formats were designed, modern attack techniques have found numerous ways to employ such incidentally-leaked metadata in facilitating attacks, such as by identifying data encrypted with weak ciphers or obsolete algorithms, fingerprinting applications to track users or identify software versions with known vulnerabilities, or traffic analysis techniques such as identifying all users, groups, and associated public keys involved in a conversation from an encrypted message observed between only two of them. In addition, a PURB is padded to a constrained set of possible lengths, in order to minimize the amount of information the encrypted data could potentially leak to observers via its total length. Without padding, encrypted objects such as files or bit strings up to formula_0 bits in length can leak up to formula_1 bits of information to an observer - namely the number of bits required to represent the length exactly. A PURB is padded to a length representable in a floating point number whose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This constraint limits the maximum amount of information a PURB's total length can leak to formula_2 bits, a significant asymptotic reduction and the best achievable in general for variable-length encrypted formats whose multiplicative overhead is limited to a constant factor of the unpadded payload size. This asymptotic leakage is the same as one would obtain by padding encrypted objects to a power of some base, such as to a power of two. Allowing some significant mantissa bits in the length's representation rather than just an exponent, however, significantly reduces the overhead of padding. For example, padding to the next power of two can impose up to 100% overhead by nearly doubling the object's size, while a PURB's padding imposes overhead of at most 12% for small strings and decreasing gradually (to 6%, 3%, etc.) as objects get larger. Experimental evidence indicate that on data sets comprising objects such as files, software packages, and online videos, leaving objects unpadded or padding to a constant block size often leaves them uniquely identifiable by total length alone. Padding objects to a power of two or to a PURB length, in contrast, ensures that most objects are indistinguishable from at least some other objects and thus have a nontrivial "anonymity set". Encoding and decoding PURBs. 
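One ingredient of encoding is the padding itself: the blob's total length must land on one of the permitted sizes described above. The sketch below illustrates the Padmé padding rule proposed in the PURB paper (referred to as "Padme" later in this article); the function name and the sample lengths are illustrative assumptions.

```python
def padme_length(length: int) -> int:
    """Padme padded size: keep only about floor(log2(E)) + 1 significant bits
    of the total length, where E = floor(log2(length)), so the length leaks
    only O(log log M) bits of information."""
    if length <= 1:
        return length
    e = length.bit_length() - 1   # floor(log2(length))
    s = e.bit_length()            # floor(log2(e)) + 1 significant bits kept
    last_bits = e - s             # low-order bits forced to zero
    mask = (1 << last_bits) - 1
    return (length + mask) & ~mask

# Overhead stays within roughly 12% for small objects and shrinks as they grow.
for n in (100, 1_000, 100_000, 10_000_000):
    padded = padme_length(n)
    print(n, padded, f"{(padded - n) / n:.2%}")
```

The rounding mask grows with the object, which is why the relative overhead falls for larger payloads while the set of possible output lengths stays small.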
Because a PURB is a discipline for designing encrypted formats and not a particular encrypted format, there is no single prescribed method for encoding or decoding PURBs. Applications may use any encryption and encoding scheme provided it produces a bit string that appears uniformly random to an observer without an appropriate key, provided the appropriate hardness assumptions are satisfied of course, and provided the PURB is padded to one of the allowed lengths. Correctly-encoded PURBs therefore "do not identify the application that created them" in their ciphertext. A decoding application, therefore, cannot readily tell before decryption whether a PURB was encrypted for that application or its user, other than by trying to decrypt it with any available decryption keys. Encoding and decoding a PURB presents technical efficiency challenges, in that traditional parsing techniques are not applicable because a PURB by definition has no metadata "markers" that a traditional parser could use to discern the PURB's structure before decrypting it. Instead, a PURB must be "decrypted first" obliviously to its internal structure, and then parsed only after the decoder has used an appropriate decryption key to find a suitable cryptographic "entrypoint" into the PURB. Encoding and decoding PURBs intended to be decrypted by several different recipients, public keys, and/or ciphers presents the additional technical challenge that each recipient must find a different entrypoint at a distinct location in the PURB non-overlapping with those of the other recipients, but the PURB presents no cleartext metadata indicating the positions of those entrypoints or even the total number of them. The paper that proposed PURBs also included algorithms for encrypting objects to multiple recipients using multiple cipher suites. With these algorithms, recipients can find their respective entrypoints into the PURB with only a logarithmic number of "trial decryptions" using symmetric-key cryptography and only one expensive public-key operation per cipher suite. A third technical challenge is representing the public-key cryptographic material that needs to be encoded into each entrypoint in a PURB, such as the ephemeral Diffie-Hellman public key a recipient needs to derive the shared secret, in an encoding indistinguishable from uniformly random bits. Because the standard encodings of elliptic-curve points are readily distinguishable from random bits, for example, special "indistinguishable" encoding algorithms must be used for this purpose, such as Elligator and its successors. Tradeoffs and limitations. The primary privacy advantage that PURBs offer is a strong assurance that correctly-encrypted data leaks nothing incidental via internal metadata that observers might readily use to identify weaknesses in the data or software used to produce it, or to fingerprint the application or user that created the PURB. This privacy advantage can translate into a security benefit for data encrypted with weak or obsolete ciphers, or by software with known vulnerabilities that an attacker might exploit based on trivially-observable information gleaned from cleartext metadata. A primary disadvantage of the PURB encryption discipline is the complexity of encoding and decoding, because the decoder cannot rely on conventional parsing techniques before decryption. 
A secondary disadvantage is the overhead that padding adds, although the padding scheme proposed for PURBs incurs at most only a few percent overhead for objects of significant size. The Padme padding proposed in the PURB paper only creates files of specific, very distinct sizes. Thus, an encrypted file may often be identified as PURB encrypted with high confidence, as the probability of any other file having exactly one of those padded sizes is very low. Another padding problem occurs with very short messages, where the padding does not effectively hide the size of the content. One critique of incurring the complexity and overhead costs of PURB encryption is that the "context" in which a PURB is stored or transmitted may often leak metadata about the encrypted content anyway, and such metadata is outside of the encryption format's purview or control and thus cannot be addressed by the encryption format alone. For example, an application's or user's choice of filename and directory in which to store a PURB on disk may allow an observer to infer the application that likely created it and to what purpose, even if the PURB's data content itself does not. Similarly, encrypting an E-mail's body as a PURB instead of with traditional PGP or S/MIME format may eliminate the encryption format's metadata leakage, but cannot prevent information leakage from the cleartext E-mail headers, or from the endpoint hosts and E-mail servers involved in the exchange. Nevertheless, separate but complementary disciplines are typically available to limit such contextual metadata leakage, such as appropriate file naming conventions or use of pseudonymous E-mail addresses for sensitive communications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "O(\\log M)" }, { "math_id": 2, "text": "O(\\log \\log M)" } ]
https://en.wikipedia.org/wiki?curid=61325885
61328143
Performance and modelling of AC transmission
Performance modelling is the abstraction of a real system into a simplified representation to enable the prediction of performance. The creation of a model can provide insight into how a proposed or actual system will or does work. This can, however, point towards different things to people belonging to different fields of work. Performance modelling has many benefits. A model will often be created specifically so that it can be interpreted by a software tool that simulates the system's behaviour, based on the information contained in the performance model. Such tools provide further insight into the system's behaviour and can be used to identify bottlenecks or hot spots where the design is inadequate. Solutions to the problems identified might involve the provision of more physical resources or a change in the structure of the design. Performance modelling is found helpful in several such cases. Modelling of a transmission line, in particular, is done to analyse its performance and characteristics. The information gathered by simulating the model can be used to reduce losses or to compensate for these losses. Moreover, it gives more insight into the working of transmission lines and helps to find a way to improve the overall transmission efficiency with minimum cost. Overview. Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation and is different from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The interconnected lines which facilitate this movement are known as a transmission network. A transmission line is a set of electrical conductors carrying an electrical signal from one place to another. Coaxial cable and twisted pair cable are examples. The transmission line is capable of transmitting electrical power from one place to another. In many electric circuits, the length of the wires connecting the components can, for the most part, be ignored. That is, the voltage on the wire at a given time can be assumed to be the same at all points. However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. Transmission lines have so far been categorized and defined in many ways, and a few approaches to modelling them have been developed by different methods. Most of these are mathematical models based on assumed equivalent circuits. Transmission can be of two types: HVDC transmission. High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is to be transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead of alternating current. For a very long transmission line, these lower losses (and reduced construction cost of a DC line) can offset the additional cost of the required converter stations at each end. In a DC transmission line, the mercury arc rectifier converts the alternating current into DC. The DC transmission line transmits the bulk power over a long distance. At the consumer end, the thyratron converts the DC back into AC.
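Whether the link is AC or DC, the incentive for transmitting at high voltage is that, for a given delivered power, the line current and hence the I²R loss fall with the square of the voltage. A minimal numerical sketch of this effect follows; the line resistance, transmitted power and voltage levels are illustrative assumptions, not values from the article.

```python
import math

def three_phase_line_loss(p_mw, v_kv, r_ohm, power_factor=1.0):
    """Resistive loss of a three-phase line delivering p_mw megawatts at
    line-to-line voltage v_kv, with r_ohm resistance per conductor."""
    current = (p_mw * 1e6) / (math.sqrt(3) * v_kv * 1e3 * power_factor)  # amperes per line
    return 3 * current ** 2 * r_ohm / 1e6                                # loss in megawatts

for v in (132, 275, 400):
    loss = three_phase_line_loss(p_mw=100, v_kv=v, r_ohm=10)
    print(f"{v} kV: loss = {loss:.2f} MW for 100 MW delivered")
```

Raising the voltage from 132 kV to 400 kV in this sketch cuts the resistive loss by roughly a factor of nine, which is the motivation behind the high-voltage AC transmission described next.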
HVAC transmission. The AC transmission line is used for transmitting bulk power from the generation end to the consumer end. The power is generated in the generating station, and the transmission line transmits the power from the generation end to the consumer end. High-voltage power transmission allows for lesser resistive losses over long distances in the wiring. This efficiency of high voltage transmission allows for the transmission of a larger proportion of the generated power to the substations and in turn to the loads, translating to operational cost savings. The power is transmitted from one end to another with the help of step-up and step-down transformers. Most transmission lines are high-voltage three-phase alternating current (AC), although single phase AC is sometimes used in railway electrification systems. Electricity is transmitted at high voltages (115 kV or above) to reduce the energy loss which occurs in long-distance transmission. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but reduced maintenance costs. Underground transmission is sometimes used in urban areas or environmentally sensitive locations. Terminologies. Lossless line. The lossless line approximation is the least accurate model; it is often used on short lines when the inductance of the line is much greater than its resistance. For this approximation, the voltage and current are identical at the sending and receiving ends. The characteristic impedance is purely real, which means it is purely resistive, and it is often called the surge impedance for a lossless line. When a lossless line is terminated by its surge impedance, there is no voltage drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the length of the line. For load &gt; SIL, the voltage will drop from the sending end and the line will “consume” VARs. For load &lt; SIL, the voltage will increase from the sending end, and the line will generate VARs. Power factor. In electrical engineering, the power factor of an AC electrical power system is defined as the ratio of the "real power" absorbed by the load to the apparent power flowing in the circuit, and is a dimensionless number in the closed interval of −1 to 1. A power factor of less than one indicates the voltage and current are not in phase, reducing the instantaneous product of the two. A negative power factor occurs when the device (which is normally the load) generates power, which then flows back towards the source. In an electric power system, a load with a low power factor draws more current than a load with a high power factor for the same amount of useful power transferred. The higher currents increase energy loss in the distribution system and require larger wires and other equipment. Because of the costs of larger equipment and wasted energy, electrical utilities will usually charge a higher cost to industrial or commercial customers where there is a low power factor. Surge impedance. The characteristic impedance or surge impedance (usually written Z0) of a uniform transmission line is the ratio of the amplitudes of voltage and current of a single wave propagating along the line; that is, a wave travelling in one direction in the absence of reflections in the other direction. Alternatively and equivalently it can be defined as the input impedance of a transmission line when its length is infinite. 
Characteristic impedance is determined by the geometry and materials of the transmission line and, for a uniform line, is not dependent on its length. The SI unit of characteristic impedance is the ohm (Ω). Surge impedance determines the loading capability of the line and the reflection coefficient of the current or voltage propagating waves: formula_0 where Z0 is the characteristic impedance of the line, L is the inductance per unit length of the line and C is the capacitance per unit length of the line. Line parameters. The transmission line has mainly four parameters: resistance, inductance, capacitance and shunt conductance. These parameters are uniformly distributed along the line. Hence, they are also called the distributed parameters of the transmission line. Ferranti Effect. In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (&gt; 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. It can be stated as a factor, or as a percent increase. The capacitive line charging current produces a voltage drop across the line inductance that is in-phase with the sending-end voltage, assuming negligible line resistance. Therefore, both line inductance and capacitance are responsible for this phenomenon. This can be analysed by considering the line as a transmission line where the source impedance is lower than the load impedance (unterminated). The effect is similar to an electrically short version of the quarter-wave impedance transformer, but with smaller voltage transformation. The Ferranti effect is more pronounced the longer the line and the higher the voltage applied. The relative voltage rise is proportional to the square of the line length and the square of frequency. The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length, and lower electrical impedance. Corona discharge. A corona discharge is an electrical discharge brought on by the ionization of a fluid such as air surrounding a conductor that is electrically charged. Spontaneous corona discharges occur naturally in high-voltage systems unless care is taken to limit the electric field strength. A corona will occur when the strength of the electric field (potential gradient) around a conductor is high enough to form a conductive region, but not high enough to cause electrical breakdown or arcing to nearby objects. It is often seen as a bluish (or another colour) glow in the air adjacent to pointed metal conductors carrying high voltages and emits light by the same property as a gas discharge lamp. In many high voltage applications, the corona is an unwanted side effect. Corona discharge from high voltage electric power transmission lines constitutes an economically significant waste of energy. Corona discharges are suppressed by improved insulation, corona rings, and making high voltage electrodes in smooth rounded shapes. ABCD parameters. A, B, C, D are the constants also known as the transmission parameters or chain parameters. These parameters are used for the analysis of an electrical network. They are also used for determining the performance of the input and output voltage and current of the transmission network. Propagation constant. The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. 
The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. In the context of two-port networks and their cascades, the propagation constant measures the change undergone by the source quantity as it propagates from one port to the next. Attenuation constant. The real part of the propagation constant is the attenuation constant and is denoted by the Greek lowercase letter α (alpha). It causes the signal amplitude to decrease along a transmission line. Phase constant. The imaginary part of the propagation constant is the phase constant and is denoted by the Greek lowercase letter β (beta). It causes the signal phase to shift along a transmission line. It is generally expressed in radians per meter (rad/m). The propagation constant is denoted by the Greek lowercase letter γ (gamma), and γ = α + jβ. Voltage Regulation. Voltage regulation is a measure of the change in the voltage magnitude between the sending and receiving end of a component, such as a transmission or distribution line. It is given in percentage for different lines. Mathematically, voltage regulation is given by, formula_1 Line parameters of AC transmission. The AC transmission line has four line parameters: the series resistance and inductance, and the shunt capacitance and admittance. These parameters are responsible for the distinct behaviour of voltage and current waveforms along the transmission line. Line parameters are generally represented in their respective units per km of length of the transmission line. These parameters depend upon the geometric arrangement of the transmission line (number of conductors used, shape of the conductors, physical spacing between conductors, height above the ground, etc.). These parameters are independent of the current and voltage at the sending and receiving ends. Series resistance. Definition. The electrical resistance of an object is the property due to which it restricts the flow of electric current driven by a potential difference across its two ends. The inverse quantity is electrical conductance, the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with the notion of mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S). Characteristics. The resistance of an object depends in large part on the material it is made of—objects made of electrical insulators like rubber tend to have very high resistance and low conductivity, while objects made of electrical conductors like metals tend to have very low resistance and high conductivity. This material dependence is quantified by resistivity or conductivity. However, resistance and conductance are extensive rather than bulk properties, meaning that they also depend on the size and shape of an object. For example, a wire's resistance is higher if it is long and thin, and lower if it is short and thick. All objects show some resistance, except for superconductors, which have a resistance of zero. 
The resistance ("R") of an object is defined as the ratio of voltage across it ("V") to current through it ("I"), while the conductance ("G") is the inverse: formula_2 For a wide variety of materials and conditions, "V" and "I" are directly proportional to each other, and therefore "R" and "G" are constants (although they will depend on the size and shape of the object, the material it is made of, and other factors like temperature or strain). This proportionality is called Ohm's law, and materials that satisfy it are called "ohmic" materials. In other cases, such as a transformer, diode or battery, "V" and "I" are "not" directly proportional. The ratio "V"/"I" is sometimes still useful, and is referred to as a "chordal resistance" or "static resistance", since it corresponds to the inverse slope of a chord between the origin and an "I–V" curve. In other situations, the derivative formula_3 may be most useful; this is called the "differential resistance". Transmission lines, as they consist of conducting wires of very long length, have an electrical resistance that can't be neglected at all. Series inductance. Definition. When current flows within a conductor, magnetic flux is set up. With the variation of current in the conductor, the number of lines of flux also changes, and an emf is induced in it (Faraday's Law). This induced emf is represented by the parameter known as inductance. It is customary to use the symbol L for inductance, in honour of the physicist Heinrich Lenz. In the SI system, the unit of inductance is the henry ("H"), which is the amount of inductance which causes a voltage of 1 volt when the current is changing at a rate of one ampere per second. It is named for Joseph Henry, who discovered inductance independently of Faraday. Types of inductance. The flux linking with the conductor consists of two parts, namely, the internal flux and the external flux : Characteristics. The transmission line wiring is also inductive in nature and, the inductance of a single circuit line can be given mathematically by : formula_4 Where, For transposed lines with two or more phases, the inductance between any two lines can be calculated using : formula_9. Where, formula_10 is the geometric mean distance in between the conductors. If the lines are not properly transposed, the inductances become unequal and contain imaginary terms due to mutual inductances. In case of proper transposition, all the conductors occupy the available positions equal distance and thus the imaginary terms are cancelled out. And all the line inductances become equal. Shunt capacitance. Definition. Capacitance is the ratio of the change in an electric charge in a system to the corresponding change in its electric potential. The capacitance is a function only of the geometry of the design (e.g. area of the plates and the distance between them) and the permittivity of the dielectric material between the plates of the capacitor. For many dielectric materials, the permittivity and thus the capacitance is independent of the potential difference between the conductors and the total charge on them. The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday. A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates. The reciprocal of capacitance is called elastance. Types of capacitance. There are two closely related notions of capacitance self-capacitance and mutual capacitance : Charecteristics. 
Transmission line conductors constitute a capacitor between them, exhibiting mutual capacitance. The conductors of the transmission line act as the parallel plates of a capacitor, and the air between them acts as the dielectric medium. The capacitance of a line gives rise to the leading current between the conductors. It depends on the length of the conductor. The capacitance of the line is proportional to the length of the transmission line. Its effect on the performance of lines with a short length and low voltage is negligible. In the case of high voltage and long lines, it is considered one of the most important parameters. The shunt capacitance of the line is responsible for the Ferranti effect. The capacitance of a single phase transmission line can be given mathematically by formula_11. For lines with two or more phases, the capacitance between any two lines can be calculated using formula_14, where formula_10 is the geometric mean distance of the conductors. The effect of self-capacitance on a transmission line is generally neglected because the conductors are not isolated and thus there exists no detectable self-capacitance. Shunt admittance. Definition. In electrical engineering, admittance is a measure of how easily a circuit or device will allow a current to flow. It is defined as the reciprocal of impedance. The SI unit of admittance is the siemens (symbol S); the older, synonymous unit is mho, and its symbol is ℧ (an upside-down uppercase omega Ω). Oliver Heaviside coined the term "admittance" in December 1887. Admittance is defined as formula_15 where "Y" is the admittance, measured in siemens, and "Z" is the impedance, measured in ohms. Characteristics. Resistance is a measure of the opposition of a circuit to the flow of a steady current, while impedance takes into account not only the resistance but also dynamic effects (known as reactance). Likewise, admittance is not only a measure of the ease with which a steady current can flow but also the dynamic effects of the material's susceptance to polarization: formula_16. The dynamic effects of the material's susceptance relate to the universal dielectric response, the power-law scaling of a system's admittance with frequency under alternating current conditions. In the context of electrical modelling of transmission lines, shunt components that provide paths of least resistance in certain models are generally specified in terms of their admittance. Transmission lines can span hundreds of kilometres, over which the line's capacitance can affect voltage levels. For short length transmission line analysis, this capacitance can be ignored and shunt components are not necessary for the model. Longer lines contain a shunt admittance governed by formula_21, where Y is the total shunt admittance, y is the shunt admittance per unit length, l is the length of the line and C is the capacitance of the line. Modelling of transmission lines. Two port networks. A two-port network (a kind of four-terminal network or quadripole) is an electrical network (circuit) or device with two "pairs" of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the electric current entering one terminal must equal the current emerging from the other terminal on the same port. The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. 
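The per-kilometre parameters described in the preceding sections are usually assembled into a total series impedance Z = (r + jωL)·l and a total shunt admittance Y = (G + jωC)·l before any of the lumped models discussed below are applied. The following sketch uses illustrative 50 Hz line data; the numerical values are assumptions, not taken from the article.

```python
import math

f = 50.0                 # supply frequency in Hz
w = 2 * math.pi * f      # angular frequency in rad/s

# Illustrative per-kilometre parameters of an overhead line.
r = 0.1          # series resistance, ohm/km
l_ind = 1.2e-3   # series inductance, H/km
c = 9.5e-9       # shunt capacitance, F/km
g = 0.0          # shunt conductance, S/km (usually negligible)

length = 200.0   # line length in km

z = complex(r, w * l_ind)   # series impedance per km
y = complex(g, w * c)       # shunt admittance per km

Z = z * length              # total series impedance of the line
Y = y * length              # total shunt admittance of the line
print("Z =", Z, "ohm")
print("Y =", Y, "S")
```

These totals Z and Y are exactly the quantities that appear in the ABCD constants of the short, medium and long line models developed in the following sections.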
In a two-port network, often port 1 is considered the input port and port 2 is considered the output port. The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a "black box" with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their h-parameters, which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions. Transmission matrix and ABCD parameters. Oftentimes, we are only interested in the terminal characteristics of the transmission line, which are the voltage and current at the sending and receiving ends, for performance analysis of the line. The transmission line itself is then modelled as a "black box" and a 2 by 2 transmission matrix is used to model its behaviour, as follows: formula_22 Derivation. This equation in matrix form consists of two individual equations, as stated below: formula_23 formula_24 where formula_25 is the sending end voltage, formula_26 is the receiving end voltage, formula_27 is the sending end current and formula_28 is the receiving end current. 1. formula_29 So, the parameter A is the ratio of the sending end voltage to the receiving end voltage, and is thus called the voltage ratio. Being the ratio of two like quantities, the parameter A is unitless. 2. formula_30 So, the parameter C is the ratio of the sending end current to the receiving end voltage, and is thus called the transfer admittance; the unit of C is the mho (formula_31). 1. formula_32 So, the parameter B is the ratio of the sending end voltage to the receiving end current, and is thus called the transfer impedance; the unit of B is the ohm (Ω). 2. formula_33 So, the parameter D is the ratio of the sending end current to the receiving end current, and is thus called the current ratio. Being the ratio of two like quantities, the parameter D is unitless. ABCD parameter values. To summarize, the ABCD parameters for a two-port (four-terminal) passive, linear and bilateral network are as given above. Properties. The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T also has the following properties: det(T) = AD − BC = 1, and A = D. The parameters "A", "B", "C", and "D" differ depending on how the desired model handles the line's resistance ("R"), inductance ("L"), capacitance ("C"), and shunt (parallel, leak) conductance "G". The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In all models described, a capital letter such as "R" refers to the total quantity summed over the line and a lowercase letter such as "r" refers to the per-unit-length quantity. Classification of AC transmission line. Classification overview. The AC transmission line has resistance R, inductance L, capacitance C and the shunt or leakage conductance G. These parameters, along with the load, determine the performance of the line. 
The term performance covers the sending end voltage, sending end current, sending end power factor, power loss in the line, efficiency of the transmission line, and the regulation and limits of power flow during steady-state and transient conditions. The AC transmission line is generally categorized into three classes. The classification of a transmission line depends on its length relative to the wavelength at the power frequency and is an assumption made for ease of calculation of line performance parameters and losses. Because of this, the range of length for the categorization of a transmission line is not rigid. The ranges of length may vary (a little), and all of them are valid in their areas of approximation. Basis of classification. Derivation of voltage/current wavelength. Current and voltage propagate in a transmission line with a speed equal to the speed of light (c), i.e. approximately formula_36, and the frequency (f) of the voltage or current is 50 Hz (although in the Americas and parts of Asia it is typically 60 Hz). Therefore, the wavelength (λ) can be calculated as below: formula_37 or, formula_38 or, formula_39 Reason behind classification. A transmission line of 60 km length is very small (formula_40 times) when compared with the wavelength, i.e. 6000 km. Up to a line length of 240 km (formula_41 times the wavelength; 250 km is taken for easy remembering), the portion of the current or voltage waveform along the line is so small that it can be approximated by a straight line for all practical purposes. For line lengths up to about 240 km, the parameters are assumed to be lumped (though practically these parameters are always distributed). Therefore, the response of a transmission line of length up to 250 km can be considered linear, and hence the equivalent circuit of the line can be approximated by a linear circuit. But if the length of the line is more than 250 km, say 400 km, i.e. formula_42 times the wavelength, then the current or voltage waveform cannot be considered linear and integration must be used for the analysis of these lines. Short transmission line. The transmission lines which have a length less than 60 km are generally referred to as short transmission lines. Because of the short length, parameters like the electrical resistance, impedance and inductance of these short lines are assumed to be lumped. The shunt capacitance of a short line is almost negligible and thus is not taken into account (or is assumed to be zero). Derivation of ABCD parameter values. Now, if the impedance per km of an l km line is formula_43, and the sending end and receiving end voltages make angles of formula_44 and formula_45 respectively with the receiving end current, then the total impedance of the line will be formula_46. The sending end voltage and current for this approximation are obtained as follows. In this, the sending and receiving end voltages are denoted by formula_25 and formula_26 respectively. Also, the currents formula_27 and formula_28 are entering and leaving the network respectively. So, by considering the equivalent circuit model for the short transmission line, the transmission matrix can be obtained, and the ABCD parameters are given by A = D = 1, B = Z Ω and C = 0. Medium transmission line. The transmission line having its effective length more than 80 km but less than 250 km is generally referred to as a medium transmission line. 
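As a numerical illustration of the short-line model just derived (the treatment of medium-length lines continues below), the following sketch computes the sending-end voltage and the voltage regulation of an illustrative 40 km line; all numerical values are assumptions, not data from the article.

```python
import cmath
import math

# Illustrative short line: 40 km at 0.15 + j0.40 ohm per km.
z_per_km = complex(0.15, 0.40)
length = 40.0
Z = z_per_km * length                      # total series impedance

# Receiving-end conditions: 66 kV line-to-line, 30 MW at 0.8 power factor lagging.
v_r = 66e3 / math.sqrt(3)                  # receiving-end phase voltage, taken as reference
p_r = 30e6
pf = 0.8
i_mag = p_r / (math.sqrt(3) * 66e3 * pf)   # line current magnitude
i_r = cmath.rect(i_mag, -math.acos(pf))    # lagging current phasor

# Short-line model: A = D = 1, B = Z, C = 0, so Vs = Vr + Z*Ir and Is = Ir.
v_s = v_r + Z * i_r

# For a short line the no-load receiving-end voltage equals |Vs|, so the
# regulation is (|Vs| - |Vr,full load|) / |Vr,full load|.
regulation = (abs(v_s) - v_r) / v_r * 100
print(f"sending-end phase voltage = {abs(v_s) / 1e3:.2f} kV")
print(f"voltage regulation = {regulation:.2f} %")
```

The same computation with C = 0 and A = D = 1 reproduces the transmission-matrix form given above, with the entire behaviour of the short line captured by the single lumped impedance Z.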
Because the line length is considerably greater, the shunt capacitance, and hence the admittance Y of the network, plays a role in calculating the effective circuit parameters, unlike in the case of short transmission lines. For this reason, a medium length transmission line is modelled using a lumped shunt admittance along with the lumped series impedance of the circuit. These lumped parameters of a medium length transmission line can be represented using two different models, namely the nominal Π representation and the nominal T representation. Nominal Π representation. In the nominal Π representation, the total lumped shunt admittance is divided into two equal halves, and each half, with value Y ⁄ 2, is placed at the sending and receiving ends, while the entire circuit impedance is lumped between the two halves. The circuit so formed resembles the symbol pi (Π), and hence is known as the nominal Π (or Π network representation) of a medium transmission line. It is mainly used for determining the general circuit parameters and performing load flow analysis. Derivation of ABCD parameter values. Applying KCL at the two shunt ends, we get formula_47. In this, the sending and receiving end voltages are denoted by formula_25 and formula_26 respectively. Also, the currents formula_27 and formula_28 are entering and leaving the network respectively, formula_48 are the currents through the shunt capacitances at the sending and receiving ends respectively, and formula_49 is the current through the series impedance. Again, formula_50, so by substituting we get formula_51. The equations thus obtained, eq.(4) & eq.(5), can be written in matrix form as follows: So the ABCD parameters are A = D = formula_52 per unit, B = Z Ω, C = formula_53. Nominal T representation. In the nominal T model of a medium transmission line, the net series impedance is divided into two halves placed on either side of the lumped shunt admittance, which sits in the middle. The circuit so formed resembles the symbol of a capital T or star (Y), and hence is known as the nominal T network of a medium length transmission line. Derivation of ABCD parameter values. The application of KCL at the junction (the neutral point of the Y connection) gives formula_54. The above equation can be rearranged as formula_55. Here, the sending and receiving end voltages are denoted by formula_25 and formula_26 respectively. Also, the currents formula_27 and formula_28 are entering and leaving the network respectively. Now, for the receiving end current, we can write: By rearranging the equation and replacing formula_56 with the derived value, we get: Now, the sending end current can be written as formula_57. Replacing the value of formula_56 in the above equation: The equations thus obtained, eq.(8) & eq.(9), can be written in matrix form as follows: So the ABCD parameters are A = D = formula_52 per unit, B = formula_58, C = formula_59. Long transmission line. A transmission line having a length of more than 250 km is considered a long transmission line. Unlike short and medium lines, the line parameters of a long transmission line are assumed to be distributed uniformly at every point of the line. Modelling a long line is therefore somewhat more difficult, but a few approaches can be made based on the length and the values of the line parameters.
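Before turning to the distributed-parameter analysis, the nominal Π parameters obtained above can be evaluated numerically. The following C sketch uses assumed per-km values for a hypothetical 150 km medium line and also verifies that AD - BC = 1, as expected for a reciprocal, symmetrical network.

#include <stdio.h>
#include <complex.h>

int main(void) {
    /* Hypothetical 150 km medium line, per-km values (all assumed). */
    double len = 150.0;                            /* km                      */
    double complex z = 0.10 + 0.40 * I;            /* series impedance, ohm/km */
    double complex y = 0.0 + 3.0e-6 * I;           /* shunt admittance, S/km   */
    double complex Z = z * len;                    /* total series impedance   */
    double complex Y = y * len;                    /* total shunt admittance   */

    /* Nominal-Pi ABCD parameters, as derived above. */
    double complex A = 1.0 + Y * Z / 2.0;
    double complex D = A;
    double complex B = Z;
    double complex C = Y * (1.0 + Y * Z / 4.0);

    /* Reciprocity check for the symmetrical network: AD - BC = 1. */
    double complex det = A * D - B * C;

    printf("A = %.4f + j%.4f per unit\n", creal(A), cimag(A));
    printf("B = %.2f + j%.2f ohm\n", creal(B), cimag(B));
    printf("C = %.6e + j%.6e S\n", creal(C), cimag(C));
    printf("AD - BC = %.6f + j%.6f\n", creal(det), cimag(det));
    return 0;
}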
For a long transmission line, the line is considered to be divided into various sections, each consisting of inductance, capacitance, resistance and conductance, as shown in the RLC (resistance and inductance in series, with shunt capacitance) cascade model. Derivation of ABCD parameter values. Cascaded model approach. Consider a small element of a long transmission line of length dx, situated at a distance x from the receiving end. The series impedance of this element is represented by z dx and its shunt admittance by y dx. Due to the charging current and corona loss, the current is not uniform along the line. The voltage also differs in different parts of the line because of the inductive reactance. Here z is the series impedance per unit length, per phase, and y is the shunt admittance per unit length, per phase to neutral. Then formula_60. Again, as formula_61, formula_62. Now, for the current through the strip, applying KCL we get an expression whose second term is the product of two small quantities and can therefore be neglected; as formula_61 we have formula_63. Taking the derivative with respect to x of both sides, we get a second-order equation; the roots of its auxiliary equation are located at formula_64. Hence the solution is of the form given below. Taking the derivative with respect to x, and combining these two expressions, the following two quantities are defined: formula_65, which is called the characteristic impedance, and formula_66, which is called the propagation constant. Then the previous equations can be written in terms of the characteristic impedance and propagation constant. Now, at formula_67 we have formula_68 and formula_69. Therefore, by putting formula_67 into eq.(17) & eq.(18) we get two conditions, and solving eq.(19) & eq.(20) we get the following values for formula_70. Also, for formula_71, we have formula_72 and formula_73. Therefore, by replacing x by l we get eq.(22) & eq.(23), where formula_74 is called the incident voltage wave and formula_75 is called the reflected voltage wave. We can rewrite eq.(22) & eq.(23) as eq.(24) & eq.(25), and by considering the corresponding analogy for the long transmission line, the equations so obtained can be written in matrix form as follows: The ABCD parameters are given by A = D = formula_76, B = formula_77, C = formula_78. Π representation approach. Like the medium transmission line, the long line can also be approximated by an equivalent Π representation. In the Π-equivalent of a long transmission line, the series impedance is denoted by Z′ while the shunt admittance is denoted by Y′. So the ABCD parameters of this long line can be defined, as for the medium transmission line, as A = D = formula_79 per unit, B = Z′ Ω, C = formula_80. Comparing these with the ABCD parameters of the cascaded long transmission line model, we can write formula_81 or, formula_82, where Z (= zl) is the total impedance of the line. Comparing A in the same way gives formula_83. By rearranging the above equation, formula_84 or, formula_85. This can be further reduced to formula_86, where Y (= yl) is called the total admittance of the line. Now, if the line length l is small, formula_87, and it is found that Z = Z′ and Y = Y′. This implies that if the line length is small, the nominal-π representation with its assumption of lumped parameters is adequate. But if the length of the line exceeds a certain boundary (about 240 to 250 km), the nominal-π representation becomes inaccurate and cannot be used for performance analysis.
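The exact long-line parameters and the equivalent-Π corrections derived above can be evaluated with a short C sketch; all per-km values and the 400 km length are assumed for illustration only. The printout also shows how Z′ and Y′/2 differ from the simple lumped totals Z and Y/2 at this length.

#include <stdio.h>
#include <complex.h>

int main(void) {
    /* Hypothetical 400 km long line, per-km series impedance z and shunt admittance y. */
    double l = 400.0;                               /* km               */
    double complex z = 0.10 + 0.40 * I;             /* ohm/km (assumed) */
    double complex y = 0.0 + 3.0e-6 * I;            /* S/km   (assumed) */

    double complex Zc  = csqrt(z / y);              /* characteristic impedance   */
    double complex gam = csqrt(z * y);              /* propagation constant, 1/km */
    double complex gl  = gam * l;

    /* Exact (distributed-parameter) ABCD parameters. */
    double complex A = ccosh(gl);
    double complex D = A;
    double complex B = Zc * csinh(gl);
    double complex C = csinh(gl) / Zc;

    /* Equivalent-Pi corrections applied to the lumped totals Z = z*l and Y = y*l. */
    double complex Z = z * l, Y = y * l;
    double complex Zp = Z * csinh(gl) / gl;                              /* Z'   */
    double complex Yp_half = (Y / 2.0) * ctanh(gl / 2.0) / (gl / 2.0);   /* Y'/2 */

    printf("A = %.4f + j%.4f per unit\n", creal(A), cimag(A));
    printf("B = %.2f + j%.2f ohm\n", creal(B), cimag(B));
    printf("C = %.6e + j%.6e S\n", creal(C), cimag(C));
    printf("Z'   = %.2f + j%.2f ohm  (lumped Z   = %.2f + j%.2f)\n",
           creal(Zp), cimag(Zp), creal(Z), cimag(Z));
    printf("Y'/2 = %.3e + j%.3e S  (lumped Y/2 = %.3e + j%.3e)\n",
           creal(Yp_half), cimag(Yp_half), creal(Y / 2.0), cimag(Y / 2.0));
    return 0;
}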
Travelling waves. Travelling waves are the current and voltage waves that create a disturbance and move along the transmission line, from the sending end of the line to the other end, at a constant speed. Travelling waves play a major role in determining the voltages and currents at all points in the power system. These waves also help in designing the insulators, the protective equipment, the insulation of the terminal equipment, and the overall insulation coordination. When the switch is closed at the transmission line's starting end, the voltage does not appear instantaneously at the other end. This is caused by the transient behaviour of the inductances and capacitances present in the transmission line. The transmission line may not contain physical inductor and capacitor elements, but the effects of inductance and capacitance exist along the line. Therefore, when the switch is closed the voltage builds up gradually over the line conductors. This phenomenon is usually described as a voltage wave travelling from the sending end of the transmission line to the other end, and similarly the gradual charging of the capacitances is associated with a corresponding current wave. If the switch is closed at any instant of time, the voltage at the load does not appear instantly. The first section charges first and then charges the next section; until a section has been charged, the following section cannot be charged, so the process is a gradual one. It can be pictured as a row of connected water tanks in which water poured into the first tank flows gradually through to the last tank. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Z_{0}=\\sqrt{\\frac{L}{C}}" }, { "math_id": 1, "text": "V.R = \\frac{V_{no-load}-V_{full-load}}{V_{no-load}}" }, { "math_id": 2, "text": "R = {V\\over I}, \\qquad G = {I\\over V} = \\frac{1}{R}" }, { "math_id": 3, "text": " \\frac{dV}{dI} \\,\\!" }, { "math_id": 4, "text": "L= \\frac{\\mu_0}{2 \\pi} ln\\frac{D}{r\\prime} H/m" }, { "math_id": 5, "text": "r\\prime = re^{-1/4}" }, { "math_id": 6, "text": "e^{-1/4}" }, { "math_id": 7, "text": "\\mu_0" }, { "math_id": 8, "text": "{\\mu_0} = 4 \\pi \\times 10^{-7} H/m" }, { "math_id": 9, "text": "L= \\frac{\\mu_0}{2 \\pi} ln\\frac{D_{GMD}}{r\\prime} H/m" }, { "math_id": 10, "text": "D_{GMD}" }, { "math_id": 11, "text": "C_{ab} = \\frac{\\pi \\epsilon_0}{ln\\frac{D}{r}} F/m" }, { "math_id": 12, "text": "\\epsilon_0" }, { "math_id": 13, "text": "{\\epsilon_0} = 8.854 \\times 10^{-12}" }, { "math_id": 14, "text": "C = \\frac{\\pi \\epsilon_0}{ln\\frac{D_{GMD}}{r}} F/m" }, { "math_id": 15, "text": "Y \\equiv \\frac{1}{Z} \\," }, { "math_id": 16, "text": "Y = G + j B \\," }, { "math_id": 17, "text": "Y" }, { "math_id": 18, "text": "G" }, { "math_id": 19, "text": "B" }, { "math_id": 20, "text": "j^2 = -1" }, { "math_id": 21, "text": "Y=yl=j\\omega Cl" }, { "math_id": 22, "text": "\n\\begin{bmatrix}\n\tV_\\mathrm{S}\\\\\n\tI_\\mathrm{S}\\\\\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\tA & B\\\\\n\tC & D\\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n\tV_\\mathrm{R}\\\\\n\tI_\\mathrm{R}\\\\\n\\end{bmatrix}\n" }, { "math_id": 23, "text": "V_S = AV_R + BI_R" }, { "math_id": 24, "text": "I_S = CV_R + DI_R" }, { "math_id": 25, "text": "V_S" }, { "math_id": 26, "text": "V_R" }, { "math_id": 27, "text": "I_S" }, { "math_id": 28, "text": "I_R" }, { "math_id": 29, "text": "A = \\frac{V_S}{V_R}" }, { "math_id": 30, "text": "C= \\frac{I_S}{V_R}" }, { "math_id": 31, "text": "\\mho" }, { "math_id": 32, "text": "B = \\frac{V_S}{I_R}" }, { "math_id": 33, "text": "D = \\frac{I_S}{I_R}" }, { "math_id": 34, "text": "\\det(T) = AD - BC = 1" }, { "math_id": 35, "text": "A = D" }, { "math_id": 36, "text": "3 \\times 10^8 m/sec = 3 \\times 10^5 km/sec" }, { "math_id": 37, "text": "f \\lambda = c" }, { "math_id": 38, "text": "\\lambda = \\frac{c}{f}" }, { "math_id": 39, "text": "\\lambda = \\frac{3 \\times 10^5}{50} = 6000 Km" }, { "math_id": 40, "text": "\\tfrac{1}{100}" }, { "math_id": 41, "text": "\\tfrac{1}{25}" }, { "math_id": 42, "text": "\\tfrac{1}{15}" }, { "math_id": 43, "text": "z_{0} = r + j x " }, { "math_id": 44, "text": "\\Phi_s" }, { "math_id": 45, "text": "\\Phi_r" }, { "math_id": 46, "text": "Z = lr + ljx" }, { "math_id": 47, "text": "I_S = I_1 + I_2 = \\frac{Y}{2}V_S + \\frac{Y}{2}V_R + I_R" }, { "math_id": 48, "text": "I_1 \\& I_3" }, { "math_id": 49, "text": "I_2" }, { "math_id": 50, "text": "V_S = ZI_2 + V_R = Z(V_R \\frac{Y}{2} + I_R) + V_R" }, { "math_id": 51, "text": "I_S = \\frac{Y}{2}[(1 + \\frac{YZ}{2})V_R + ZI_R] + \\frac{Y}{2}V_R + I_R" }, { "math_id": 52, "text": "(1 + \\frac{YZ}{2})" }, { "math_id": 53, "text": "Y(1 + \\frac{YZ}{4}) \\mho" }, { "math_id": 54, "text": "V_J = \\frac{V_S - V_J}{\\frac{Z}{2}} = YV_J + \\frac{V_J - V_R}{\\frac{Z}{2}}" }, { "math_id": 55, "text": "V_J = \\frac{2}{YZ+4}(V_S + V_R)" }, { "math_id": 56, "text": "V_J" }, { "math_id": 57, "text": "I_S = YV_J + I_R" }, { "math_id": 58, "text": "Z(1+ \\frac{YZ}{4}) \\Omega" }, { "math_id": 59, "text": "Y \\mho" }, { "math_id": 60, "text": "\\Delta V = Iz \\Delta x \\Rightarrow \\frac{\\Delta V}{\\Delta x} = Iz" }, { "math_id": 61, "text": "\\Delta x 
\\rightarrow 0 " }, { "math_id": 62, "text": "\\frac{\\Delta V}{\\Delta x} = Iz" }, { "math_id": 63, "text": "\\frac{dI}{dx} = Vy" }, { "math_id": 64, "text": "\\pm \\sqrt{yz}" }, { "math_id": 65, "text": "Z_{c} = \\sqrt{\\frac{z}{y}} " }, { "math_id": 66, "text": "\\gamma = \\sqrt{yz}" }, { "math_id": 67, "text": "x = 0" }, { "math_id": 68, "text": "V = V_{r}" }, { "math_id": 69, "text": "I = I_{r}" }, { "math_id": 70, "text": "A_1 \\& A_2" }, { "math_id": 71, "text": "l = x" }, { "math_id": 72, "text": "V = V_{S}" }, { "math_id": 73, "text": "I = I_{S}" }, { "math_id": 74, "text": "\\frac{V_{r}+Z_{c}I_{r}}{2}e^{\\gamma l}" }, { "math_id": 75, "text": "\\frac{V_{r}-Z_{c}I_{r}}{2}e^{-\\gamma l}" }, { "math_id": 76, "text": "\\cosh \\gamma l " }, { "math_id": 77, "text": "Z_c \\sinh \\gamma l" }, { "math_id": 78, "text": "\\frac{ \\sinh \\gamma l}{Z_{c}}" }, { "math_id": 79, "text": "(1 + \\frac{Y\\prime Z\\prime}{2})" }, { "math_id": 80, "text": "Y(1 + \\frac{Y\\prime Z\\prime}{4}) \\mho" }, { "math_id": 81, "text": "Z\\prime = Z_C \\sinh \\gamma l = \\sqrt[]{\\frac{z}{y}} \\sinh \\gamma l = zl \\frac{\\sinh \\gamma l}{l \\sqrt[]{yz}}" }, { "math_id": 82, "text": "Z\\prime = Z \\frac{\\sinh \\gamma l}{\\gamma l} \\Omega" }, { "math_id": 83, "text": "\\cosh \\gamma l = 1 + \\frac{Y\\prime Z\\prime}{2} = \\frac{Y \\prime}{2} Z_C \\sinh \\gamma l + 1" }, { "math_id": 84, "text": "\\frac{Y\\prime}{2} = \\frac{1}{Z_C}\\frac{\\cosh \\gamma l -1}{\\sinh \\gamma l}" }, { "math_id": 85, "text": "\\frac{Y\\prime}{2} = \\frac{1}{Z_C}\\tanh (\\frac{\\gamma l}{2}) = \\sqrt[]{\\frac{y}{z}}\\tanh (\\frac{\\gamma l}{2})" }, { "math_id": 86, "text": "\\frac{Y\\prime}{2} = \\frac{yl}{2}\\frac{\\tanh (\\frac{\\gamma l}{2})}{\\frac{l}{2}\\sqrt[]{yz}} = \\frac{Y}{2}\\frac{\\tanh (\\frac{\\gamma l}{2})}{\\frac{(\\gamma l}{2})}" }, { "math_id": 87, "text": "\\sinh \\gamma l \\equiv \\gamma l\\quad \\&\\quad \\tanh (\\frac{\\gamma l}{2}) \\equiv \\frac{\\gamma l}{2}" } ]
https://en.wikipedia.org/wiki?curid=61328143
6133005
Generator matrix
In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix, that is, the linear code is the row space of its generator matrix. Terminology. If G is a matrix, it generates the codewords of a linear code "C" by formula_0 where w is a codeword of the linear code "C", and s is any input vector. Both w and s are assumed to be row vectors. A generator matrix for a linear formula_1-code has format formula_2, where "n" is the length of a codeword, "k" is the number of information bits (the dimension of "C" as a vector subspace), "d" is the minimum distance of the code, and "q" is the size of the finite field, that is, the number of symbols in the alphabet (thus, "q" = 2 indicates a binary code, etc.). The number of redundant bits is denoted by formula_3. The "standard" form for a generator matrix is formula_4, where formula_5 is the formula_6 identity matrix and P is a formula_7 matrix. When the generator matrix is in standard form, the code "C" is systematic in its first "k" coordinate positions. A generator matrix can be used to construct the parity check matrix for a code (and vice versa). If the generator matrix "G" is in standard form, formula_4, then the parity check matrix for "C" is formula_8, where formula_9 is the transpose of the matrix formula_10. This is a consequence of the fact that a parity check matrix of formula_11 is a generator matrix of the dual code formula_12. G is a formula_2 matrix, while H is a formula_13 matrix. Equivalent codes. Codes "C"1 and "C"2 are "equivalent" (denoted "C"1 ~ "C"2) if one code can be obtained from the other via the following two transformations: Equivalent codes have the same minimum distance. The generator matrices of equivalent codes can be obtained from one another via the following elementary operations: Thus, we can perform Gaussian elimination on "G". Indeed, this allows us to assume that the generator matrix is in the standard form. More precisely, for any matrix "G" we can find an invertible matrix "U" such that formula_14, where "G" and formula_15 generate equivalent codes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
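A minimal C sketch of the encoding relation w = sG and the parity-check construction described above. The particular [7,4] binary code used here, a standard-form G with a P chosen only for illustration, is an assumption of this example; the point is that every codeword w = sG satisfies Hw^T = 0 over GF(2), where H = [ -P^T | I_{n-k} ] and -P^T = P^T over GF(2).

#include <stdio.h>

#define K 4
#define N 7

/* Generator matrix in standard form G = [ I_4 | P ] (P chosen for illustration). */
static const int G[K][N] = {
    {1,0,0,0, 1,1,0},
    {0,1,0,0, 1,0,1},
    {0,0,1,0, 0,1,1},
    {0,0,0,1, 1,1,1},
};

/* Corresponding parity-check matrix H = [ P^T | I_3 ]. */
static const int H[N - K][N] = {
    {1,1,0,1, 1,0,0},
    {1,0,1,1, 0,1,0},
    {0,1,1,1, 0,0,1},
};

/* Encode an information row vector s (length k) as w = s G over GF(2). */
static void encode(const int s[K], int w[N]) {
    for (int j = 0; j < N; j++) {
        w[j] = 0;
        for (int i = 0; i < K; i++)
            w[j] ^= s[i] & G[i][j];   /* addition mod 2 is XOR */
    }
}

int main(void) {
    int s[K] = {1, 0, 1, 1};          /* arbitrary information bits */
    int w[N];
    encode(s, w);

    printf("codeword w = sG: ");
    for (int j = 0; j < N; j++) printf("%d", w[j]);
    printf("\n");

    /* Every codeword satisfies H w^T = 0 over GF(2). */
    for (int i = 0; i < N - K; i++) {
        int check = 0;
        for (int j = 0; j < N; j++) check ^= H[i][j] & w[j];
        printf("parity check %d: %d\n", i, check);
    }
    return 0;
}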
[ { "math_id": 0, "text": "w=sG" }, { "math_id": 1, "text": "[n, k, d]_q" }, { "math_id": 2, "text": "k \\times n" }, { "math_id": 3, "text": "r = n - k" }, { "math_id": 4, "text": "G = \\begin{bmatrix} I_k | P \\end{bmatrix}" }, { "math_id": 5, "text": "I_k" }, { "math_id": 6, "text": "k \\times k" }, { "math_id": 7, "text": "k \\times (n-k)" }, { "math_id": 8, "text": "H = \\begin{bmatrix} -P^{\\top} | I_{n-k} \\end{bmatrix}" }, { "math_id": 9, "text": "P^{\\top}" }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "C" }, { "math_id": 12, "text": "C^{\\perp}" }, { "math_id": 13, "text": "(n-k) \\times n" }, { "math_id": 14, "text": "UG = \\begin{bmatrix} I_k | P \\end{bmatrix}" }, { "math_id": 15, "text": "\\begin{bmatrix} I_k | P \\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=6133005
6133489
K (disambiguation)
K, or k, is the eleventh letter of the English alphabet. K may also refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; See also. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This page lists articles associated with the title K.
[ { "math_id": 0, "text": "K_n" }, { "math_id": 1, "text": "K_{m,n}" }, { "math_id": 2, "text": "\\mathbb K" }, { "math_id": 3, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=6133489
61338
Addition
Arithmetic operation Addition (usually signified by the plus symbol +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or "sum" of those values combined. The example in the adjacent image shows two columns of three apples and two apples each, totaling five apples. This observation is equivalent to the mathematical expression "3 + 2 = 5" (that is, "3 "plus" 2 is equal to 5"). Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, subspaces and subgroups. Addition has several important properties. It is commutative, meaning that the order of the operands does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting (see "Successor function"). Addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day. Notation and terminology. Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, formula_0 ("one plus two equals three") formula_1 (see "associativity" below) formula_2 (see "multiplication" below) There are also situations where addition is "understood", even though no symbol appears: a whole number followed immediately by a fraction indicates the sum of the two, called a "mixed number"; for example, formula_3 This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, formula_4 Terms. The numbers or the objects to be added in general addition are collectively referred to as the terms, the addends or the summands; this terminology carries over to the summation of multiple terms. This is to be distinguished from "factors", which are multiplied. Some authors call the first addend the "augend". In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the commutative property of addition, "augend" is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb "addere", which is in turn a compound of "ad" "to" and "dare" "to give", from the Proto-Indo-European root "*deh₃-" "to give"; thus to "add" is to "give to". Using the gerundive suffix "-nd" results in "addend", "thing to be added".
Likewise from "augere" "to increase", one gets "augend", "thing to be increased". "Sum" and "summand" derive from the Latin noun "summa" "the highest, the top" and associated verb "summare". This is appropriate not only because the sum of two positive numbers is greater than either, but because it was common for the ancient Greeks and Romans to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends. "Addere" and "summare" date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer. The plus sign "+" (Unicode:U+002B; ASCII: codice_0) is an abbreviation of the Latin word "et", meaning "and". It appears in mathematical works dating back to at least 1489. Interpretations. Addition is used to model many physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations. Combining sets. Possibly the most basic interpretation of addition lies in combining sets: This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics (for the rigorous definition it inspires, see below). However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers. One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than solely combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods. Extending a length. A second interpretation of addition comes from extending an initial length by a given length: The sum "a" + "b" can be interpreted as a binary operation that combines "a" and "b", in an algebraic sense, or it can be interpreted as the addition of "b" more units to "a". Under the latter interpretation, the parts of a sum "a" + "b" play asymmetric roles, and the operation "a" + "b" is viewed as applying the unary operation +"b" to "a". Instead of calling both "a" and "b" addends, it is more appropriate to call "a" the augend in this case, since "a" plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and "vice versa". Properties. Commutativity. Addition is commutative, meaning that one can change the order of the terms in a sum, but still get the same result. Symbolically, if "a" and "b" are any two numbers, then "a" + "b" = "b" + "a". The fact that addition is commutative is known as the "commutative law of addition" or "commutative property of addition". Some other binary operations are commutative, such as multiplication, but many others, such as subtraction and division, are not. Associativity. Addition is associative, which means that when three or more numbers are added together, the order of operations does not change the result. As an example, should the expression "a" + "b" + "c" be defined to mean ("a" + "b") + "c" or "a" + ("b" + "c")? Given that addition is associative, the choice of definition is irrelevant. For any three numbers "a", "b", and "c", it is true that ("a" + "b") + "c" = "a" + ("b" + "c"). For example, (1 + 2) + 3 = 3 + 3 = 6 = 1 + 5 = 1 + (2 + 3). 
When addition is used together with other operations, the order of operations becomes important. In the standard order of operations, addition is a lower priority than exponentiation, nth roots, multiplication and division, but is given equal priority to subtraction. Identity element. Adding zero to any number, does not change the number; this means that zero is the identity element for addition, and is also known as the additive identity. In symbols, for every "a", one has "a" + 0 = 0 + "a" = "a". This law was first identified in Brahmagupta's "Brahmasphutasiddhanta" in 628 AD, although he wrote it as three separate laws, depending on whether "a" is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + "a" = "a". In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement "a" + 0 = "a". Successor. Within the context of integers, addition of one also plays a special role: for any integer "a", the integer ("a" + 1) is the least integer greater than "a", also known as the successor of "a". For instance, 3 is the successor of 2 and 7 is the successor of 6. Because of this succession, the value of "a" + "b" can also be seen as the "b"th successor of "a", making addition iterated succession. For example, 6 + 2 is 8, because 8 is the successor of 7, which is the successor of 6, making 8 the 2nd successor of 6. Units. To numerically add physical quantities with units, they must be expressed with common units . For example, adding 50 milliliters to 150 milliliters gives 200 milliliters. However, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis. Performing addition. Innate ability. Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected. A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants "expect" 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies. Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5. Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaque and cottontop tamarin monkeys performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training. More recently, Asian elephants have demonstrated an ability to perform basic arithmetic. Childhood learning. 
Typically, children first master counting. When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, "five"" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers. Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case, starting with three and counting "four, "five"." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that 6 + 6 = 12 and then reason that 6 + 7 is one more, or 13. Such derived facts can be found very quickly and most elementary school students eventually rely on a mixture of memorized and derived facts to add fluently. Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school. However, throughout the world, addition is taught by the end of the first year of elementary school. Table. Children are often presented with the addition table of pairs of numbers from 0 to 9 to memorize. Knowing this, children can perform any addition. Decimal system. The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient: As students grow older, they commit more facts to memory, and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly. Carry. The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column exceeds nine, the extra digit is "carried" into the next column. For example, in the addition 27 + 59 ¹ 27 + 59 86 7 + 9 = 16, and the digit 1 is the carry. An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many alternative methods. Since the end of the 20th century, some US programs, including TERC, decided to remove the traditional transfer method from their curriculum. This decision was criticized, which is why some states and counties did not support this experiment. Decimal fractions. Decimal fractions can be added by a simple modification of the above process. One aligns two decimal fractions above each other, with the decimal point in the same location. If necessary, one can add trailing zeros to a shorter decimal to make it the same length as the longer decimal. Finally, one performs the same addition process as above, except the decimal point is placed in the answer, exactly where it was placed in the summands. As an example, 45.1 + 4.34 can be solved as follows: 4 5 . 1 0 + 0 4 . 3 4 4 9 . 4 4 Scientific notation. 
In scientific notation, numbers are written in the form formula_5, where formula_6 is the significand and formula_7 is the exponential part. Addition requires two numbers in scientific notation to be represented using the same exponential part, so that the two significands can simply be added. For example: formula_8 Non-decimal. Addition in other bases is very similar to decimal addition. As an example, one can consider addition in binary. Adding two single-digit binary numbers is relatively simple, using a form of carrying: 0 + 0 → 0 0 + 1 → 1 1 + 0 → 1 1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 21)) Adding two "1" digits produces a digit "0", while 1 must be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented: 5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + (1 × 101)) 7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + (1 × 101)) This is known as "carrying". When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary: 1 1 1 1 1 (carried digits) 0 1 1 0 1 + 1 0 1 1 1 1 0 0 1 0 0 = 36 In this example, two numerals are being added together: 011012 (1310) and 101112 (2310). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 102. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 102 again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 112. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 1001002 (3610). Computers. Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier. Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance. The abacus, also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in Asia, Africa, and elsewhere; it dates back to at least 2700–2300 BC, when it was used in Sumer. Blaise Pascal invented the mechanical calculator in 1642; it was the first operational adding machine. It made use of a gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century and the earliest automatic, digital computer. 
Pascal's calculator was limited by its carry mechanism, which forced its wheels to only turn one way so it could add. To subtract, the operator had to use the Pascal's calculator's complement, which required as many steps as an addition. Giovanni Poleni followed Pascal, building the second functional mechanical calculator in 1709, a calculating clock made of wood that, once set up, could multiply two numbers automatically. Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer. In practice, computational addition may be achieved via XOR and AND bitwise logical operations in conjunction with bitshift operations, as shown in the code below. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of full adder circuits, which in turn may be combined into more complex logical operations. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Many implementations are, in fact, hybrids of these last three designs. Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish. In modern times, the ADD instruction of a microprocessor often replaces the augend with the sum but preserves the addend. In a high-level programming language, evaluating "a" + "b" does not change either "a" or "b"; if the goal is to replace "a" with the sum this must be explicitly requested, typically with the statement "a" = "a" + "b". Some languages such as C or C++ allow this to be abbreviated as "a" += "b".

// Iterative algorithm
int add(int x, int y) {
    int carry = 0;
    while (y != 0) {
        carry = x & y;   // bitwise AND collects the carry bits
        x = x ^ y;       // bitwise XOR adds each bit position without carrying
        y = carry << 1;  // shift the carries left by one position
    }
    return x;
}

// Recursive algorithm
int add(int x, int y) {
    return (y == 0) ? x : add(x ^ y, (x & y) << 1);
}

On a computer, if the result of an addition is too large to store, an arithmetic overflow occurs, resulting in an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause of program errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests. The Year 2000 problem was a series of bugs where overflow errors occurred due to use of a 2-digit format for years. Addition of numbers. To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers.
In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers. (In mathematics education, positive fractions are added before negative numbers are even considered; this is also the historical route.) Natural numbers. There are two popular ways to define the sum of two natural numbers "a" and "b". If one defines natural numbers to be the cardinalities of finite sets, (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows: Here, "A" ∪ "B" is the union of "A" and "B". An alternate version of this definition allows "A" and "B" to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice. The other popular definition is recursive: Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the recursion theorem on the partially ordered set N2. On the other hand, some sources prefer to use a restricted recursion theorem that applies only to the set of natural numbers. One then considers "a" to be temporarily "fixed", applies recursion on "b" to define a function ""a" +", and pastes these unary operations for all "a" together to form the full binary operation. This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction. Integers. The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases: Although this definition can be useful for concrete problems, the number of cases to consider complicates proofs unnecessarily. So the following method is commonly used for defining integers. It is based on the remark that every integer is the difference of two natural integers and that two such differences, "a" – "b" and "c" – "d" are equal if and only if "a" + "d" = "b" + "c". So, one can define formally the integers as the equivalence classes of ordered pairs of natural numbers under the equivalence relation ("a", "b") ~ ("c", "d") if and only if "a" + "d" = "b" + "c". The equivalence class of ("a", "b") contains either ("a" – "b", 0) if "a" ≥ "b", or (0, "b" – "a") otherwise. If n is a natural number, one can denote +"n" the equivalence class of ("n", 0), and by –"n" the equivalence class of (0, "n"). This allows identifying the natural number n with the equivalence class +"n". Addition of ordered pairs is done component-wise: formula_10 A straightforward computation shows that the equivalence class of the result depends only on the equivalences classes of the summands, and thus that this defines an addition of equivalence classes, that is integers. Another straightforward computation shows that this addition is the same as the above case definition. This way of defining integers as equivalence classes of pairs of natural numbers, can be used to embed into a group any commutative semigroup with cancellation property. Here, the semigroup is formed by the natural numbers and the group is the additive group of integers. 
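The recursive definition referred to above, together with the component-wise addition of pairs formula_10, can be made concrete in a few lines of C. The formulation used here, a + 0 = a and a + succ(b) = succ(a + b), is one common way of stating the recursion; the particular numbers are arbitrary.

#include <stdio.h>

/* Addition on the natural numbers defined through the successor operation:
   a + 0 = a and a + succ(b) = succ(a + b). */
static unsigned succ(unsigned n) { return n + 1; }

static unsigned add_nat(unsigned a, unsigned b) {
    if (b == 0) return a;               /* a + 0 = a                   */
    return succ(add_nat(a, b - 1));     /* a + succ(b') = succ(a + b') */
}

/* An integer represented as a pair (p, q) of naturals standing for p - q.
   Addition of pairs is component-wise: (a, b) + (c, d) = (a + c, b + d). */
struct zpair { unsigned p, q; };

static struct zpair add_int(struct zpair x, struct zpair y) {
    struct zpair r = { add_nat(x.p, y.p), add_nat(x.q, y.q) };
    return r;
}

int main(void) {
    printf("6 + 2 = %u\n", add_nat(6, 2));              /* prints 8 */

    struct zpair minus3 = {0, 3};   /* the class of (0, 3), i.e. -3 */
    struct zpair plus5  = {5, 0};   /* the class of (5, 0), i.e. +5 */
    struct zpair s = add_int(minus3, plus5);
    printf("(-3) + 5 is the class of (%u, %u), i.e. %d\n",
           s.p, s.q, (int) s.p - (int) s.q);            /* prints 2 */
    return 0;
}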
The rational numbers are constructed similarly, by taking as semigroup the nonzero integers with multiplication. This construction has also been generalized under the name of Grothendieck group to the case of any commutative semigroup. Without the cancellation property the semigroup homomorphism from the semigroup into the group may be non-injective. Originally, the "Grothendieck group" was, more specifically, the result of this construction applied to the equivalence classes under isomorphism of the objects of an abelian category, with the direct sum as semigroup operation. Rational numbers (fractions). Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication: Define formula_11. As an example, the sum formula_12. Addition of fractions is much simpler when the denominators are the same; in this case, one can simply add the numerators while leaving the denominator the same: formula_13, so formula_14. The commutativity and associativity of rational addition are an easy consequence of the laws of integer arithmetic. For a more rigorous and general discussion, see "field of fractions". Real numbers. A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers "a" and "b" is defined element by element: formula_15 This definition was first published, in a slightly modified form, by Richard Dedekind in 1872. The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses. Unfortunately, dealing with multiplication of Dedekind cuts is a time-consuming case-by-case process similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim "a""n". Addition is defined term by term: formula_16 This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions. Complex numbers. Complex numbers are added by adding the real and imaginary parts of the summands. That is to say: formula_17 Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers "A" and "B", interpreted as points of the complex plane, is the point "X" obtained by building a parallelogram three of whose vertices are "O", "A" and "B". Equivalently, "X" is the point such that the triangles with vertices "O", "A", "B", and "X", "B", "A", are congruent. Generalizations. There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory. Abstract algebra. Vectors.
In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair ("a","b") is interpreted as a vector from the origin in the Euclidean plane to the point ("a","b") in the plane. The sum of two vectors is obtained by adding their individual coordinates: formula_18 This addition operation is central to classical mechanics, in which velocities, accelerations and forces are all represented by vectors. Matrices. Matrix addition is defined for two matrices of the same dimensions. The sum of two "m" × "n" (pronounced "m by n") matrices A and B, denoted by A + B, is again an "m" × "n" matrix computed by adding corresponding elements: formula_19 For example: formula_20 Modular arithmetic. In modular arithmetic, the set of available numbers is restricted to a finite subset of the integers, and addition "wraps around" when reaching a certain value, called the modulus. For example, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. A similar "wrap around" operation arises in geometry, where the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori. General theory. The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups. Set theory and category theory. A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation. In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as direct sum and wedge sum, are named to evoke their connection with addition. Related operations. Addition, along with subtraction, multiplication and division, is considered one of the basic operations and is used in elementary arithmetic. Arithmetic. Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions. Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction. Multiplication can be thought of as repeated addition. If a single term x appears in a sum "n" times, then the sum is the product of "n" and x. 
If "n" is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number. In the real and complex numbers, addition and multiplication can be interchanged by the exponential function: formula_21 This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra. There are even more generalizations of multiplication than addition. In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)("a" + "b") in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general. Division is an arithmetic operation remotely related to addition. Since "a"/"b" = "a"("b"−1), division is right distributive over addition: ("a" + "b") / "c" = "a"/"c" + "b"/"c". However, division is not left distributive over addition; 1 / (2 + 2) is not the same as 1/2 + 1/2. Ordering. The maximum operation "max ("a", "b")" is a binary operation similar to addition. In fact, if two nonnegative numbers "a" and "b" are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If "b" is much greater than "a", then a straightforward calculation of ("a" + "b") − "b" can accumulate an unacceptable round-off error, perhaps even returning zero. See also "Loss of significance". The approximation becomes exact in a kind of infinite limit; if either "a" or "b" is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two. Accordingly, there is no subtraction operation for infinite cardinals. Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition: formula_22 For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity. Some authors prefer to replace addition with minimization; then the additive identity is positive infinity. Tying these observations together, tropical addition is approximately related to regular addition through the logarithm: formula_23 which becomes more accurate as the base of the logarithm increases. The approximation can be made exact by extracting a constant "h", named by analogy with the Planck constant from quantum mechanics, and taking the "classical limit" as "h" tends to zero: formula_24 In this sense, the maximum operation is a "dequantized" version of addition. 
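The limiting formula above can be checked numerically. This C sketch evaluates h·log(e^(a/h) + e^(b/h)) for a decreasing sequence of h, with arbitrarily chosen a and b, and compares the result with max(a, b); it also illustrates the distributive identity a + max(b, c) = max(a + b, a + c).

#include <stdio.h>
#include <math.h>

/* h * log(e^(a/h) + e^(b/h)), which tends to max(a, b) as h tends to 0. */
static double quantized_max(double a, double b, double h) {
    /* Factor out the larger argument so the exponentials cannot overflow. */
    double m = a > b ? a : b;
    return m + h * log(exp((a - m) / h) + exp((b - m) / h));
}

int main(void) {
    double a = 3.0, b = 2.5;                      /* arbitrary values */
    double hs[] = {1.0, 0.1, 0.01, 0.001};

    for (int i = 0; i < 4; i++)
        printf("h = %-6g  h*log(e^(a/h)+e^(b/h)) = %.6f\n",
               hs[i], quantized_max(a, b, hs[i]));
    printf("max(a, b) = %.6f\n", a > b ? a : b);

    /* Addition distributes over max, just as multiplication distributes
       over addition: a + max(b, c) = max(a + b, a + c). */
    double c = 7.0, x = 1.5;
    printf("x + max(b, c)     = %.2f\n", x + fmax(b, c));
    printf("max(x + b, x + c) = %.2f\n", fmax(x + b, x + c));
    return 0;
}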
Other ways to add. Incrementation, also known as the successor operation, is the addition of 1 to a number. Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero. An infinite summation is a delicate procedure known as a series. Counting a finite set is equivalent to summing 1 over the set. Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation. Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics. Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "1 + 2 = 3" }, { "math_id": 1, "text": "5 + 4 + 2 = 11" }, { "math_id": 2, "text": "3 + 3 + 3 + 3 = 12" }, { "math_id": 3, "text": "3\\frac{1}{2}=3+\\frac{1}{2}=3.5." }, { "math_id": 4, "text": "\\sum_{k=1}^5 k^2 = 1^2 + 2^2 + 3^2 + 4^2 + 5^2 = 55." }, { "math_id": 5, "text": "x=a\\times10^{b}" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "10^{b}" }, { "math_id": 8, "text": "2.34\\times10^{-5} + 5.67\\times10^{-6} = 2.34\\times10^{-5} + 0.567\\times10^{-5} = 2.907\\times10^{-5}" }, { "math_id": 9, "text": " N(A \\cup B)" }, { "math_id": 10, "text": "\n(a, b)+(c, d)=(a+c,b+d)." }, { "math_id": 11, "text": "\\frac ab + \\frac cd = \\frac{ad+bc}{bd}." }, { "math_id": 12, "text": "\\frac 34 + \\frac 18 = \\frac{3 \\times 8+4 \\times 1}{4 \\times 8} = \\frac{24 + 4}{32} = \\frac{28}{32} = \\frac78" }, { "math_id": 13, "text": "\\frac ac + \\frac bc = \\frac{a + b}{c}" }, { "math_id": 14, "text": "\\frac 14 + \\frac 24 = \\frac{1 + 2}{4} = \\frac 34" }, { "math_id": 15, "text": "a+b = \\{q+r \\mid q\\in a, r\\in b\\}." }, { "math_id": 16, "text": "\\lim_na_n+\\lim_nb_n = \\lim_n(a_n+b_n)." }, { "math_id": 17, "text": "(a+bi) + (c+di) = (a+c) + (b+d)i." }, { "math_id": 18, "text": "(a,b) + (c,d) = (a+c,b+d)." }, { "math_id": 19, "text": "\\begin{align}\n\\mathbf{A}+\\mathbf{B} & = \\begin{bmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} & a_{m2} & \\cdots & a_{mn} \\\\\n\\end{bmatrix} +\n\n\\begin{bmatrix}\n b_{11} & b_{12} & \\cdots & b_{1n} \\\\\n b_{21} & b_{22} & \\cdots & b_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n b_{m1} & b_{m2} & \\cdots & b_{mn} \\\\\n\\end{bmatrix} \\\\\n& = \\begin{bmatrix}\n a_{11} + b_{11} & a_{12} + b_{12} & \\cdots & a_{1n} + b_{1n} \\\\\n a_{21} + b_{21} & a_{22} + b_{22} & \\cdots & a_{2n} + b_{2n} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} + b_{m1} & a_{m2} + b_{m2} & \\cdots & a_{mn} + b_{mn} \\\\\n\\end{bmatrix} \\\\\n\n\\end{align}" }, { "math_id": 20, "text": "\n \\begin{bmatrix}\n 1 & 3 \\\\\n 1 & 0 \\\\\n 1 & 2\n \\end{bmatrix}\n+\n \\begin{bmatrix}\n 0 & 0 \\\\\n 7 & 5 \\\\\n 2 & 1\n \\end{bmatrix}\n=\n \\begin{bmatrix}\n 1+0 & 3+0 \\\\\n 1+7 & 0+5 \\\\\n 1+2 & 2+1\n \\end{bmatrix}\n=\n \\begin{bmatrix}\n 1 & 3 \\\\\n 8 & 5 \\\\\n 3 & 3\n \\end{bmatrix}\n" }, { "math_id": 21, "text": "e^{a+b} = e^a e^b." }, { "math_id": 22, "text": "a + \\max(b,c) = \\max(a+b,a+c)." }, { "math_id": 23, "text": "\\log(a+b) \\approx \\max(\\log a, \\log b)," }, { "math_id": 24, "text": "\\max(a,b) = \\lim_{h\\to 0}h\\log(e^{a/h}+e^{b/h})." } ]
https://en.wikipedia.org/wiki?curid=61338
61340435
Stochastic transitivity
Stochastic transitivity models are stochastic versions of the transitivity property of binary relations studied in mathematics. Several models of stochastic transitivity exist and have been used to describe the probabilities involved in experiments of paired comparisons, specifically in scenarios where transitivity is expected but the empirical observations of the binary relation are probabilistic. For example, players' skills in a sport might be expected to be transitive, i.e. "if player A is better than B and B is better than C, then player A must be better than C"; however, in any given match, a weaker player might still end up winning with a positive probability. Tightly matched players might have a higher chance of observing this inversion, while players with large differences in their skills might see such inversions only seldom. Stochastic transitivity models formalize such relations between the probabilities (e.g. of an outcome of a match) and the underlying transitive relation (e.g. the skills of the players). A binary relation formula_0 on a set formula_1 is called transitive, in the standard "non-stochastic" sense, if formula_2 and formula_3 implies formula_4 for all members formula_5 of formula_1. "Stochastic" versions of transitivity include: weak stochastic transitivity, which requires that formula_6 and formula_7 imply formula_8 for all formula_9; strong stochastic transitivity, which requires that formula_6 and formula_7 imply formula_10 for all formula_9; and linear stochastic transitivity, which requires that formula_11 for all formula_12, where formula_13 is a comparison function and formula_14 is a merit function. A toy example. The marble game - Assume two kids, Billy and Gabriela, collect marbles. Billy collects blue marbles and Gabriela green marbles. When they get together they play a game where they mix all their marbles in a bag and sample one randomly. If the sampled marble is green, then Gabriela wins and if it is blue then Billy wins. If formula_15 is the number of blue marbles and formula_16 is the number of green marbles in the bag, then the probability formula_17 of Billy winning against Gabriela is formula_18. In this example, the marble game satisfies linear stochastic transitivity, where the comparison function formula_13 is given by formula_19 and the merit function formula_14 is given by formula_20, where formula_21 is the number of marbles of the player. This game happens to be an example of a Bradley–Terry model. Connections between models. Positive Results: Negative Results: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
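The marble game admits a short numerical check. The Python sketch below assumes the logistic comparison function and logarithmic merit function given by formula_19 and formula_20, and verifies that the linear-stochastic-transitivity prediction coincides with the direct probability formula_18 of drawing a blue marble; the marble counts are illustrative.

```python
import math

def comparison(x):
    """Logistic comparison function F(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def merit(marbles):
    """Merit function mu(M) = ln(M)."""
    return math.log(marbles)

def p_win(blue, green):
    """P(Billy beats Gabriela) under linear stochastic transitivity."""
    return comparison(merit(blue) - merit(green))

B, G = 3, 5                 # illustrative marble counts
print(p_win(B, G))          # 0.375
print(B / (B + G))          # 0.375 -- same as drawing a marble at random
```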
[ { "math_id": 0, "text": "\\succsim" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "a \\succsim b" }, { "math_id": 3, "text": "b \\succsim c" }, { "math_id": 4, "text": "a \\succsim c" }, { "math_id": 5, "text": "a,b,c" }, { "math_id": 6, "text": "\\mathbb{P}(a\\succsim b)\\geq \\tfrac{1}{2}" }, { "math_id": 7, "text": "\\mathbb{P}(b\\succsim c)\\geq \\tfrac{1}{2}" }, { "math_id": 8, "text": "\\mathbb{P}(a\\succsim c)\\geq \\tfrac{1}{2}" }, { "math_id": 9, "text": "a,b,c \\in \\mathcal{A}" }, { "math_id": 10, "text": "\\mathbb{P}(a\\succsim c)\\geq \\max \\{\\mathbb{P}(a\\succsim b),\\mathbb{P}(b\\succsim c)\\}" }, { "math_id": 11, "text": "\\mathbb{P}(a\\succsim b) = F(\\mu(a) - \\mu(b))" }, { "math_id": 12, "text": "a,b \\in \\mathcal{A}" }, { "math_id": 13, "text": "F:\\mathbb{R} \\to [0,1]" }, { "math_id": 14, "text": "\\mu: \\mathcal{A}\\to \\mathbb{R}" }, { "math_id": 15, "text": "B" }, { "math_id": 16, "text": "G" }, { "math_id": 17, "text": "\\mathbb{P}(\\text{Billy} \\succsim \\text{Gabriela})" }, { "math_id": 18, "text": "\\mathbb{P}(\\text{Billy} \\succsim \\text{Gabriela}) = \\frac{B}{B+G} = \\frac{e^{\\ln(B)}}{e^{\\ln(B)}+e^{\\ln(G)}} = \\frac{1}{1+e^{\\ln(G)-\\ln(B)}}" }, { "math_id": 19, "text": "F(x) = \\frac{1}{1+e^{-x\n}}" }, { "math_id": 20, "text": "\\mu(M) = \\ln(M)" }, { "math_id": 21, "text": "M" }, { "math_id": 22, "text": "\\implies" }, { "math_id": 23, "text": "F(x)" }, { "math_id": 24, "text": "G(x)" }, { "math_id": 25, "text": "F(x) = G(\\kappa x)" }, { "math_id": 26, "text": "\\kappa \\geq 0." } ]
https://en.wikipedia.org/wiki?curid=61340435
61341798
Exact diagonalization
Numerical technique for solving quantum Hamiltonians. Exact diagonalization (ED) is a numerical technique used in physics to determine the eigenstates and energy eigenvalues of a quantum Hamiltonian. In this technique, a Hamiltonian for a discrete, finite system is expressed in matrix form and diagonalized using a computer. Exact diagonalization is only feasible for systems with a few tens of particles, due to the exponential growth of the Hilbert space dimension with the size of the quantum system. It is frequently employed to study lattice models, including the Hubbard model, Ising model, Heisenberg model, "t"-"J" model, and SYK model. Expectation values from exact diagonalization. After determining the eigenstates formula_0 and energies formula_1 of a given Hamiltonian, exact diagonalization can be used to obtain expectation values of observables. For example, if formula_2 is an observable, its thermal expectation value is formula_3 where formula_4 is the partition function. If the observable can be written down in the initial basis for the problem, then this sum can be evaluated after transforming to the basis of eigenstates. Green's functions may be evaluated similarly. For example, the retarded Green's function formula_5 can be written formula_6 Exact diagonalization can also be used to determine the time evolution of a system after a quench. Suppose the system has been prepared in an initial state formula_7, and then for time formula_8 evolves under a new Hamiltonian, formula_9. The state at time formula_10 is formula_11 Memory requirements. The dimension of the Hilbert space describing a quantum system scales exponentially with system size. For example, consider a system of formula_12 spins localized on fixed lattice sites. The dimension of the on-site basis is 2, because the state of each spin can be described as a superposition of spin-up and spin-down, denoted formula_13 and formula_14. The full system has dimension formula_15, and the Hamiltonian represented as a matrix has size formula_16. This implies that computation time and memory requirements scale very unfavorably in exact diagonalization. In practice, the memory requirements can be reduced by taking advantage of symmetry of the problem, imposing conservation laws, working with sparse matrices, or using other techniques. Comparison with other techniques. Exact diagonalization is useful for extracting exact information about finite systems. However, often small systems are studied to gain insight into infinite lattice systems. If the diagonalized system is too small, its properties will not reflect the properties of the system in the thermodynamic limit, and the simulation is said to suffer from finite size effects. Unlike some other exact theory techniques, such as Auxiliary-field Monte Carlo, exact diagonalization obtains Green's functions directly in real time, as opposed to imaginary time. Unlike in these other techniques, exact diagonalization results do not need to be numerically analytically continued. This is an advantage, because numerical analytic continuation is an ill-posed and difficult optimization problem. Implementations. Numerous software packages implementing exact diagonalization of quantum Hamiltonians exist. These include QuSpin, ALPS, DoQo, EdLib, edrixs, and many others. Generalizations. Exact diagonalization results from many small clusters can be combined to obtain more accurate information about systems in the thermodynamic limit using the numerical linked cluster expansion. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
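A minimal sketch of the procedure described above, assuming an open spin-1/2 Heisenberg chain as the lattice model (any of the models listed would serve). It builds the dense Hamiltonian from Pauli matrices, diagonalizes it with numpy.linalg.eigh, and evaluates a thermal expectation value from the eigen-decomposition as in the formula for the thermal average above. The chain length and inverse temperature are illustrative, and a production code would exploit symmetries and sparse storage as noted in the article.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1): S = sigma / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Embed a single-site operator at site i of an N-site chain."""
    mats = [I2] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_chain(N, J=1.0):
    """Dense Hamiltonian of an open spin-1/2 Heisenberg chain (2^N x 2^N)."""
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N - 1):
        for s in (sx, sy, sz):
            H += J * site_op(s, i, N) @ site_op(s, i + 1, N)
    return H

N, beta = 8, 2.0                       # 8 spins -> a 256 x 256 matrix
H = heisenberg_chain(N)
energies, states = np.linalg.eigh(H)   # exact diagonalization

# Thermal expectation value of S^z on the middle site
O = site_op(sz, N // 2, N)
weights = np.exp(-beta * (energies - energies.min()))   # shifted for stability
Z = weights.sum()
O_diag = np.einsum('in,ij,jn->n', states.conj(), O, states).real  # <n|O|n>
print("<S^z_mid> =", (weights * O_diag).sum() / Z)
```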
[ { "math_id": 0, "text": "|n\\rangle" }, { "math_id": 1, "text": "\\epsilon_n" }, { "math_id": 2, "text": "\\mathcal{O}" }, { "math_id": 3, "text": "\\langle \\mathcal{O}\\rangle = \\frac{1}{Z} \\sum_n e^{-\\beta \\epsilon_n} \\langle n | \\mathcal{O} | n \\rangle, " }, { "math_id": 4, "text": "Z = \\sum_n e^{-\\beta \\epsilon_n}" }, { "math_id": 5, "text": " G^R(t) = -i \\theta(t) \\langle [A(t), B(0)] \\rangle " }, { "math_id": 6, "text": " G^R(t) = -\\frac{i \\theta(t)}{Z} \\sum_{n,m} \\left(e^{-\\beta \\epsilon_n} - e^{-\\beta \\epsilon_m} \\right) \\langle n | A(0) | m \\rangle \\langle m | B(0) | n \\rangle e^{-i(\\epsilon_m - \\epsilon_n)t/\\hbar}. " }, { "math_id": 7, "text": "| \\psi \\rangle" }, { "math_id": 8, "text": "t>0" }, { "math_id": 9, "text": "\\mathcal{H}" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "| \\psi(t) \\rangle = \\sum_n e^{-i\\epsilon_n t/\\hbar} \\langle n | \\psi(0) \\rangle | n \\rangle. " }, { "math_id": 12, "text": "N" }, { "math_id": 13, "text": "\\left|\\uparrow \\right\\rangle" }, { "math_id": 14, "text": "\\left|\\downarrow \\right\\rangle" }, { "math_id": 15, "text": "2^N" }, { "math_id": 16, "text": "2^N \\times 2^N" } ]
https://en.wikipedia.org/wiki?curid=61341798
6134187
History of mathematical notation
Origin and evolution of the symbols used to write equations and formulas The history of mathematical notation includes the commencement, progress, and cultural diffusion of mathematical symbols and the conflict of the methods of notation confronted in a notation's move to popularity or inconspicuousness. Mathematical notation comprises the symbols used to write mathematical equations and formulas. Notation generally implies a set of well-defined representations of quantities and symbols operators. The history includes Hindu–Arabic numerals, letters from the Roman, Greek, Hebrew, and German alphabets, and a host of symbols invented by mathematicians over the past several centuries. The development of mathematical notation can be divided in stages: The area of study known as the history of mathematics is primarily an investigation into the origin of discoveries in mathematics and the focus here, the investigation into the mathematical methods and notation of the past. Rhetorical stage. Although the history commences with that of the Ionian schools, there is no doubt that those Ancient Greeks who paid attention to it were largely indebted to the previous investigations of the Ancient Egyptians and Ancient Phoenicians. Numerical notation's distinctive feature, i.e. symbols having local as well as intrinsic values (arithmetic), implies a state of civilization at the period of its invention. Our knowledge of the mathematical attainments of these early peoples, to which this section is devoted, is imperfect and the following brief notes be regarded as a summary of the conclusions which seem most probable, and the history of mathematics begins with the symbolic sections. Many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. For example, geometry has its origins in the calculation of distances and areas in the real world; algebra started with methods of solving problems in arithmetic. There can be no doubt that most early peoples which have left records knew something of numeration and mechanics and that a few were also acquainted with the elements of land-surveying. In particular, the Egyptians paid attention to geometry and numbers, and the Phoenicians to practical arithmetic, book-keeping, navigation, and land-surveying. The results attained by these people seem to have been accessible, under certain conditions, to travelers. It is probable that the knowledge of the Egyptians and Phoenicians was largely the result of observation and measurement, and represented the accumulated experience of many ages. Beginning of notation. Written mathematics began with numbers expressed as tally marks, with each tally representing a single unit. The numerical symbols consisted probably of strokes or notches cut in wood or stone, and intelligible alike to all nations. For example, one notch in a bone represented one animal, or person, or anything else. The peoples with whom the Greeks of Asia Minor (amongst whom notation in western history begins) were likely to have come into frequent contact were those inhabiting the eastern littoral of the Mediterranean: and Greek tradition uniformly assigned the special development of geometry to the Egyptians, and that of the science of numbers either to the Egyptians or to the Phoenicians. The Ancient Egyptians had a symbolic notation which was the numeration by Hieroglyphics. 
The Egyptian mathematics had a symbol for one, ten, one hundred, one thousand, ten thousand, one hundred thousand, and one million. Smaller digits were placed on the left of the number, as they are in Hindu–Arabic numerals. Later, the Egyptians used hieratic instead of hieroglyphic script to show numbers. Hieratic was more like cursive and replaced several groups of symbols with individual ones. For example, the four vertical lines used to represent four were replaced by a single horizontal line. This is found in the Rhind Mathematical Papyrus (c. 2000–1800 BC) and the Moscow Mathematical Papyrus (c. 1890 BC). The system the Egyptians used was discovered and modified by many other civilizations in the Mediterranean. The Egyptians also had symbols for basic operations: legs going forward represented addition, and legs walking backward to represent subtraction. The Mesopotamians had symbols for each power of ten. Later, they wrote their numbers in almost exactly the same way done in modern times. Instead of having symbols for each power of ten, they would just put the coefficient of that number. Each digit was separated by only a space, but by the time of Alexander the Great, they had created a symbol that represented zero and was a placeholder. The Mesopotamians also used a sexagesimal system, that is base sixty. It is this system that is used in modern times when measuring time and angles. Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s. Written in Cuneiform script, tablets were inscribed whilst the clay was moist, and baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians and the system of metrology from 3000 BC. From around 2500 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period. The majority of Mesopotamian clay tablets date from 1800 to 1600 BC, and cover topics which include fractions, algebra, quadratic and cubic equations, and the calculation of regular, reciprocal and pairs. The tablets also include multiplication tables and methods for solving linear and quadratic equations. The Babylonian tablet YBC 7289 gives an approximation of √2 that is accurate to an equivalent of six decimal places. Babylonian mathematics were written using a sexagesimal (base-60) numeral system. From this derives the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of minutes and seconds of arc to denote fractions of a degree. Babylonian advances in mathematics were facilitated by the fact that 60 has many divisors: the reciprocal of any integer which is a multiple of divisors of 60 has a finite expansion in base 60. (In decimal arithmetic, only reciprocals of multiples of 2 and 5 have finite decimal expansions.) Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values, much as in the decimal system. They lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context. Rhetorical algebra was first developed by the ancient Babylonians and remained dominant up to the 16th century. In this system, equations are written in full sentences. 
For example, the rhetorical form of formula_0 is "The thing plus one equals two" or possibly "The thing plus 1 equals 2". Syncopated stage. The history of mathematics cannot with certainty be traced back to any school or period before that of the Ionian Greeks. Still, the subsequent history may be divided into periods, the distinctions between which are tolerably well-marked. Greek mathematics, which originated with the study of geometry, tended to be deductive and scientific from its commencement. Since the fourth century AD, Pythagoras has commonly been given credit for discovering the Pythagorean theorem, a theorem in geometry that states that in a right-angled triangle the area of the square on the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares of the other two sides. The ancient mathematical texts are available with the previously mentioned Ancient Egyptians notation and with Plimpton 322, a Babylonian mathematics around 1900 BC. The study of mathematics as a subject in its own right begins in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek "μάθημα" ("mathema"), meaning "subject of instruction". Plato's influence has been especially strong in mathematics and the sciences. He helped to distinguish between pure and applied mathematics by widening the gap between "arithmetic", now called number theory and "logistic", now called arithmetic. Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics. Aristotle is credited with what later would be called the law of excluded middle. Abstract or pure mathematics deals with concepts like magnitude and quantity without regard to any practical application or situation, and includes arithmetic and geometry. In contrast, in mixed or applied mathematics, mathematical properties and relationships are applied to real-world objects to model laws of physics, for example in hydrostatics, optics, and navigation. Archimedes is generally considered to be the greatest mathematician of antiquity and one of the greatest of all time. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He also defined the spiral bearing his name, formulae for the volumes of surfaces of revolution and an ingenious system for expressing very large numbers. In the historical development of geometry, the steps in the abstraction of geometry were made by the ancient Greeks. Euclid's Elements being the earliest extant documentation of the axioms of plane geometry— though Proclus tells of an earlier axiomatisation by Hippocrates of Chios. Euclid's "Elements" (c. 300 BC) is one of the oldest extant Greek mathematical treatises and consisted of 13 books written in Alexandria; collecting theorems proven by other mathematicians, supplemented by some original work. The document is a successful collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. Euclid's first theorem is a lemma that possesses properties of prime numbers. The influential thirteen books cover Euclidean geometry, geometric algebra, and the ancient Greek version of algebraic systems and elementary number theory. 
It was ubiquitous in the Quadrivium and is instrumental in the development of logic, mathematics, and science. Autolycus' "On the Moving Sphere" is another ancient mathematical manuscript of the time. The next phase of notation for algebra was syncopated algebra, in which some symbolism is used, but which does not contain all of the characteristics of symbolic algebra. For instance, there may be a restriction that subtraction may be used only once within one side of an equation, which is not the case with symbolic algebra. Syncopated algebraic expression first appeared in a serious of books called "Arithmetica", by Diophantus of Alexandria (3rd century AD, many lost), followed by Brahmagupta's "Brahma Sphuta Siddhanta" (7th century). Acrophonic and Milesian numeration. The Greeks employed Attic numeration, which was based on the system of the Egyptians and was later adapted and used by the Romans. Greek numerals one through four were vertical lines, as in the hieroglyphics. The symbol for five was the Greek letter Π (pi), which is the letter of the Greek word for five, "pente". Numbers six through nine were "pente" with vertical lines next to it. Ten was represented by the letter (Δ) of the word for ten, "deka", one hundred by the letter from the word for hundred, etc. The Ionian numeration used their entire alphabet including three archaic letters. The numeral notation of the Greeks, though far less convenient than that now in use, was formed on a perfectly regular and scientific plan, and could be used with tolerable effect as an instrument of calculation, to which purpose the Roman system was totally inapplicable. The Greeks divided the twenty-four letters of their alphabet into three classes, and, by adding another symbol to each class, they had characters to represent the units, tens, and hundreds. (Jean Baptiste Joseph Delambre's Astronomie Ancienne, t. ii.) This system appeared in the third century BC, before the letters digamma (Ϝ), koppa (Ϟ), and sampi (Ϡ) became obsolete. When lowercase letters became differentiated from upper case letters, the lower case letters were used as the symbols for notation. Multiples of one thousand were written as the nine numbers with a stroke in front of them: thus one thousand was ",α", two-thousand was ",β", etc. M (for μύριοι, as in "myriad") was used to multiply numbers by ten thousand. For example, the number 88,888,888 would be written as M,ηωπη*ηωπη. Greek mathematical reasoning was almost entirely geometric (albeit often used to reason about non-geometric subjects such as number theory), and hence the Greeks had no interest in algebraic symbols. The great exception was Diophantus of Alexandria, the great algebraist. His "Arithmetica" was one of the texts to use symbols in equations. It was not completely symbolic, but was much more so than previous books. An unknown number was called s. The square of s was formula_1; the cube was formula_2; the fourth power was formula_3; and the fifth power was formula_4. So for example, the expression: formula_5 would be written as: SS2 C3 x5 M S4 u6 Chinese mathematical notation. The Chinese used numerals that look much like the tally system. Numbers one through four were horizontal lines. Five was an X between two horizontal lines; it looked almost exactly the same as the Roman numeral for ten. Nowadays, the huama system is only used for displaying prices in Chinese markets or on traditional handwritten invoices. 
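The alphabetic (Milesian) scheme just described can be written out algorithmically. The Python sketch below is an illustration only: it uses the stigma form ϛ for six in place of the archaic digamma, writes the thousands stroke as a plain comma, and covers numbers up to 9999; it reproduces the ηωπη block quoted above for 8,888.

```python
# Ionian (Milesian) alphabetic numerals: units / tens / hundreds letters,
# with a leading mark (written "," here) for multiples of one thousand.
UNITS    = ["", "α", "β", "γ", "δ", "ε", "ϛ", "ζ", "η", "θ"]
TENS     = ["", "ι", "κ", "λ", "μ", "ν", "ξ", "ο", "π", "ϟ"]
HUNDREDS = ["", "ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω", "ϡ"]

def greek_numeral(n):
    """Render 1 <= n <= 9999 in Ionian alphabetic notation."""
    if not 1 <= n <= 9999:
        raise ValueError("the basic scheme shown here covers 1-9999")
    thousands, rest = divmod(n, 1000)
    hundreds, rest = divmod(rest, 100)
    tens, units = divmod(rest, 10)
    prefix = "," + UNITS[thousands] if thousands else ""
    return prefix + HUNDREDS[hundreds] + TENS[tens] + UNITS[units]

print(greek_numeral(1999))   # ,αϡϟθ
print(greek_numeral(8888))   # ,ηωπη -- the block repeated in 88,888,888 above
```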
In the history of the Chinese, there were those who were familiar with the sciences of arithmetic, geometry, mechanics, optics, navigation, and astronomy. Mathematics in China emerged independently by the 11th century BC. The Chinese were acquainted with astronomical cycles, geometrical implements like the rule, compass, and plumb-bob, and machines like the wheel and axle. The Chinese independently developed very large and negative numbers, decimals, a place value decimal system, a binary system, algebra, geometry, and trigonometry. Chinese mathematics made early contributions, including a place value system. The geometrical theorem known to the ancient Chinese were acquainted was applicable in certain cases, namely the ratio of sides. It is that geometrical theorems which can be demonstrated in the quasi-experimental way of superposition were also known to them. In arithmetic their knowledge seems to have been confined to the art of calculation by means of the suanpan, and the power of expressing the results in writing. Our knowledge of the early attainments of the Chinese, slight though it is, is more complete than in the case of most of their contemporaries. It is thus instructive, and serves to illustrate the fact, that it can be known a nation may possess considerable skill in the applied arts with but our knowledge of the later mathematics on which those arts are founded can be scarce. Knowledge of Chinese mathematics before 254 BC is somewhat fragmentary, and even after this date the manuscript traditions are obscure. Dates centuries before the classical period are generally considered conjectural by Chinese scholars unless accompanied by verified archaeological evidence. As in other early societies the focus was on astronomy in order to perfect the agricultural calendar, and other practical tasks, and not on establishing formal systems. The Chinese Board of Mathematics duties were confined to the annual preparation of an almanac, the dates and predictions in which it regulated. Ancient Chinese mathematicians did not develop an axiomatic approach, but made advances in algorithm development and algebra. The achievement of Chinese algebra reached its zenith in the 13th century, when Zhu Shijie invented method of four unknowns. As a result of obvious linguistic and geographic barriers, as well as content, Chinese mathematics and that of the mathematics of the ancient Mediterranean world are presumed to have developed more or less independently up to the time when "The Nine Chapters on the Mathematical Art" reached its final form, while the "Book on Numbers and Computation" and "Huainanzi" are roughly contemporary with classical Greek mathematics. Some exchange of ideas across Asia through known cultural exchanges from at least Roman times is likely. Frequently, elements of the mathematics of early societies correspond to rudimentary results found later in branches of modern mathematics such as geometry or number theory. For example, the Pythagorean theorem has been attested in the "Zhoubi Suanjing". Knowledge of Pascal's triangle has also been shown to have existed in China centuries before Blaise Pascal, articulated by mathematicians like the polymath Shen Kuo (1031–1095). The state of trigonometry advanced during the Song dynasty (960–1279), where Chinese mathematicians began to express greater emphasis for the need of spherical trigonometry in calendrical science and astronomical calculations. Shen Kuo used trigonometric functions to solve mathematical problems of chords and arcs. 
Sal Restivo writes that Shen's work in the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316). As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy. The mathematical science of the Chinese would incorporate the work and teaching of Arab missionaries with knowledge of spherical trigonometry who had come to China in the course of the thirteenth century. Indian and Arabic numerals and notation. Although the origin of our present system of numerical notation is ancient, there is no doubt that it was in use among the Hindus over two thousand years ago. The algebraic notation of the Indian mathematician, Brahmagupta, was syncopated. Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend (the number to be subtracted), and division by placing the divisor below the dividend, similar to our notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms. The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, likely evolved over the course of the first millennium AD in India and was transmitted to the west via Islamic mathematics. Despite their name, Arabic numerals have roots in India. The reason for this misnomer is Europeans saw the numerals used in an Arabic book, "Concerning the Hindu Art of Reckoning", by Muhammed ibn-Musa al-Khwarizmi. Al-Khwārizmī wrote several important books on the Hindu–Arabic numerals and on methods for solving equations. His book "On the Calculation with Hindu Numerals", written about 825, along with the work of Al-Kindi, were instrumental in spreading Indian mathematics and Indian numerals to the West. Al-Khwarizmi did not claim the numerals as Arabic, but over several Latin translations, the fact that the numerals were Indian in origin was lost. The word "algorithm" is derived from the Latinization of Al-Khwārizmī's name, Algoritmi, and the word "algebra" from the title of one of his works, "Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa'l-muqābala" ("The Compendious Book on Calculation by Completion and Balancing"). Islamic mathematics developed and expanded the mathematics known to Central Asian civilizations, including the addition of the decimal point notation to the Arabic numerals. The modern Arabic numeral symbols used around the world first appeared in Islamic North Africa in the 10th century. A distinctive Western Arabic variant of the Eastern Arabic numerals began to emerge around the 10th century in the Maghreb and Al-Andalus (sometimes called "ghubar" numerals, though the term is not always accepted), which are the direct ancestor of the modern Arabic numerals used throughout the world. Many Greek and Arabic texts on mathematics were then translated into Latin, which led to further development of mathematics in medieval Europe. In the 12th century, scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's (translated into Latin by Robert of Chester) and the complete text of Euclid's "Elements" (translated in various versions by Adelard of Bath, Herman of Carinthia, and Gerard of Cremona). One of the European books that advocated using the numerals was "Liber Abaci", by Leonardo of Pisa, better known as Fibonacci. 
"Liber Abaci" is better known for the mathematical problem Fibonacci wrote in it about a population of rabbits. The growth of the population ended up being a Fibonacci sequence, where a term is the sum of the two preceding terms. Symbolic stage. Early arithmetic and multiplication. The transition to symbolic algebra, where only symbols are used, can first be seen in the work of Ibn al-Banna' al-Marrakushi (1256–1321) and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1482). Al-Qalasādī was the last major medieval Arab algebraist, who improved on the algebraic notation earlier used in the Maghreb by Ibn al-Banna. In contrast to the syncopated notations of their predecessors, Diophantus and Brahmagupta, which lacked symbols for mathematical operations, al-Qalasadi's algebraic notation was the first to have symbols for these functions and was thus "the first steps toward the introduction of algebraic symbolism." He represented mathematical symbols using characters from the Arabic alphabet. The 14th century saw the development of new mathematical concepts to investigate a wide range of problems. The two widely used arithmetic symbols are addition and subtraction, + and −. The plus sign was used starting around 1351 by Nicole Oresme and publicized in his 1360 in his work "Algorismus proportionum". It is thought an abbreviation for "et", meaning "and" in Latin, in much the same way the ampersand sign also began as "et". Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of the distance covered by a body undergoing uniformly accelerated motion, asserting that the area under the line depicting the constant acceleration and represented the total distance traveled. The minus sign was used in 1489 by Johannes Widmann in "Mercantile Arithmetic" or "Behende und hüpsche Rechenung auff allen Kauffmanschafft,". Widmann used the minus symbol with the plus symbol, to indicate deficit and surplus, respectively. In "Summa de arithmetica, geometria, proportioni e proportionalità", Luca Pacioli used symbols for plus and minus symbols and contained algebra, though much of the work originated from Piero della Francesca whom he appropriated and purloined. The radical symbol (√), for square root was introduced by Christoph Rudolff in the early 1500s.Michael Stifel's important work "Arithmetica integra" contained important innovations in mathematical notation. In 1556, Niccolò Tartaglia used parentheses for precedence grouping. In 1557 Robert Recorde published "The Whetstone of Witte" which introduced the equal sign (=), as well as plus and minus signs for the English reader. In 1564, Gerolamo Cardano analyzed games of chance beginning the early stages of probability theory. In 1572 Rafael Bombelli published his "L'Algebra" in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. Simon Stevin's book "De Thiende" ('the art of tenths'), published in Dutch in 1585, contained a systematic treatment of decimal notation, which influenced all later work on the real number system. The new algebra (1591) of François Viète introduced the modern notational manipulation of algebraic expressions. John Napier is best known as the inventor of logarithms (published in "Description of the Marvelous Canon of Logarithms") and made common the use of the decimal point in arithmetic and mathematics. 
After Napier, Edmund Gunter created the logarithmic scales (lines, or rules) upon which slide rules are based; it was William Oughtred who used two such scales sliding by one another to perform direct multiplication and division, and he is credited as the inventor of the slide rule in 1622. In 1631 Oughtred introduced the multiplication sign (×), his proportionality sign (∷) and the abbreviations "sin" and "cos" for the sine and cosine functions. Albert Girard also used the abbreviations 'sin', 'cos' and 'tan' for the trigonometric functions in his treatise. René Descartes is credited as the father of analytical geometry, the bridge between algebra and geometry, crucial to the discovery of infinitesimal calculus and analysis. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry, bringing the notation of equations to geometry. Blaise Pascal influenced mathematics throughout his life. His "Traité du triangle arithmétique" ("Treatise on the Arithmetical Triangle") of 1653 described a convenient tabular presentation for binomial coefficients, now called Pascal's triangle. John Wallis introduced the infinity symbol (∞) and also used this notation for infinitesimals, for example, 1/∞. Johann Rahn introduced the division sign (÷, an obelus variant repurposed) and the therefore sign in 1659. William Jones used π in "Synopsis palmariorum mathesios" in 1706 because it is the initial letter of the Greek word Perimetron (περιμετρον), which means perimeter. This usage was popularized in 1737 by Euler. In 1734, Pierre Bouguer used a double horizontal bar below the inequality sign. Derivatives notation: Leibniz and Newton. The study of linear algebra emerged from the study of determinants, which were used to solve systems of linear equations. Calculus had two main systems of notation, each created by one of its creators: that developed by Isaac Newton and that developed by Gottfried Leibniz. Leibniz's is the notation used most often today. Newton's was simply a dot or dash placed above the function. For example, the derivative of the function "x" would be written as formula_6. The second derivative of x would be written as formula_7. In modern usage, this notation generally denotes derivatives of physical quantities with respect to time, and is used frequently in the science of mechanics. Leibniz, on the other hand, used the letter d as a prefix to indicate differentiation, and introduced the notation representing derivatives as if they were a special type of fraction. For example, the derivative of the function x with respect to the variable t in Leibniz's notation would be written as formula_8. This notation makes explicit the variable with respect to which the derivative of the function is taken. Leibniz also created the integral symbol (∫). For example: formula_9. When finding areas under curves, integration is often illustrated by dividing the area into infinitely many tall, thin rectangles, whose areas are added. Thus, the integral symbol is an elongated S, representing the Latin word "summa", meaning "sum". High division operators and functions. Letters of the alphabet at this time were to be used as symbols of quantity; and although much diversity existed with respect to the choice of letters, there were to be several universally recognized rules in the following history.
Here thus in the history of equations the first letters of the alphabet were indicatively known as coefficients, the last letters the unknown terms (an "incerti ordinis"). In algebraic geometry, again, a similar rule was to be observed, the last letters of the alphabet there denoting the variable or current coordinates. Certain letters, such as formula_10, formula_11, etc., were by universal consent appropriated as symbols of the frequently occurring numbers 3.14159 ..., and 2.7182818 ..., and other uses were to be avoided as much as possible. Letters, too, were to be employed as symbols of operation, and with them other previously mentioned arbitrary operation characters. The letters formula_12, elongated formula_13 were to be appropriated as operative symbols in the differential calculus and integral calculus, formula_14 and Σ in the calculus of differences. In functional notation, a letter, as a symbol of operation, is combined with another which is regarded as a symbol of quantity. Thus, formula_15 denotes the mathematical result of the performance of the operation formula_16 upon the subject formula_17. If upon this result the same operation were repeated, the new result would be expressed by formula_18, or more concisely by formula_19, and so on. The quantity formula_17 itself regarded as the result of the same operation formula_16 upon some other function; the proper symbol for which is, by analogy, formula_20. Thus formula_16 and formula_21 are symbols of inverse operations, the former cancelling the effect of the latter on the subject formula_17. formula_15 and formula_20 in a similar manner are termed inverse functions. Beginning in 1718, Thomas Twinin used the division slash (solidus), deriving it from the earlier Arabic horizontal fraction bar. Pierre-Simon, marquis de Laplace developed the widely used Laplacian differential operator (e.g. formula_22). In 1750, Gabriel Cramer developed "Cramer's Rule" for solving linear systems. Euler and prime notations. Leonhard Euler was one of the most prolific mathematicians in history, and also a prolific inventor of canonical notation. His contributions include his use of "e" to represent the base of natural logarithms. It is not known exactly why formula_11 was chosen, but it was probably because the four letters of the alphabet were already commonly used to represent variables and other constants. Euler used formula_10 to represent pi consistently. The use of formula_10 was suggested by William Jones, who used it as shorthand for perimeter. Euler used formula_23 to represent the square root of negative one (formula_24) although he earlier used it as an "infinite number." Today, the symbol created by John Wallis, formula_25, is used for infinity, as in e.g. formula_26. For summation, Euler used an enlarged form of the upright capital Greek letter Sigma (Σ), known as capital-sigma notation. This is defined as: formula_27 where, "i" represents the "index of summation"; "ai" is an indexed variable representing each successive term in the series; "m" is the "lower bound of summation", and "n" is the "upper bound of summation". The "i = m" under the summation symbol means that the index "i" starts out equal to "m". The index, "i", is incremented by 1 for each successive term, stopping when "i" = "n". For functions, Euler used the notation formula_15 to represent a function of formula_17. The mathematician William Emerson would develop the proportionality sign (∝). 
Proportionality is the ratio of one quantity to another, and the sign is used to indicate the ratio between two variables is constant. Much later in the abstract expressions of the value of various proportional phenomena, the parts-per notation would become useful as a set of pseudo units to describe small values of miscellaneous dimensionless quantities. Marquis de Condorcet, in 1768, advanced the partial differential sign, known as the "curly d" or "Jacobi's delta". The prime symbol for derivatives was also made by Joseph-Louis Lagrange. &lt;templatestyles src="Rquote/styles.css"/&gt;{ class="rquote pullquote floatright" role="presentation" style="display:table; border-collapse:collapse; border-style:none; float:right; margin:0.5em 0.75em; width:33%; " Gauss, Hamilton, and Matrix notations. At the turn of the 19th century, Carl Friedrich Gauss developed the identity sign for congruence relation and, in Quadratic reciprocity, the integral part. Gauss contributed functions of complex variables, in geometry, and on the convergence of series. He gave the satisfactory proofs of the fundamental theorem of algebra and of the quadratic reciprocity law. Gauss developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy. He would also develop the product sign. After the 1800s, Christian Kramp would promote factorial notation during his research in generalized factorial function which applied to non-integers. Joseph Diaz Gergonne introduced the set inclusion signs (⊆, ⊇), later redeveloped by Ernst Schröder. Peter Gustav Lejeune Dirichlet developed Dirichlet "L"-functions to give the proof of Dirichlet's theorem on arithmetic progressions and began analytic number theory. In 1829. Carl Gustav Jacob Jacobi published Fundamenta nova theoriae functionum ellipticarum with his elliptic theta functions. Matrix notation would be more fully developed by Arthur Cayley in his three papers, on subjects which had been suggested by reading the Mécanique analytique of Lagrange and some of the works of Laplace. Cayley defined matrix multiplication and matrix inverses. Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants". William Rowan Hamilton would introduce the nabla symbol (formula_28 or, later called "del", ∇) for vector differentials. This was previously used by Hamilton as a general-purpose operator sign. formula_29, formula_30, and formula_31 are used for the Hamiltonian operator in quantum mechanics and ℋ for the Hamiltonian function in classical Hamiltonian mechanics. In mathematics, Hamilton is perhaps best known as the inventor of quaternion notation and biquaternions. Maxwell, Clifford, and Ricci notations. In 1864 James Clerk Maxwell reduced all of the then current knowledge of electromagnetism into a linked set of differential equations with 20 equations in 20 variables, contained in "A Dynamical Theory of the Electromagnetic Field". (See Maxwell's equations.) The method of calculation which it is necessary to employ was given by Lagrange, and afterwards developed, with some modifications, by Hamilton's equations. It is usually referred to as Hamilton's principle; when the equations in the original form are used they are known as Lagrange's equations. 
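As a brief illustration of the matrix operations whose notation is discussed above (Cayley's single-letter matrices, their products, inverses, and connection to determinants, and Gauss's elimination for linear systems), here is a small numpy sketch; the particular matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [4.0, 2.0]])

print(A @ B)                 # matrix product, treating A and B as single objects
print(np.linalg.inv(A))      # matrix inverse; exists because det(A) != 0
print(np.linalg.det(A))      # the determinant associated with the matrix

# Solving A x = b, the task Gauss treated by elimination
b = np.array([3.0, 5.0])
print(np.linalg.solve(A, b))
```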
In 1871 Richard Dedekind called a set of real or complex numbers which is closed under the four arithmetic operations a field. In 1873 Maxwell presented "A Treatise on Electricity and Magnetism". In 1878, William Kingdon Clifford published his Elements of Dynamic. Clifford developed split-biquaternions (e.g. formula_32) which he called "algebraic motors". Clifford obviated quaternion study by separating the dot product and cross product of two vectors from the complete quaternion notation. The common vector notations are used when working with vectors which are spatial or more abstract members of vector spaces, while angle notation (or phasor notation) is a notation used in electronics. Lord Kelvin's aetheric atom theory (1860s) led Peter Guthrie Tait, in 1885, to publish a topological table of knots with up to ten crossings known as the Tait conjectures. Tensor calculus was developed by Gregorio Ricci-Curbastro between 1887 and 1896, presented in 1892 under the title "absolute differential calculus", and the contemporary usage of "tensor" was stated by Woldemar Voigt in 1898. In 1895, Henri Poincaré published "Analysis Situs". In 1897, Charles Proteus Steinmetz would publish , with the assistance of Ernst J. Berg. From formula mathematics to tensors. In 1895 Giuseppe Peano issued his "Formulario mathematico", an effort to digest mathematics into terse text based on special symbols. He would provide a definition of a vector space and linear map. He would also introduce the intersection sign, the union sign, the membership sign (is an element of), and existential quantifier (there exists). Peano would pass to Bertrand Russell his work in 1900 at a Paris conference; it so impressed Russell that Russell too was taken with the drive to render mathematics more concisely. The result was Principia Mathematica written with Alfred North Whitehead. This treatise marks a watershed in modern literature where symbol became dominant. Peano's "Formulario Mathematico", though less popular than Russell's work, continued through five editions. The fifth appeared in 1908 and included 4200 formulas and theorems. Ricci-Curbastro and Tullio Levi-Civita popularized the tensor index notation around 1900. Mathematical logic and abstraction. Georg Cantor, inventor of set theory introduced Aleph numbers, so named because they use the aleph symbol (א) with natural-number subscripts for cardinality in infinite sets. For the ordinals he employed the Greek letter ω (omega). This notation is still in use today in ordinal notation of a finite sequence of symbols from a finite alphabet which names an ordinal number according to some scheme which gives meaning to the language. After the turn of the 20th century, Josiah Willard Gibbs would in physical chemistry introduce middle dot for dot product and the multiplication sign for cross products. He would also supply notation for the scalar and vector products, which was introduced in "Vector Analysis". Bertrand Russell would shortly afterward introduce logical disjunction (OR) in 1906. Gerhard Kowalewski and Cuthbert Edmund Cullis would successively introduce matrices notation, parenthetical matrix and box matrix notation respectively. Albert Einstein, in 1916, introduced Einstein notation, which summed over a set of indexed terms in a formula, thus exerting notational brevity. For example, the indices range over set {1, 2, 3}: formula_33 is reduced by convention to: formula_34 Upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. 
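A small numerical illustration of the summation convention just described, assuming a three-component example of the form reduced from formula_33 to formula_34; numpy's einsum takes the index string literally, so the repeated index performs the implied sum.

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])   # coefficients c_i, index i ranging over {1, 2, 3}
x = np.array([4.0, 5.0, 6.0])   # coordinates x^i

# Written-out sum: y = c_1 x^1 + c_2 x^2 + c_3 x^3
y_explicit = sum(c[i] * x[i] for i in range(3))

# Einstein convention: the repeated index i implies the summation, y = c_i x^i
y_einsum = np.einsum('i,i->', c, x)

print(y_explicit, y_einsum)      # 32.0 32.0
```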
Arnold Sommerfeld would create the contour integral sign in 1917. Also in 1917, Dimitry Mirimanoff proposes axiom of regularity. In 1919, Theodor Kaluza would solve general relativity equations using five dimensions, the results would have electromagnetic equations emerge. This would be published in 1921 in "Zum Unitätsproblem der Physik". In 1922, Abraham Fraenkel and Thoralf Skolem independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Also in 1922, Zermelo–Fraenkel set theory was developed. In 1923, Steinmetz would publish "Four Lectures on Relativity and Space". Around 1924, Jan Arnoldus Schouten would develop the modern notation and formalism for the Ricci calculus framework during the absolute differential calculus applications to general relativity and differential geometry in the early twentieth century. Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields. In 1925, Enrico Fermi would describe a system comprising many identical particles that obey the Pauli exclusion principle, afterwards developing a diffusion equation (Fermi age equation). In 1926, Oskar Klein would develop the Kaluza–Klein theory. In 1928, Emil Artin abstracted ring theory with Artinian rings. In 1933, Andrey Kolmogorov introduces the "Kolmogorov axioms". In 1937, Bruno de Finetti deduced the "operational subjective" concept. Mathematical symbolism. Mathematical abstraction began as a process of extracting the underlying essence of a mathematical concept, removing any dependence on real world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matching among other abstract descriptions of equivalent phenomena. Two abstract areas of modern mathematics are category theory and model theory. Bertrand Russell, said, "Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say". Though, one can substituted mathematics for real world objects, and wander off through equation after equation, and can build a concept structure which has no relation to reality. Some of the introduced mathematical logic notation during this time included the set of symbols used in Boolean algebra. This was created by George Boole in 1854. Boole himself did not see logic as a branch of mathematics, but it has come to be encompassed anyway. Symbols found in Boolean algebra include formula_35 (AND), formula_36 (OR), and formula_37 ("not"). With these symbols, and letters to represent different truth values, one can make logical statements such as formula_38, that is "("a" is true OR "a" is "not" true) is true", meaning it is true that "a" is either true or not true (i.e. false). Boolean algebra has many practical uses as it is, but it also was the start of what would be a large set of symbols to be used in logic. Most of these symbols can be found in propositional calculus, a formal system described as formula_39. formula_40 is the set of elements, such as the "a" in the example with Boolean algebra above. formula_41 is the set that contains the subsets that contain operations, such as formula_36 or formula_35. formula_42 contains the inference rules, which are the rules dictating how inferences may be logically made, and formula_43 contains the axioms. 
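The Boolean statement formula_38 discussed above can be checked mechanically. The short Python sketch below tabulates "a" OR (NOT "a") over both truth values and contrasts AND with OR on two variables; it illustrates only the meaning of the symbols, not Boole's own calculus.

```python
import itertools

# Truth table for (a OR (NOT a)), the statement written with the OR and NOT symbols above
for a in (False, True):
    print(a, a or (not a))       # True in every row: the statement is a tautology

# A two-variable comparison of AND with OR
for a, b in itertools.product((False, True), repeat=2):
    print(a, b, a and b, a or b)
```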
Predicate logic, originally called "predicate calculus", expands on propositional logic by the introduction of variables, usually denoted by "x", "y", "z", or other lowercase letters, and by sentences containing variables, called predicates. These are usually denoted by an uppercase letter followed by a list of variables, such as P("x") or Q("y","z"). Predicate logic uses special symbols for quantifiers: ∃ for "there exists" and ∀ for "for all". Gödel incompleteness notation. &lt;templatestyles src="Rquote/styles.css"/&gt;{ class="rquote pullquote floatright" role="presentation" style="display:table; border-collapse:collapse; border-style:none; float:right; margin:0.5em 0.75em; width:33%; " While proving his incompleteness theorems, Kurt Gödel created an alternative to the symbols normally used in logic. He used Gödel numbers, which were numbers that represented operations with set numbers, and variables with the prime numbers greater than 10. With Gödel numbers, logic statements can be broken down into a number sequence. Gödel then took this one step farther, taking the "n" prime numbers and putting them to the power of the numbers in the sequence. These numbers were then multiplied together to get the final product, giving every logic statement its own number. For example, take the statement "There exists a number "x" such that it is not "y"". Using the symbols of propositional calculus, this would become formula_44. If the Gödel numbers replace the symbols, it becomes: formula_45. There are ten numbers, so the ten prime numbers are found and these are: formula_46. Then, the Gödel numbers are made the powers of the respective primes and multiplied, giving: formula_47. The resulting number is approximately formula_48. Contemporary notation and topics. Early 20th-century notation. Abstraction of notation is an ongoing process and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. Various set notations would be developed for fundamental object sets. Around 1924, David Hilbert and Richard Courant published "Methods of mathematical physics. Partial differential equations". In 1926, Oskar Klein and Walter Gordon proposed the Klein–Gordon equation to describe relativistic particles: formula_49 The first formulation of a quantum theory describing radiation and matter interaction is due to Paul Adrien Maurice Dirac, who, during 1920, was first able to compute the coefficient of spontaneous emission of an atom. In 1928, the relativistic Dirac equation was formulated by Dirac to explain the behavior of the relativistically moving electron. The Dirac equation in the form originally proposed by Dirac is: formula_50 where, ψ ψ(x, "t") is the wave function for the electron, x and "t" are the space and time coordinates, "m" is the rest mass of the electron, "p" is the momentum, understood to be the momentum operator in the Schrödinger theory, "c" is the speed of light, and "ħ" "h"/2"π" is the reduced Planck constant. Dirac described the quantification of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. 
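The encoding described above is easy to reproduce. The Python sketch below computes a Gödel number as the product of consecutive primes raised to the symbol codes; the particular ten codes are hypothetical placeholders for the quoted statement, since the original symbol-to-number table is not reproduced here.

```python
from sympy import prime

def godel_number(symbol_codes):
    """Encode a sequence of symbol codes as prod_k p_k ** code_k,
    where p_k is the k-th prime (2, 3, 5, ...)."""
    g = 1
    for k, code in enumerate(symbol_codes, start=1):
        g *= prime(k) ** code
    return g

# Hypothetical codes for the ten symbols of "there exists a number x such that it is not y"
codes = [8, 4, 11, 9, 8, 11, 5, 1, 13, 9]
g = godel_number(codes)
print(g)
print(len(str(g)), "decimal digits")
```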
In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, and Werner Heisenberg, and an elegant formulation of quantum electrodynamics due to Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. In 1931, Alexandru Proca developed the Proca equation (Euler–Lagrange equation) for the vector meson theory of nuclear forces and the relativistic quantum field equations. John Archibald Wheeler in 1937 develops S-matrix. Studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. In the 1930s, the double-struck capital Z for integer number sets was created by Edmund Landau. Nicolas Bourbaki created the double-struck capital Q for rational number sets. In 1935, Gerhard Gentzen made universal quantifiers. André Weil and Nicolas Bourbaki would develop the empty set sign in 1939. That same year, Nathan Jacobson would coin the double-struck capital C for complex number sets. Around the 1930s, Voigt notation (so named to honor Voigt's 1898 work) would be developed for multilinear algebra as a way to represent a symmetric tensor by reducing its order. Schönflies notation became one of two conventions used to describe point groups (the other being Hermann–Mauguin notation). Also in this time, van der Waerden notation became popular for the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. Arend Heyting would introduce Heyting algebra and Heyting arithmetic. The arrow, e.g., →, was developed for function notation in 1936 by Øystein Ore to denote images of specific elements and to denote Galois connections. Later, in 1940, it took its present form, e.g., "f: X → Y", through the work of Witold Hurewicz. Werner Heisenberg, in 1941, proposed the S-matrix theory of particle interactions. Bra–ket notation (Dirac notation) is a standard notation for describing quantum states, composed of angle brackets and vertical bars. It can also be used to denote abstract vectors and linear functionals. It is so called because the inner product (or dot product on a complex vector space) of two states is denoted by a ⟨bra|ket⟩: formula_51. The notation was introduced in 1939 by Paul Dirac, though the notation has precursors in Grassmann's use of the notation ["φ"|"ψ"] for his inner products nearly 100 years previously. Bra–ket notation is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics—including a large portion of modern physics—is usually explained with the help of bra–ket notation. The notation establishes an encoded abstract representation-independence, producing a versatile specific representation (e.g., "x", or "p", or eigenfunction base) without much ado, or excessive reliance on, the nature of the linear spaces involved. The overlap expression ⟨"φ"|"ψ"⟩ is typically interpreted as the probability amplitude for the state "ψ" to collapse into the state "ϕ". 
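The overlap ⟨"φ"|"ψ"⟩ has a direct numerical counterpart for finite-dimensional state vectors. Here is a minimal numpy sketch with two illustrative normalized states: numpy.vdot conjugates its first argument, matching the bra, and the squared modulus gives the probability amplitude interpretation mentioned above.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Two states |phi> and |psi> in a two-dimensional Hilbert space (illustrative)
phi = normalize(np.array([1.0 + 0.0j, 1.0j]))
psi = normalize(np.array([1.0 + 0.0j, 0.0j]))

braket = np.vdot(phi, psi)       # <phi|psi>: conjugate phi, then dot with psi
probability = abs(braket) ** 2   # |<phi|psi>|^2

print(braket, probability)
```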
The Feynman slash notation (Dirac slash notation) was developed by Richard Feynman for the study of Dirac fields in quantum field theory. Geoffrey Chew, along with others, would promote matrix notation for the strong interaction in particle physics, and the associated bootstrap principle, in 1960. In the 1960s, set-builder notation was developed for describing a set by stating the properties that its members must satisfy. Also in the 1960s, tensors were abstracted within category theory by means of the concept of monoidal category. Later, multi-index notation would eliminate conventional notions used in multivariable calculus, partial differential equations, and the theory of distributions, by abstracting the concept of an integer index to an ordered tuple of indices. Modern mathematical notation. In the modern mathematics of special relativity, electromagnetism and wave theory, the d'Alembert operator (formula_52) is the Laplace operator of Minkowski space. The Levi-Civita symbol, also known as the permutation symbol, is used in tensor calculus. Feynman diagrams are used in particle physics, being equivalent to the operator-based approach of Sin-Itiro Tomonaga and Julian Schwinger. The orbifold notation system, invented by William Thurston, has been developed for representing types of symmetry groups in two-dimensional spaces of constant curvature. The tetrad formalism (tetrad index notation) would be introduced as an approach to general relativity that replaces the choice of a coordinate basis by the less restrictive choice of a local basis for the tangent bundle (a locally defined set of four linearly independent vector fields called a tetrad). In the 1990s, Roger Penrose would propose Penrose graphical notation (tensor diagram notation) as a (usually handwritten) visual depiction of multilinear functions or tensors. Penrose would also introduce abstract index notation. His usage of the Einstein summation convention was in order to offset the inconvenience in describing contractions and covariant differentiation in modern abstract tensor notation, while maintaining explicit covariance of the expressions involved. John Conway would further various notations, including the Conway chained arrow notation, the Conway notation of knot theory, and the Conway polyhedron notation. The Coxeter notation system classifies symmetry groups, describing the angles between the fundamental reflections of a Coxeter group. It uses a bracketed notation, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter, and Norman Johnson defined it more comprehensively. Combinatorial LCF notation, devised by Joshua Lederberg and extended by Harold Scott MacDonald Coxeter and Robert Frucht, was developed for the representation of cubic graphs that are Hamiltonian. The cycle notation is the convention for writing down a permutation in terms of its constituent cycles. This is also called circular notation, and the permutation is called a "cyclic" or "circular" permutation. Computers and markup notation. In 1931, IBM produced the IBM 601 Multiplying Punch, an electromechanical machine that could read two numbers, up to 8 digits long, from a card and punch their product onto the same card. In 1934, Wallace Eckert used a rigged IBM 601 Multiplying Punch to automate the integration of differential equations. In 1962, Kenneth E. Iverson developed an integral part notation, which became APL; it came to be known as Iverson notation.
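The cycle notation described a little earlier is easy to compute mechanically; as a brief illustration, the following Python sketch (the function name and the example permutation are illustrative, not from the source) decomposes a permutation given in one-line form into its cycles.

```python
def to_cycles(perm):
    """Decompose a permutation of 0..n-1, given in one-line form, into disjoint cycles."""
    seen = set()
    cycles = []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle = []
        i = start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        if len(cycle) > 1:  # fixed points are conventionally omitted
            cycles.append(tuple(cycle))
    return cycles

# The permutation sending 0->1, 1->4, 2->3, 3->2, 4->0, written in one-line form.
print(to_cycles([1, 4, 3, 2, 0]))  # [(0, 1, 4), (2, 3)]
```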
In the 1970s within computer architecture, Quote notation was developed as a number system for representing rational numbers. Also in this decade, the Z notation was developed; like the APL language long before it, it uses many non-ASCII symbols, and its specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX. There are presently various C mathematical functions (math.h) and numerical libraries. These are libraries used in software development for performing numerical calculations. Such calculations can be handled by symbolic execution: analyzing a program to determine what inputs cause each part of it to execute. Mathematica and SymPy are examples of computational software programs based on symbolic mathematics. References and citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
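As a small illustration of such symbolic software, the following Python sketch uses the SymPy library (assuming it is installed; the expressions chosen are arbitrary examples) to solve an equation and to differentiate and integrate a polynomial symbolically rather than numerically.

```python
import sympy as sp

x = sp.symbols('x')

# Solve x + 1 = 2 symbolically.
print(sp.solve(sp.Eq(x + 1, 2), x))        # [1]

# Differentiate and integrate a polynomial symbolically.
p = 2*x**4 + 3*x**3 - 4*x**2 + 5*x - 6
print(sp.diff(p, x))                        # 8*x**3 + 9*x**2 - 8*x + 5
print(sp.integrate(p, (x, -1, 1)))          # -208/15
```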
[ { "math_id": 0, "text": "x + 1 = 2" }, { "math_id": 1, "text": "\\Delta^y" }, { "math_id": 2, "text": "K^y" }, { "math_id": 3, "text": "\\Delta^y\\Delta" }, { "math_id": 4, "text": "\\Delta K^y" }, { "math_id": 5, "text": "2x^4+3x^3-4x^2+5x-6" }, { "math_id": 6, "text": "\\dot{x}" }, { "math_id": 7, "text": "\\ddot{x}" }, { "math_id": 8, "text": "{ dx \\over dt }" }, { "math_id": 9, "text": "\\int_{-N}^{N} f(x)\\, dx" }, { "math_id": 10, "text": "\\pi" }, { "math_id": 11, "text": "e" }, { "math_id": 12, "text": "d" }, { "math_id": 13, "text": "S" }, { "math_id": 14, "text": "\\Delta" }, { "math_id": 15, "text": "f(x)" }, { "math_id": 16, "text": "f" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "f[f(x)]" }, { "math_id": 19, "text": "f^2(x)" }, { "math_id": 20, "text": "f^{-1} (x)" }, { "math_id": 21, "text": "f^{-1}" }, { "math_id": 22, "text": "\\Delta f(p) " }, { "math_id": 23, "text": "i" }, { "math_id": 24, "text": "\\sqrt{-1}" }, { "math_id": 25, "text": "\\infty" }, { "math_id": 26, "text": "\\sum_{n=1}^\\infty\\frac{1}{n^2}" }, { "math_id": 27, "text": "\\sum_{i=m}^n a_i = a_m + a_{m+1} + a_{m+2} +\\cdots+ a_{n-1} + a_n. " }, { "math_id": 28, "text": " \\nabla" }, { "math_id": 29, "text": "\\hat{H}" }, { "math_id": 30, "text": "H" }, { "math_id": 31, "text": "\\check{H}" }, { "math_id": 32, "text": "q = w + xi + yj + zk " }, { "math_id": 33, "text": " y = \\sum_{i=1}^3 c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3 " }, { "math_id": 34, "text": " y = c_i x^i \\,." }, { "math_id": 35, "text": "\\land" }, { "math_id": 36, "text": "\\lor" }, { "math_id": 37, "text": "\\lnot" }, { "math_id": 38, "text": "a\\lor\\lnot a=1" }, { "math_id": 39, "text": "\\mathcal{L} = \\mathcal{L}\\ (\\Alpha,\\ \\Omega,\\ \\Zeta,\\ \\Iota)" }, { "math_id": 40, "text": "\\Alpha" }, { "math_id": 41, "text": "\\Omega" }, { "math_id": 42, "text": "\\Zeta" }, { "math_id": 43, "text": "\\Iota" }, { "math_id": 44, "text": "(\\exists x)(x=\\lnot y)" }, { "math_id": 45, "text": "\\{8, 4, 11, 9, 8, 11, 5, 1, 13, 9\\}" }, { "math_id": 46, "text": "\\{2, 3, 5, 7, 11, 13, 17, 19, 23, 29\\}" }, { "math_id": 47, "text": "2^8\\times3^4\\times5^{11}\\times7^9\\times11^8\\times13^{11}\\times17^5\\times19^1\\times23^{13}\\times29^9" }, { "math_id": 48, "text": "3.096262735\\times10^{78}" }, { "math_id": 49, "text": " \\frac {1}{c^2} \\frac{\\partial^2}{\\partial t^2} \\psi - \\nabla^2 \\psi + \\frac {m^2 c^2}{\\hbar^2} \\psi = 0. " }, { "math_id": 50, "text": "\\left(\\beta mc^2 + \\sum_{k = 1}^3 \\alpha_k p_k \\, c\\right) \\psi (\\mathbf{x},t) = i \\hbar \\frac{\\partial\\psi(\\mathbf{x},t) }{\\partial t} " }, { "math_id": 51, "text": "\\langle\\phi|\\psi\\rangle" }, { "math_id": 52, "text": "\\scriptstyle\\Box" } ]
https://en.wikipedia.org/wiki?curid=6134187
6134192
Graeffe's method
Algorithm for finding polynomial roots In mathematics, Graeffe's method or Dandelin–Lobachevsky–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the method. The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, only working on the coefficients of the polynomial. Finally, Viète's formulas are used in order to approximate the roots. Dandelin–Graeffe iteration. Let "p"("x") be a polynomial of degree n formula_0 Then formula_1 Let "q"("x") be the polynomial which has the squares formula_2 as its roots, formula_3 Then we can write: formula_4 "q"("x") can now be computed by algebraic operations on the coefficients of the polynomial "p"("x") alone. Let: formula_5 then the coefficients are related by formula_6 Graeffe observed that if one separates "p"("x") into its odd and even parts: formula_7 then one obtains a simplified algebraic expression for "q"("x"): formula_8 This expression involves the squaring of two polynomials of only half the degree, and is therefore used in most implementations of the method. Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating "k" times gives a polynomial of degree n: formula_9 with roots formula_10 If the magnitudes of the roots of the original polynomial are separated by some factor formula_11, that is, formula_12, then the roots of the "k"-th iterate are separated by a fast growing factor formula_13. Classical Graeffe's method. Next the Viète relations are used formula_14 If the roots formula_15 are sufficiently separated, say by a factor formula_11, formula_16, then the iterated powers formula_17 of the roots are separated by the factor formula_18, which quickly becomes very large. The coefficients of the iterated polynomial can then be approximated by their leading term, formula_19 formula_20 and so on, implying formula_21 Finally, logarithms are used in order to find the absolute values of the roots of the original polynomial. These magnitudes alone are already useful to generate meaningful starting points for other root-finding methods. To also obtain the angles of these roots, a multitude of methods has been proposed, the simplest being to successively compute the square root of a (possibly complex) root of formula_22, "m" ranging from "k" to 1, and testing which of the two sign variants is a root of formula_23. Before continuing to the roots of formula_24, it might be necessary to numerically improve the accuracy of the root approximations for formula_23, for instance by Newton's method. Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and coefficients, and roots with higher multiplicity. For instance, it has been observed that for a root formula_25 with multiplicity "d", the fractions formula_26 tend to formula_27 for formula_28. This makes it possible to estimate the multiplicity structure of the set of roots. From a numerical point of view, this method is problematic since the coefficients of the iterated polynomials very quickly span many orders of magnitude, which implies serious numerical errors. A second, more minor concern is that many different polynomials lead to the same Graeffe iterates.
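To make the classical method concrete, the following Python sketch performs repeated Graeffe steps directly on the coefficients, using the coefficient recurrence given above, and then reads off approximate root magnitudes from successive coefficient ratios. It is a minimal illustration (the example polynomial and the function names are chosen for this sketch), not a robust root finder, and it ignores the numerical overflow issue mentioned above.

```python
def graeffe_step(a):
    """One Dandelin-Graeffe iteration on a monic polynomial.

    `a` lists the coefficients [a_1, ..., a_n] of
    p(x) = x^n + a_1*x^(n-1) + ... + a_n; the result describes the monic
    polynomial whose roots are the squares of the roots of p.
    """
    n = len(a)
    c = [1.0] + list(a)            # c[k] = a_k, with a_0 = 1

    def coeff(k):
        return c[k] if 0 <= k <= n else 0.0

    b = []
    for k in range(1, n + 1):
        s = (-1) ** k * coeff(k) ** 2
        s += 2 * sum((-1) ** j * coeff(j) * coeff(2 * k - j) for j in range(k))
        b.append(s)
    return b

# p(x) = (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8
a = [-7.0, 14.0, -8.0]
for step in range(1, 6):
    a = graeffe_step(a)
    # Vieta: after k steps |a_1| ~ y_1, |a_2/a_1| ~ y_2, ... with y_i = x_i^(2^k),
    # so the 2^k-th roots of these ratios approximate the root magnitudes.
    ratios = [abs(a[0])] + [abs(a[i] / a[i - 1]) for i in range(1, len(a))]
    print(step, [round(r ** (1.0 / 2 ** step), 4) for r in ratios])
# The printed estimates approach the true root magnitudes 4, 2 and 1.
```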
Tangential Graeffe method. This method replaces the numbers by truncated power series of degree 1, also known as dual numbers. Symbolically, this is achieved by introducing an "algebraic infinitesimal" formula_29 with the defining property formula_30. Then the polynomial formula_31 has roots formula_32, with powers formula_33 Thus the value of formula_34 is easily obtained as the fraction formula_35 This kind of computation with infinitesimals is easy to implement, analogously to computation with complex numbers. If one assumes complex coordinates or an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration. Renormalization. Every polynomial can be scaled in domain and range such that in the resulting polynomial the first and the last coefficient have size one. If the size of the inner coefficients is bounded by "M", then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by formula_36. After "k" stages one gets the bound formula_37 for the inner coefficients. To overcome the limit posed by the growth of the powers, Malajovich and Zubelli propose to represent coefficients and intermediate results in the "k"th stage of the algorithm by a scaled polar form formula_38 where formula_39 is a complex number of unit length and formula_40 is a positive real. Splitting off the power formula_41 in the exponent reduces the absolute value of "c" to the corresponding dyadic root. Since this preserves the magnitude of the (representation of the) initial coefficients, this process was named renormalization. Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization formula_42, where formula_43 is chosen as the larger of both numbers, that is, formula_44. Thus formula_45 and formula_46 with formula_47 The coefficients formula_48 of the final stage "k" of the Graeffe iteration, for some reasonably large value of "k", are represented by pairs formula_49, formula_50. By identifying the corners of the convex envelope of the point set formula_51, one can determine the multiplicities of the roots of the polynomial. Combining this renormalization with the tangent iteration, one can extract the roots of the original polynomial directly from the coefficients at the corners of the envelope.
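The algebraic infinitesimal used by the tangential variant above can be modelled directly with dual numbers. The following Python sketch (class and variable names are illustrative) implements just enough arithmetic to show that powers of a shifted root automatically carry the derivative term in their infinitesimal part, as in the formula for the iterated powers.

```python
class Dual:
    """Truncated power series a + b*eps with eps**2 = 0 (a dual number)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

x = Dual(3.0, -1.0)        # represents the shifted root x_m - eps with x_m = 3
print(x * x)               # 9.0 + -6.0*eps : the eps-part carries -2*x_m
print(x * x * x * x)       # 81.0 + -108.0*eps : i.e. x_m^4 - eps*4*x_m^3
```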
[ { "math_id": 0, "text": "p(x) = (x-x_1)\\cdots(x-x_n)." }, { "math_id": 1, "text": "p(-x) = (-1)^n (x+x_1)\\cdots(x+x_n)." }, { "math_id": 2, "text": "x_1^2, \\cdots, x_n^2" }, { "math_id": 3, "text": "q(x)= \\left (x-x_1^2 \\right )\\cdots \\left (x-x_n^2 \\right ). " }, { "math_id": 4, "text": "\\begin{align}\nq(x^2) & = \\left (x^2-x_1^2 \\right )\\cdots \\left (x^2-x_n^2 \\right ) \\\\\n& = (x-x_1)(x+x_1) \\cdots (x-x_n) (x+x_n) \\\\\n& = \\left \\{(x - x_1) \\cdots (x - x_n) \\right \\} \\times \\left \\{(x + x_1) \\cdots (x + x_n) \\right \\} \\\\\n& = p(x) \\times \\left \\{(-1)^n (-x - x_1) \\cdots (-x - x_n) \\right \\} \\\\\n& = p(x) \\times \\left \\{(-1)^n p(-x) \\right \\} \\\\\n& = (-1)^n p(x) p(-x)\n\\end{align}" }, { "math_id": 5, "text": "\\begin{align}\np(x) &= x^n+a_1x^{n-1}+\\cdots+a_{n-1}x+a_n \\\\\nq(x) &= x^n+b_1x^{n-1}+\\cdots+b_{n-1}x+b_n\n\\end{align}" }, { "math_id": 6, "text": "b_k=(-1)^k a_k^2 + 2\\sum_{j=0}^{k-1}(-1)^j\\,a_ja_{2k-j}, \\qquad a_0=b_0=1. " }, { "math_id": 7, "text": "p(x)=p_e \\left (x^2 \\right )+x p_o\\left (x^2 \\right )," }, { "math_id": 8, "text": "q(x)=(-1)^n \\left (p_e(x)^2-x p_o(x)^2 \\right )." }, { "math_id": 9, "text": "q^k(y) = y^n + {a^k}_1\\,y^{n-1} + \\cdots + {a^k}_{n-1}\\,y + {a^k}_n \\, " }, { "math_id": 10, "text": "y_1=x_1^{2^k},\\,y_2=x_2^{2^k},\\,\\dots,\\,y_n=x_n^{2^k}." }, { "math_id": 11, "text": "\\rho>1" }, { "math_id": 12, "text": "|x_k|\\ge\\rho |x_{k+1}|" }, { "math_id": 13, "text": "\\rho^{2^k}\\ge 1+2^k(\\rho-1)" }, { "math_id": 14, "text": "\\begin{align}\na^k_{\\;1} &= -(y_1+y_2+\\cdots+y_n)\\\\\na^k_{\\;2} &= y_1 y_2 + y_1 y_3+\\cdots+y_{n-1} y_n\\\\\n &\\;\\vdots\\\\\na^k_{\\;n} &= (-1)^n(y_1 y_2 \\cdots y_n).\n\\end{align}" }, { "math_id": 15, "text": "x_1,\\dots,x_n" }, { "math_id": 16, "text": "|x_m|\\ge \\rho|x_{m+1}|" }, { "math_id": 17, "text": "y_1,y_2,...,y_n" }, { "math_id": 18, "text": "\\rho^{2^k}" }, { "math_id": 19, "text": "a^k_{\\;1} \\approx -y_1" }, { "math_id": 20, "text": "a^k_{\\;2} \\approx y_1 y_2" }, { "math_id": 21, "text": "\n y_1\\approx -a^k_{\\;1},\\; \n y_2\\approx -a^k_{\\;2}/a^k_{\\;1},\n \\;\\dots\\;\n y_n\\approx -a^k_{\\;n}/a^k_{\\;n-1}.\n" }, { "math_id": 22, "text": "q^m(y)" }, { "math_id": 23, "text": "q^{m-1}(x)" }, { "math_id": 24, "text": "q^{m-2}(x)" }, { "math_id": 25, "text": "x_{\\ell+1}=x_{\\ell+2}=\\dots=x_{\\ell+d}" }, { "math_id": 26, "text": "\\left|\\frac{(a^{m-1}_{\\;\\ell+i})^2}{a^{m}_{\\;\\ell+i}}\\right|" }, { "math_id": 27, "text": "\\binom{d}{i}" }, { "math_id": 28, "text": "i=0,1,\\dots,d" }, { "math_id": 29, "text": "\\varepsilon" }, { "math_id": 30, "text": "\\varepsilon^2=0" }, { "math_id": 31, "text": "p(x+\\varepsilon)=p(x)+\\varepsilon\\,p'(x)" }, { "math_id": 32, "text": "x_m-\\varepsilon" }, { "math_id": 33, "text": "(x_m-\\varepsilon)^{2^k}=x_m^{2^k}-\\varepsilon\\,{2^k}\\,x_m^{2^k-1}=y_m+\\varepsilon\\,\\dot y_m." }, { "math_id": 34, "text": "x_m" }, { "math_id": 35, "text": "x_m=-\\tfrac{2^k\\,y_m}{\\dot y_m}." 
}, { "math_id": 36, "text": "nM^2" }, { "math_id": 37, "text": "n^{2^k-1}M^{2^k}" }, { "math_id": 38, "text": "c=\\alpha\\,e^{-2^k\\,r}," }, { "math_id": 39, "text": "\\alpha=\\frac{c}{|c|}" }, { "math_id": 40, "text": "r=-2^{-k}\\log|c|" }, { "math_id": 41, "text": "2^k" }, { "math_id": 42, "text": "c_3=c_1+c_2=|c_1|\\cdot\\left(\\alpha_1+\\alpha_2\\tfrac{|c_2|}{|c_1|}\\right)" }, { "math_id": 43, "text": "c_1" }, { "math_id": 44, "text": "r_1<r_2" }, { "math_id": 45, "text": "\\alpha_3=\\tfrac{s}{|s|}" }, { "math_id": 46, "text": "r_3=r_1+2^{-k}\\,\\log{|s|}" }, { "math_id": 47, "text": "s=\\alpha_1+\\alpha_2\\,e^{2^k(r_1-r_2)}." }, { "math_id": 48, "text": "a_0,a_1,\\dots,a_n" }, { "math_id": 49, "text": "(\\alpha_m,r_m)" }, { "math_id": 50, "text": "m=0,\\dots,n" }, { "math_id": 51, "text": "\\{(m,r_m):\\;m=0,\\dots,n\\}" } ]
https://en.wikipedia.org/wiki?curid=6134192
61343694
Santaló's formula
In differential geometry, Santaló's formula describes how to integrate a function on the unit sphere bundle of a Riemannian manifold by first integrating along every geodesic separately and then over the space of all geodesics. It is a standard tool in integral geometry and has applications in isoperimetric and rigidity results. The formula is named after Luis Santaló, who first proved the result in 1952. Formulation. Let formula_0 be a compact, oriented Riemannian manifold with boundary. Then for a function formula_1, Santaló's formula takes the form formula_2 where formula_3 is the geodesic flow, formula_4 is the exit time of the geodesic with initial conditions formula_5, formula_6 and formula_7 are the Riemannian volume forms with respect to the Sasaki metric on formula_8 and formula_9 respectively (formula_6 is also called the Liouville measure), formula_10 is the inward-pointing unit normal to formula_11, and formula_12 is the influx boundary, consisting of the unit vectors over formula_11 that point inwards. Validity. Under the assumptions that (1) formula_13 is non-trapping (i.e. formula_14 for all formula_5) and (2) formula_11 is strictly convex (i.e. its second fundamental form formula_15 is positive definite for every formula_16), Santaló's formula is valid for all formula_17. In this case it is equivalent to the following identity of measures: formula_18 where formula_19 and formula_20 is defined by formula_21. In particular this implies that the "geodesic X-ray transform" formula_22 extends to a bounded linear map formula_23, where formula_24 and thus there is the following formula_25-version of Santaló's formula: formula_26 If the non-trapping or the convexity condition from above fails, then there is a set formula_27 of positive measure, such that the geodesics emerging from formula_28 either fail to hit the boundary of formula_29 or hit it non-transversely. In this case Santaló's formula only remains true for functions with support disjoint from this exceptional set formula_28. Proof. The following proof is taken from [ Lemma 3.3], adapted to the (simpler) setting when conditions 1) and 2) from above are true. Santaló's formula follows from the following two ingredients, noting that formula_30 has measure zero: an integration by parts formula for the geodesic vector field formula_31, formula_32 and the construction of a resolvent for the transport equation formula_33, formula_34 For the integration by parts formula, recall that formula_31 leaves the Liouville measure formula_6 invariant and hence formula_35, where the divergence is taken with respect to the Sasaki metric formula_36. The result thus follows from the divergence theorem and the observation that formula_37, where formula_38 is the inward-pointing unit normal to formula_39. The resolvent is explicitly given by formula_40 and the mapping property formula_41 follows from the smoothness of formula_42, which is a consequence of the non-trapping and the convexity assumption.
[ { "math_id": 0, "text": "(M,\\partial M,g)" }, { "math_id": 1, "text": " f: SM \\rightarrow \\mathbb{C} " }, { "math_id": 2, "text": " \\int_{SM} f(x,v) \\, d\\mu(x,v) = \\int_{\\partial_+ SM} \\left[ \\int_0^{\\tau(x,v)} f(\\varphi_t(x,v)) \\, dt \\right] \\langle v, \\nu(x) \\rangle \\, d \\sigma(x,v)," }, { "math_id": 3, "text": " (\\varphi_t)_t " }, { "math_id": 4, "text": "\\tau(x,v) = \\sup\\{t\\ge 0: \\forall s\\in [0,t]:~ \\varphi_s(x,v)\\in SM \\} " }, { "math_id": 5, "text": " (x,v)\\in SM " }, { "math_id": 6, "text": " \\mu " }, { "math_id": 7, "text": " \\sigma " }, { "math_id": 8, "text": " SM " }, { "math_id": 9, "text": " \\partial S M " }, { "math_id": 10, "text": " \\nu " }, { "math_id": 11, "text": " \\partial M " }, { "math_id": 12, "text": " \\partial_+ SM := \\{(x,v) \\in SM: x \\in \\partial M, \\langle v,\\nu(x) \\rangle \\ge 0 \\}" }, { "math_id": 13, "text": "M" }, { "math_id": 14, "text": " \\tau(x,v) <\\infty " }, { "math_id": 15, "text": " II_{\\partial M}(x)" }, { "math_id": 16, "text": " x \\in \\partial M " }, { "math_id": 17, "text": "f\\in C^\\infty(M)" }, { "math_id": 18, "text": " \\Phi^*d \\mu (x,v,t) = \\langle \\nu(x),x\\rangle d \\sigma(x,v) d t, " }, { "math_id": 19, "text": " \\Omega=\\{(x,v,t): (x,v)\\in \\partial_+SM, t\\in (0,\\tau(x,v)) \\}" }, { "math_id": 20, "text": "\\Phi:\\Omega \\rightarrow SM" }, { "math_id": 21, "text": "\\Phi(x,v,t)=\\varphi_t(x,v)" }, { "math_id": 22, "text": " I f(x,v) = \\int_0^{\\tau(x,v)} f(\\varphi_t(x,v)) \\, dt " }, { "math_id": 23, "text": " I: L^1(SM, \\mu) \\rightarrow L^1(\\partial_+ SM, \\sigma_\\nu)" }, { "math_id": 24, "text": " d\\sigma_\\nu(x,v) = \\langle v, \\nu(x) \\rangle \\, d \\sigma(x,v) " }, { "math_id": 25, "text": "L^1" }, { "math_id": 26, "text": " \\int_{SM} f \\, d \\mu = \\int_{\\partial_+ SM} If ~ d \\sigma_\\nu \\quad \\text{for all } f \\in L^1(SM,\\mu). " }, { "math_id": 27, "text": "E\\subset SM" }, { "math_id": 28, "text": " E" }, { "math_id": 29, "text": " M " }, { "math_id": 30, "text": "\\partial_0SM=\\{(x,v):\\langle \\nu(x), v\\rangle =0 \\}" }, { "math_id": 31, "text": " X " }, { "math_id": 32, "text": " \\int_{SM} Xu ~ d \\mu = - \\int_{\\partial_+ SM} u ~ d \\sigma_\\nu \\quad \\text{for all } u \\in C^\\infty(SM) " }, { "math_id": 33, "text": "X u = - f" }, { "math_id": 34, "text": " \\exists R: C_c^\\infty( SM\\smallsetminus\\partial_0 SM) \\rightarrow C^\\infty(SM): XRf = - f \\text{ and } Rf\\vert_{\\partial_+ SM} = If \\quad \\text{for all } f\\in C_c^\\infty( SM\\smallsetminus\\partial_0 SM) " }, { "math_id": 35, "text": " Xu = \\operatorname{div}_G (uX) " }, { "math_id": 36, "text": " G " }, { "math_id": 37, "text": " \\langle X(x,v), N(x,v)\\rangle_G = \\langle v, \\nu(x)\\rangle_g " }, { "math_id": 38, "text": " N " }, { "math_id": 39, "text": "\\partial SM" }, { "math_id": 40, "text": " Rf(x,v) = \\int_0^{\\tau(x,v)} f(\\varphi_t(x,v)) \\, dt " }, { "math_id": 41, "text": " C_c^\\infty( SM\\smallsetminus\\partial_0 SM) \\rightarrow C^\\infty(SM) " }, { "math_id": 42, "text": " \\tau: SM\\smallsetminus\\partial_0 SM \\rightarrow [0,\\infty)" } ]
https://en.wikipedia.org/wiki?curid=61343694
61346
Commutative ring
Algebraic structure In mathematics, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of ring properties that are not specific to commutative rings. This distinction results from the high number of fundamental properties of commutative rings that do not extend to noncommutative rings. Definition and first examples. Definition. A "ring" is a set formula_0 equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called "addition" and "multiplication" and commonly denoted by "formula_1" and "formula_2"; e.g. formula_3 and formula_4. To form a ring these two operations have to satisfy a number of properties: the ring has to be an abelian group under addition as well as a monoid under multiplication, where multiplication distributes over addition; i.e., formula_5. The identity elements for addition and multiplication are denoted formula_6 and formula_7, respectively. If the multiplication is commutative, i.e. formula_8 then the ring formula_0 is called "commutative". In the remainder of this article, all rings will be commutative, unless explicitly stated otherwise. First examples. An important example, and in some sense crucial, is the ring of integers formula_9 with the two operations of addition and multiplication. As the multiplication of integers is a commutative operation, this is a commutative ring. It is usually denoted formula_9 as an abbreviation of the German word "Zahlen" (numbers). A field is a commutative ring where formula_10 and every non-zero element formula_11 is invertible; i.e., has a multiplicative inverse formula_12 such that formula_13. Therefore, by definition, any field is a commutative ring. The rational, real and complex numbers form fields. If formula_0 is a given commutative ring, then the set of all polynomials in the variable formula_14 whose coefficients are in formula_0 forms the polynomial ring, denoted formula_15. The same holds true for several variables. If formula_16 is some topological space, for example a subset of some formula_17, real- or complex-valued continuous functions on formula_16 form a commutative ring. The same is true for differentiable or holomorphic functions, when the two concepts are defined, such as for formula_16 a complex manifold. Divisibility. In contrast to fields, where every nonzero element is multiplicatively invertible, the concept of divisibility for rings is richer. An element formula_11 of ring formula_0 is called a unit if it possesses a multiplicative inverse. Another particular type of element is the zero divisors, i.e. an element formula_11 such that there exists a non-zero element formula_12 of the ring such that formula_18. If formula_0 possesses no non-zero zero divisors, it is called an integral domain (or domain). An element formula_11 satisfying formula_19 for some positive integer formula_20 is called nilpotent. Localizations. The "localization" of a ring is a process in which some elements are rendered invertible, i.e. multiplicative inverses are added to the ring. Concretely, if formula_21 is a multiplicatively closed subset of formula_0 (i.e. 
whenever formula_22 then so is formula_23) then the "localization" of formula_0 at formula_21, or "ring of fractions" with denominators in formula_21, usually denoted formula_24 consists of symbols &lt;templatestyles src="Block indent/styles.css"/&gt;formula_25 with formula_26 subject to certain rules that mimic the cancellation familiar from rational numbers. Indeed, in this language formula_27 is the localization of formula_9 at all nonzero integers. This construction works for any integral domain formula_0 instead of formula_9. The localization formula_28 is a field, called the quotient field of formula_0. Ideals and modules. Many of the following notions also exist for not necessarily commutative rings, but the definitions and properties are usually more complicated. For example, all ideals in a commutative ring are automatically two-sided, which simplifies the situation considerably. Modules. For a ring formula_0, an formula_0-"module" formula_29 is to formula_0 what a vector space is to a field. That is, elements in a module can be added; they can be multiplied by elements of formula_0 subject to the same axioms as for a vector space. The study of modules is significantly more involved than that of vector spaces, since there are modules that do not have any basis, that is, do not contain a spanning set whose elements are linearly independent. A module that has a basis is called a free module, and a submodule of a free module need not be free. A module of finite type is a module that has a finite spanning set. Modules of finite type play a fundamental role in the theory of commutative rings, similar to the role of the finite-dimensional vector spaces in linear algebra. In particular, Noetherian rings (see also "Noetherian rings", below) can be defined as the rings such that every submodule of a module of finite type is also of finite type. Ideals. "Ideals" of a ring formula_0 are the submodules of formula_0, i.e., the modules contained in formula_0. In more detail, an ideal formula_30 is a non-empty subset of formula_0 such that for all formula_31 in formula_0, formula_32 and formula_33 in formula_30, both formula_34 and formula_35 are in formula_30. For various applications, understanding the ideals of a ring is of particular importance, but often one proceeds by studying modules in general. Any ring has two ideals, namely the zero ideal formula_36 and formula_0, the whole ring. These two ideals are the only ones precisely if formula_0 is a field. Given any subset formula_37 of formula_0 (where formula_38 is some index set), the ideal "generated by" formula_39 is the smallest ideal that contains formula_39. Equivalently, it is given by finite linear combinations formula_40 Principal ideal domains. If formula_39 consists of a single element formula_31, the ideal generated by formula_39 consists of the multiples of formula_31, i.e., the elements of the form formula_41 for arbitrary elements formula_42. Such an ideal is called a principal ideal. If every ideal is a principal ideal, formula_0 is called a principal ideal ring; two important cases are formula_9 and formula_43, the polynomial ring over a field formula_44. These two are in addition domains, so they are called principal ideal domains. Unlike for general rings, for a principal ideal domain, the properties of individual elements are strongly tied to the properties of the ring as a whole.
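As a computational aside on principal ideals, in the ring of integers every ideal is principal and is generated by the greatest common divisor of any set of generators; the following Python sketch (function and variable names are illustrative) uses this to test membership in the ideal generated by 12 and 18.

```python
from math import gcd

def in_ideal(x, generators):
    """Check whether x lies in the ideal of the integers generated by the given numbers.

    In the integers the ideal (a_1, ..., a_n) equals (g) with g = gcd(a_1, ..., a_n),
    so membership reduces to a divisibility test.
    """
    g = 0
    for a in generators:
        g = gcd(g, a)
    return x % g == 0

generators = [12, 18]             # the ideal (12, 18) = (6)
print(in_ideal(30, generators))   # True:  30 = 1*12 + 1*18
print(in_ideal(8, generators))    # False: 8 is not a multiple of 6
```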
For example, any principal ideal domain formula_0 is a unique factorization domain (UFD), which means that any element can be written as a product of irreducible elements in a way that is unique up to reordering of the factors. Here, an element formula_11 in a domain is called irreducible if the only way of expressing it as a product formula_45 is by either formula_12 or formula_46 being a unit. An example, important in field theory, is given by the irreducible polynomials, i.e., the irreducible elements in formula_43, for a field formula_44. The fact that formula_9 is a UFD can be stated more elementarily by saying that any natural number can be uniquely decomposed as a product of powers of prime numbers. It is also known as the fundamental theorem of arithmetic. An element formula_11 is a prime element if whenever formula_11 divides a product formula_47, formula_11 divides formula_12 or formula_46. In a domain, being prime implies being irreducible. The converse is true in a unique factorization domain, but false in general. Factor ring. The definition of ideals is such that "dividing" formula_30 "out" gives another ring, the "factor ring" formula_48: it is the set of cosets of formula_30 together with the operations formula_49 and formula_50. For example, the ring formula_51 (also denoted formula_52), where formula_20 is an integer, is the ring of integers modulo formula_20. It is the basis of modular arithmetic. An ideal is "proper" if it is strictly smaller than the whole ring. A proper ideal that is not strictly contained in any other proper ideal is called maximal. An ideal formula_53 is maximal if and only if formula_54 is a field. Except for the zero ring, any ring (with identity) possesses at least one maximal ideal; this follows from Zorn's lemma. Noetherian rings. A ring is called "Noetherian" (in honor of Emmy Noether, who developed this concept) if every ascending chain of ideals formula_55 becomes stationary, i.e. becomes constant beyond some index formula_20. Equivalently, any ideal is generated by finitely many elements, or, yet equivalently, submodules of finitely generated modules are finitely generated. Being Noetherian is a highly important finiteness condition, and the condition is preserved under many operations that occur frequently in geometry. For example, if formula_0 is Noetherian, then so is the polynomial ring formula_56 (by Hilbert's basis theorem), any localization formula_24, and also any factor ring formula_48. Any non-Noetherian ring formula_0 is the union of its Noetherian subrings. This fact, known as Noetherian approximation, allows the extension of certain theorems to non-Noetherian rings. Artinian rings. A ring is called Artinian (after Emil Artin) if every descending chain of ideals formula_57 becomes stationary eventually. Despite the two conditions appearing symmetric, Noetherian rings are much more general than Artinian rings. For example, formula_9 is Noetherian, since every ideal can be generated by one element, but is not Artinian, as the chain formula_58 shows. In fact, by the Hopkins–Levitzki theorem, every Artinian ring is Noetherian. More precisely, Artinian rings can be characterized as the Noetherian rings whose Krull dimension is zero. Spectrum of a commutative ring. Prime ideals. As was mentioned above, formula_9 is a unique factorization domain. This is not true for more general rings, as algebraists realized in the 19th century.
For example, in formula_59 there are two genuinely distinct ways of writing 6 as a product: formula_60 Prime ideals, as opposed to prime elements, provide a way to circumvent this problem. A prime ideal is a proper (i.e., strictly contained in formula_0) ideal formula_61 such that, whenever the product formula_62 of any two ring elements formula_11 and formula_12 is in formula_63 at least one of the two elements is already in formula_64 (The opposite conclusion holds for any ideal, by definition.) Thus, if a prime ideal is principal, it is equivalently generated by a prime element. However, in rings such as formula_65 prime ideals need not be principal. This limits the usage of prime elements in ring theory. A cornerstone of algebraic number theory is, however, the fact that in any Dedekind ring (which includes formula_66 and more generally the ring of integers in a number field) any ideal (such as the one generated by 6) decomposes uniquely as a product of prime ideals. Any maximal ideal is a prime ideal or, more briefly, is prime. Moreover, an ideal formula_67 is prime if and only if the factor ring formula_68 is an integral domain. Proving that an ideal is prime, or equivalently that a ring has no zero-divisors, can be very difficult. Yet another way of expressing the same is to say that the complement formula_69 is multiplicatively closed. The localization formula_70 is important enough to have its own notation: formula_71. This ring has only one maximal ideal, namely formula_72. Such rings are called local. Spectrum. The "spectrum of a ring" formula_73, denoted by formula_74, is the set of all prime ideals of formula_73. It is equipped with a topology, the Zariski topology, which reflects the algebraic properties of formula_73: a basis of open subsets is given by formula_75 where formula_76 is any ring element. Interpreting formula_76 as a function that takes the value "f" mod "p" (i.e., the image of "f" in the residue field "R"/"p"), this subset is the locus where "f" is non-zero. The spectrum also makes precise the intuition that localization and factor rings are complementary: the natural maps "R" → "R""f" and "R" → "R" / "fR" correspond, after endowing the spectra of the rings in question with their Zariski topology, to complementary open and closed immersions respectively. Even for basic rings such as "R" = Z, the Zariski topology is quite different from the one on the set of real numbers. The spectrum contains the set of maximal ideals, which is occasionally denoted mSpec ("R"). For an algebraically closed field "k", mSpec (k["T"1, ..., "T""n"] / ("f"1, ..., "f""m")) is in bijection with the set of points in "k""n" at which the polynomials "f"1, ..., "f""m" all vanish. Thus, maximal ideals reflect the geometric properties of solution sets of polynomials, which is an initial motivation for the study of commutative rings. However, the consideration of non-maximal ideals as part of the geometric properties of a ring is useful for several reasons. For example, the minimal prime ideals (i.e., the ones not strictly containing smaller ones) correspond to the irreducible components of Spec "R". For a Noetherian ring "R", Spec "R" has only finitely many irreducible components. This is a geometric restatement of primary decomposition, according to which any ideal can be decomposed as an intersection of finitely many primary ideals. This fact is the ultimate generalization of the decomposition into prime ideals in Dedekind rings. Affine schemes. The notion of a spectrum is the common basis of commutative algebra and algebraic geometry.
Algebraic geometry proceeds by endowing Spec "R" with a sheaf formula_77 (an entity that collects functions defined locally, i.e. on varying open subsets). The datum of the space and the sheaf is called an affine scheme. Given an affine scheme, the underlying ring "R" can be recovered as the global sections of formula_77. Moreover, this one-to-one correspondence between rings and affine schemes is also compatible with ring homomorphisms: any "f" : "R" → "S" gives rise to a continuous map in the opposite direction &lt;templatestyles src="Block indent/styles.css"/&gt; Spec "S" → Spec "R", "q" ↦ "f"−1("q"), i.e. any prime ideal of "S" is mapped to its preimage under "f", which is a prime ideal of "R". The resulting equivalence of the two said categories aptly reflects algebraic properties of rings in a geometrical manner. Similar to the fact that manifolds are locally given by open subsets of R"n", affine schemes are local models for schemes, which are the object of study in algebraic geometry. Therefore, several notions concerning commutative rings stem from geometric intuition. Dimension. The "Krull dimension" (or dimension) dim "R" of a ring "R" measures the "size" of a ring by, roughly speaking, counting independent elements in "R". The dimension of algebras over a field "k" can be axiomatized by four properties: The dimension is defined, for any ring "R", as the supremum of lengths "n" of chains of prime ideals &lt;templatestyles src="Block indent/styles.css"/&gt;"p"0 ⊊ "p"1 ⊊ ... ⊊ "p""n". For example, a field is zero-dimensional, since the only prime ideal is the zero ideal. The integers are one-dimensional, since chains are of the form (0) ⊊ ("p"), where "p" is a prime number. For non-Noetherian rings, and also non-local rings, the dimension may be infinite, but Noetherian local rings have finite dimension. Among the four axioms above, the first two are elementary consequences of the definition, whereas the remaining two hinge on important facts in commutative algebra, the going-up theorem and Krull's principal ideal theorem. Ring homomorphisms. A "ring homomorphism" or, more colloquially, simply a "map", is a map "f" : "R" → "S" such that &lt;templatestyles src="Block indent/styles.css"/&gt;"f"("a" + "b") = "f"("a") + "f"("b"), "f"("ab") = "f"("a")"f"("b") and "f"(1) = 1. These conditions ensure "f"(0) = 0. Similarly as for other algebraic structures, a ring homomorphism is thus a map that is compatible with the structure of the algebraic objects in question. In such a situation "S" is also called an "R"-algebra, by understanding that "s" in "S" may be multiplied by some "r" of "R", by setting &lt;templatestyles src="Block indent/styles.css"/&gt;"r" · "s" := "f"("r") · "s". The "kernel" and "image" of "f" are defined by ker("f") = {"r" ∈ "R", "f"("r") = 0} and im("f") = "f"("R") = {"f"("r"), "r" ∈ "R"}. The kernel is an ideal of "R", and the image is a subring of "S". A ring homomorphism is called an isomorphism if it is bijective. An example of a ring isomorphism, known as the Chinese remainder theorem, is formula_78 where "n" = "p"1"p"2..."p""k" is a product of pairwise distinct prime numbers. Commutative rings, together with ring homomorphisms, form a category. The ring Z is the initial object in this category, which means that for any commutative ring "R", there is a unique ring homomorphism Z → "R". By means of this map, an integer "n" can be regarded as an element of "R". 
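The Chinese remainder theorem isomorphism mentioned above can be verified exhaustively on a small case; the following Python sketch (the modulus 15 is an arbitrary choice for this example) checks that reduction modulo 3 and 5 identifies Z/15 with Z/3 × Z/5 as a ring.

```python
n, p, q = 15, 3, 5

# The map Z/15 -> Z/3 x Z/5 given by reduction modulo each prime factor.
phi = {a: (a % p, a % q) for a in range(n)}

# Bijectivity: all 15 images are distinct.
assert len(set(phi.values())) == n

# Ring homomorphism property, checked exhaustively.
for a in range(n):
    for b in range(n):
        assert phi[(a + b) % n] == ((phi[a][0] + phi[b][0]) % p, (phi[a][1] + phi[b][1]) % q)
        assert phi[(a * b) % n] == ((phi[a][0] * phi[b][0]) % p, (phi[a][1] * phi[b][1]) % q)

print("Z/15 is isomorphic to Z/3 x Z/5 as a ring")
```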
For example, the binomial formula formula_79, which is valid for any two elements "a" and "b" in any commutative ring "R", is understood in this sense by interpreting the binomial coefficients as elements of "R" using this map. Given two "R"-algebras "S" and "T", their tensor product &lt;templatestyles src="Block indent/styles.css"/&gt;"S" ⊗"R" "T" is again a commutative "R"-algebra. In some cases, the tensor product can serve to find a "T"-algebra which relates to "T" as "S" relates to "R". For example, &lt;templatestyles src="Block indent/styles.css"/&gt;"R"["X"] ⊗"R" "T" = "T"["X"]. Finite generation. An "R"-algebra "S" is called finitely generated (as an algebra) if there are finitely many elements "s"1, ..., "s""n" such that any element of "S" is expressible as a polynomial in the "s""i". Equivalently, "S" is isomorphic to &lt;templatestyles src="Block indent/styles.css"/&gt;"R"["T"1, ..., "T""n"] / "I". A much stronger condition is that "S" is finitely generated as an "R"-module, which means that any element of "S" can be expressed as an "R"-linear combination of some finite set "s"1, ..., "s""n". Local rings. A ring is called local if it has only a single maximal ideal, denoted by "m". For any (not necessarily local) ring "R", the localization &lt;templatestyles src="Block indent/styles.css"/&gt;"R""p" at a prime ideal "p" is local. This localization reflects the geometric properties of Spec "R" "around "p"". Several notions and problems in commutative algebra can be reduced to the case when "R" is local, making local rings a particularly deeply studied class of rings. The residue field of "R" is defined as &lt;templatestyles src="Block indent/styles.css"/&gt;"k" = "R" / "m". Any "R"-module "M" yields a "k"-vector space given by "M" / "mM". Nakayama's lemma shows that this passage preserves important information: a finitely generated module "M" is zero if and only if "M" / "mM" is zero. Regular local rings. The "k"-vector space "m"/"m"2 is an algebraic incarnation of the cotangent space. Informally, the elements of "m" can be thought of as functions which vanish at the point "p", whereas "m"2 contains the ones which vanish with order at least 2. For any Noetherian local ring "R", the inequality &lt;templatestyles src="Block indent/styles.css"/&gt;dim"k" "m"/"m"2 ≥ dim "R" holds true, reflecting the idea that the cotangent (or equivalently the tangent) space has at least the dimension of the space Spec "R". If equality holds true in this estimate, "R" is called a regular local ring. A Noetherian local ring is regular if and only if the ring (which is the ring of functions on the tangent cone) formula_80 is isomorphic to a polynomial ring over "k". Broadly speaking, regular local rings are somewhat similar to polynomial rings. Regular local rings are UFDs. Discrete valuation rings are equipped with a function which assigns an integer to any element "r". This number, called the valuation of "r", can be informally thought of as a zero or pole order of "r". Discrete valuation rings are precisely the one-dimensional regular local rings. For example, the ring of germs of holomorphic functions on a Riemann surface is a discrete valuation ring. Complete intersections. By Krull's principal ideal theorem, a foundational result in the dimension theory of rings, the dimension of &lt;templatestyles src="Block indent/styles.css"/&gt;"R" = "k"["T"1, ..., "T""r"] / ("f"1, ..., "f""n") is at least "r" − "n".
A ring "R" is called a complete intersection ring if it can be presented in a way that attains this minimal bound. This notion is also mostly studied for local rings. Any regular local ring is a complete intersection ring, but not conversely. A ring "R" is a "set-theoretic" complete intersection if the reduced ring associated to "R", i.e., the one obtained by dividing out all nilpotent elements, is a complete intersection. As of 2017, it is in general unknown, whether curves in three-dimensional space are set-theoretic complete intersections. Cohen–Macaulay rings. The depth of a local ring "R" is the number of elements in some (or, as can be shown, any) maximal regular sequence, i.e., a sequence "a"1, ..., "a""n" ∈ "m" such that all "a""i" are non-zero divisors in &lt;templatestyles src="Block indent/styles.css"/&gt;"R" / ("a"1, ..., "a""i"−1). For any local Noetherian ring, the inequality &lt;templatestyles src="Block indent/styles.css"/&gt;depth("R") ≤ dim("R") holds. A local ring in which equality takes place is called a Cohen–Macaulay ring. Local complete intersection rings, and a fortiori, regular local rings are Cohen–Macaulay, but not conversely. Cohen–Macaulay combine desirable properties of regular rings (such as the property of being universally catenary rings, which means that the (co)dimension of primes is well-behaved), but are also more robust under taking quotients than regular local rings. Constructing commutative rings. There are several ways to construct new rings out of given ones. The aim of such constructions is often to improve certain properties of the ring so as to make it more readily understandable. For example, an integral domain that is integrally closed in its field of fractions is called normal. This is a desirable property, for example any normal one-dimensional ring is necessarily regular. Rendering a ring normal is known as "normalization". Completions. If "I" is an ideal in a commutative ring "R", the powers of "I" form topological neighborhoods of "0" which allow "R" to be viewed as a topological ring. This topology is called the "I"-adic topology. "R" can then be completed with respect to this topology. Formally, the "I"-adic completion is the inverse limit of the rings "R"/"In". For example, if "k" is a field, "k""X", the formal power series ring in one variable over "k", is the "I"-adic completion of "k"["X"] where "I" is the principal ideal generated by "X". This ring serves as an algebraic analogue of the disk. Analogously, the ring of "p"-adic integers is the completion of Z with respect to the principal ideal ("p"). Any ring that is isomorphic to its own completion, is called complete. Complete local rings satisfy Hensel's lemma, which roughly speaking allows extending solutions (of various problems) over the residue field "k" to "R". Homological notions. Several deeper aspects of commutative rings have been studied using methods from homological algebra. lists some open questions in this area of active research. Projective modules and Ext functors. Projective modules can be defined to be the direct summands of free modules. If "R" is local, any finitely generated projective module is actually free, which gives content to an analogy between projective modules and vector bundles. The Quillen–Suslin theorem asserts that any finitely generated projective module over "k"["T"1, ..., "T""n"] ("k" a field) is free, but in general these two concepts differ. 
A local Noetherian ring is regular if and only if its global dimension is finite, say "n", which means that any finitely generated "R"-module has a resolution by projective modules of length at most "n". The proof of this and other related statements relies on the usage of homological methods, such as the Ext functor. This functor is the derived functor of the functor &lt;templatestyles src="Block indent/styles.css"/&gt;Hom"R"("M", −). The latter functor is exact if "M" is projective, but not otherwise: for a surjective map "E" → "F" of "R"-modules, a map "M" → "F" need not extend to a map "M" → "E". The higher Ext functors measure the non-exactness of the Hom-functor. The importance of this standard construction in homological algebra can be seen from the fact that a local Noetherian ring "R" with residue field "k" is regular if and only if &lt;templatestyles src="Block indent/styles.css"/&gt;Ext"n"("k", "k") vanishes for all large enough "n". Moreover, the dimensions of these Ext-groups, known as Betti numbers, grow polynomially in "n" if and only if "R" is a local complete intersection ring. A key argument in such considerations is the Koszul complex, which provides an explicit free resolution of the residue field "k" of a local ring "R" in terms of a regular sequence. Flatness. The tensor product is another non-exact functor relevant in the context of commutative rings: for a general "R"-module "M", the functor &lt;templatestyles src="Block indent/styles.css"/&gt;"M" ⊗"R" − is only right exact. If it is exact, "M" is called flat. If "R" is local, any finitely presented flat module is free of finite rank, thus projective. Despite being defined in terms of homological algebra, flatness has profound geometric implications. For example, if an "R"-algebra "S" is flat, the dimensions of the fibers &lt;templatestyles src="Block indent/styles.css"/&gt;"S" / "pS" = "S" ⊗"R" "R" / "p" (for prime ideals "p" in "R") have the "expected" dimension, namely dim "S" − dim "R" + dim("R" / "p"). Properties. By Wedderburn's theorem, every finite division ring is commutative, and therefore a finite field. Another condition ensuring commutativity of a ring, due to Jacobson, is the following: for every element "r" of "R" there exists an integer "n" &gt; 1 such that "r""n" = "r". If "r"2 = "r" for every "r", the ring is called a Boolean ring. More general conditions which guarantee commutativity of a ring are also known. Generalizations. Graded-commutative rings. A graded ring "R" = ⨁"i"∊Z "R""i" is called graded-commutative if, for all homogeneous elements "a" and "b", &lt;templatestyles src="Block indent/styles.css"/&gt;"ab" = (−1)deg "a" ⋅ deg "b" "ba". If the "R""i" are connected by differentials ∂ such that an abstract form of the product rule holds, i.e., &lt;templatestyles src="Block indent/styles.css"/&gt;∂("ab") = ∂("a")"b" + (−1)deg "a" "a"∂("b"), "R" is called a commutative differential graded algebra (cdga). An example is the complex of differential forms on a manifold: with the multiplication given by the exterior product, it is a cdga. The cohomology of a cdga is a graded-commutative ring, sometimes referred to as the cohomology ring. A broad range of examples of graded rings arises in this way. For example, the Lazard ring is the ring of cobordism classes of complex manifolds. A graded-commutative ring with respect to a grading by Z/2 (as opposed to Z) is called a superalgebra.
A related notion is an almost commutative ring, which means that "R" is filtered in such a way that the associated graded ring &lt;templatestyles src="Block indent/styles.css"/&gt;gr "R" := ⨁ "F""i""R" / "F""i"−1"R" is commutative. Examples include the Weyl algebra and more general rings of differential operators. Simplicial commutative rings. A simplicial commutative ring is a simplicial object in the category of commutative rings. They are building blocks for (connective) derived algebraic geometry. A closely related but more general notion is that of E∞-ring. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " R " }, { "math_id": 1, "text": "+" }, { "math_id": 2, "text": "\\cdot" }, { "math_id": 3, "text": "a+b" }, { "math_id": 4, "text": "a \\cdot b" }, { "math_id": 5, "text": "a \\cdot \\left(b + c\\right) = \\left(a \\cdot b\\right) + \\left(a \\cdot c\\right)" }, { "math_id": 6, "text": " 0 " }, { "math_id": 7, "text": " 1 " }, { "math_id": 8, "text": "a \\cdot b = b \\cdot a," }, { "math_id": 9, "text": " \\mathbb{Z} " }, { "math_id": 10, "text": " 0 \\not = 1 " }, { "math_id": 11, "text": " a " }, { "math_id": 12, "text": " b " }, { "math_id": 13, "text": " a \\cdot b = 1 " }, { "math_id": 14, "text": " X " }, { "math_id": 15, "text": " R \\left[ X \\right] " }, { "math_id": 16, "text": " V " }, { "math_id": 17, "text": " \\mathbb{R}^n " }, { "math_id": 18, "text": " ab = 0 " }, { "math_id": 19, "text": " a^n = 0 " }, { "math_id": 20, "text": " n " }, { "math_id": 21, "text": " S " }, { "math_id": 22, "text": " s,t \\in S " }, { "math_id": 23, "text": " st " }, { "math_id": 24, "text": " S^{-1}R " }, { "math_id": 25, "text": "\\frac{r}{s}" }, { "math_id": 26, "text": " r \\in R, s \\in S " }, { "math_id": 27, "text": " \\mathbb{Q} " }, { "math_id": 28, "text": " \\left(R\\setminus \\left\\{0\\right\\}\\right)^{-1}R " }, { "math_id": 29, "text": " M " }, { "math_id": 30, "text": " I " }, { "math_id": 31, "text": " r " }, { "math_id": 32, "text": " i " }, { "math_id": 33, "text": " j " }, { "math_id": 34, "text": " ri " }, { "math_id": 35, "text": " i+j " }, { "math_id": 36, "text": " \\left\\{0\\right\\} " }, { "math_id": 37, "text": " F=\\left\\{f_j\\right\\}_{j \\in J} " }, { "math_id": 38, "text": " J " }, { "math_id": 39, "text": " F " }, { "math_id": 40, "text": " r_1 f_1 + r_2 f_2 + \\dots + r_n f_n ." }, { "math_id": 41, "text": " rs " }, { "math_id": 42, "text": " s " }, { "math_id": 43, "text": " k \\left[X\\right] " }, { "math_id": 44, "text": " k " }, { "math_id": 45, "text": " a=bc ," }, { "math_id": 46, "text": " c " }, { "math_id": 47, "text": " bc " }, { "math_id": 48, "text": " R / I " }, { "math_id": 49, "text": " \\left(a+I\\right)+\\left(b+I\\right)=\\left(a+b\\right)+I " }, { "math_id": 50, "text": " \\left(a+I\\right) \\left(b+I\\right)=ab+I " }, { "math_id": 51, "text": " \\mathbb{Z}/n\\mathbb{Z} " }, { "math_id": 52, "text": " \\mathbb{Z}_n " }, { "math_id": 53, "text": " m " }, { "math_id": 54, "text": " R / m " }, { "math_id": 55, "text": " 0 \\subseteq I_0 \\subseteq I_1 \\subseteq \\dots \\subseteq I_n \\subseteq I_{n+1} \\dots " }, { "math_id": 56, "text": " R \\left[X_1,X_2,\\dots,X_n\\right] " }, { "math_id": 57, "text": " R \\supseteq I_0 \\supseteq I_1 \\supseteq \\dots \\supseteq I_n \\supseteq I_{n+1} \\dots " }, { "math_id": 58, "text": " \\mathbb{Z} \\supsetneq 2\\mathbb{Z} \\supsetneq 4\\mathbb{Z} \\supsetneq 8\\mathbb{Z} \\dots " }, { "math_id": 59, "text": "\\mathbb{Z}\\left[\\sqrt{-5}\\right]" }, { "math_id": 60, "text": "6 = 2 \\cdot 3 = \\left(1 + \\sqrt{-5}\\right)\\left(1 - \\sqrt{-5}\\right)." }, { "math_id": 61, "text": " p " }, { "math_id": 62, "text": " ab " }, { "math_id": 63, "text": " p, " }, { "math_id": 64, "text": " p ." 
}, { "math_id": 65, "text": "\\mathbb{Z}\\left[\\sqrt{-5}\\right]," }, { "math_id": 66, "text": "\\mathbb{Z}\\left[\\sqrt{-5}\\right]" }, { "math_id": 67, "text": "I" }, { "math_id": 68, "text": "R/I" }, { "math_id": 69, "text": "R \\setminus p" }, { "math_id": 70, "text": "\\left(R \\setminus p\\right)^{-1}R" }, { "math_id": 71, "text": "R_p" }, { "math_id": 72, "text": "pR_p" }, { "math_id": 73, "text": "R" }, { "math_id": 74, "text": "\\text{Spec}\\ R" }, { "math_id": 75, "text": "D\\left(f\\right) = \\left\\{p \\in \\text{Spec} \\ R,f \\not\\in p\\right\\}," }, { "math_id": 76, "text": "f" }, { "math_id": 77, "text": "\\mathcal O" }, { "math_id": 78, "text": "\\mathbf Z/n = \\bigoplus_{i=0}^k \\mathbf Z/p_i ," }, { "math_id": 79, "text": "(a+b)^n = \\sum_{k=0}^n \\binom n k a^k b^{n-k}" }, { "math_id": 80, "text": "\\bigoplus_n m^n / m^{n+1}" } ]
https://en.wikipedia.org/wiki?curid=61346
61349765
Social utility efficiency
Social utility efficiency (SUE) is a measurement of the utilitarian performance of voting methods—how likely they are to elect the candidate who best represents the voters' preferences. It is also known as utilitarian efficiency, voter satisfaction index (VSI) or voter satisfaction efficiency (VSE). Definition. Social utility efficiency is defined as the ratio between the social utility of the candidate who is actually elected by a given voting method and that of the candidate who would maximize social utility, where formula_0 is the expected value over many iterations of the sum of all voter utilities for a given candidate: formula_1 A voting method with 100% efficiency would always pick the candidate that maximizes voter utility. A method that chooses a winner randomly would have an efficiency of 0%, and a (pathological) method that did worse than a random pick would have less than 0% efficiency. SUE is not only affected by the voting method, but is also a function of the number of voters, the number of candidates, and any strategies used by the voters. History. The concept was originally introduced as a system's "effectiveness" by Robert J. Weber in 1977, defined as: formula_2 where formula_3 is the expected social utility of the given candidate, formula_4 is the number of voters, and formula_5 is the number of candidates. He used a random society (impartial culture) model to analytically calculate the effectiveness of FPTP, two Approval variants, and Borda, as the number of voters approaches infinity. It was given the name "social utility efficiency" and extended to the more realistic spatial model of voting by Samuel Merrill III in the 1980s, calculated statistically from random samples, with 25–201 voters and 2–10 candidates. This analysis included FPTP, Runoff, IRV, Coombs, Approval, Black, and Borda (in increasing order of efficiency). (Merrill's model normalizes individual voter utility before finding the utility winner, while Weber's does not, so that Merrill considers all 2-candidate voting systems to have an SUE of 100%, decreasing with more candidates, while Weber considers them to have an effectiveness of formula_6 = 81.6%, with some systems increasing with more candidates.)
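The definition above lends itself to a Monte Carlo estimate. The following Python sketch is a toy model only: it assumes independent uniform voter utilities (an impartial-culture-style setup), a first-past-the-post rule, and arbitrary simulation parameters, none of which are taken from the cited studies.

```python
import random

def simulate(n_voters=25, n_candidates=5, n_trials=2000, seed=0):
    rng = random.Random(seed)
    elected, best, rand = 0.0, 0.0, 0.0
    for _ in range(n_trials):
        # Each voter assigns an independent uniform utility to each candidate.
        u = [[rng.random() for _ in range(n_candidates)] for _ in range(n_voters)]
        totals = [sum(u[v][c] for v in range(n_voters)) for c in range(n_candidates)]
        # First-past-the-post: every voter votes for their favourite candidate.
        votes = [0] * n_candidates
        for v in range(n_voters):
            votes[max(range(n_candidates), key=lambda c: u[v][c])] += 1
        elected += totals[max(range(n_candidates), key=lambda c: votes[c])]
        best += max(totals)
        rand += totals[rng.randrange(n_candidates)]
    # SUE = (E[elected] - E[random]) / (E[maximizing] - E[random])
    return (elected - rand) / (best - rand)

print(round(simulate(), 3))  # FPTP lands well below 1.0 but above 0.0 in this toy model
```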
[ { "math_id": 0, "text": "E[]" }, { "math_id": 1, "text": "\\operatorname{SUE}= \\frac{E[\\text{selected candidate}]-E[\\text{random candidate}]}{E[\\text{maximizing candidate}]-E[\\text{random candidate}]}" }, { "math_id": 2, "text": "\\operatorname{Effectiveness}=\\lim_{n \\to \\infty} \\frac {E_\\text{elected}(m,n) - E_\\text{random}(m)}{E_\\text{maximal}(m,n) - E_\\text{random}(m)}" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "\\sqrt{2/3}" } ]
https://en.wikipedia.org/wiki?curid=61349765
61351
Laurent polynomial
Polynomial with finitely many terms of the form axⁿ where n ∈ Z In mathematics, a Laurent polynomial (named after Pierre Alphonse Laurent) in one variable over a field formula_0 is a linear combination of positive and negative powers of the variable with coefficients in formula_0. Laurent polynomials in formula_1 form a ring denoted formula_2. They differ from ordinary polynomials in that they may have terms of negative degree. The construction of Laurent polynomials may be iterated, leading to the ring of Laurent polynomials in several variables. Laurent polynomials are of particular importance in the study of complex variables. Definition. A Laurent polynomial with coefficients in a field formula_0 is an expression of the form formula_3 where formula_1 is a formal variable, the summation index formula_4 is an integer (not necessarily positive) and only finitely many coefficients formula_5 are non-zero. Two Laurent polynomials are equal if their coefficients are equal. Such expressions can be added, multiplied, and brought back to the same form by reducing similar terms. Formulas for addition and multiplication are exactly the same as for the ordinary polynomials, with the only difference that both positive and negative powers of formula_1 can be present: formula_6 and formula_7 Since only finitely many coefficients formula_8 and formula_9 are non-zero, all sums in effect have only finitely many terms, and hence represent Laurent polynomials.
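Because only finitely many coefficients are non-zero, a Laurent polynomial can be stored as a map from (possibly negative) integer exponents to coefficients, and the addition and multiplication formulas above translate directly into code. The following is a minimal sketch over the integers; the dictionary representation and function names are our own choices.

```python
from collections import defaultdict

def laurent_add(p, q):
    """Add two Laurent polynomials given as {exponent: coefficient} dicts."""
    r = defaultdict(int)
    for poly in (p, q):
        for k, c in poly.items():
            r[k] += c
    return {k: c for k, c in r.items() if c != 0}

def laurent_mul(p, q):
    """Multiply two Laurent polynomials; exponents may be negative integers."""
    r = defaultdict(int)
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] += a * b
    return {k: c for k, c in r.items() if c != 0}

# (X^-1 + 2) * (X - 3 X^-2) = 1 + 2X - 6X^-2 - 3X^-3
p = {-1: 1, 0: 2}
q = {1: 1, -2: -3}
print(laurent_mul(p, q))   # coefficients {0: 1, 1: 2, -2: -6, -3: -3}
```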
[ { "math_id": 0, "text": "\\mathbb{F}" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\mathbb{F}[X, X^{-1}]" }, { "math_id": 3, "text": "p = \\sum_k p_k X^k, \\quad p_k \\in \\mathbb{F}" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "p_{k}" }, { "math_id": 6, "text": "\\bigg(\\sum_i a_i X^i\\bigg) + \\bigg(\\sum_i b_i X^i\\bigg) = \n\\sum_i (a_i+b_i)X^i" }, { "math_id": 7, "text": "\\bigg(\\sum_i a_i X^i\\bigg) \\cdot \\bigg(\\sum_j b_j X^j\\bigg) = \n\\sum_k \\Bigg(\\sum_{i,j \\atop i+j=k} a_i b_j\\Bigg)X^k." }, { "math_id": 8, "text": "a_{i}" }, { "math_id": 9, "text": "b_{j}" }, { "math_id": 10, "text": "\\mathbb{C}" }, { "math_id": 11, "text": "R\\left [X, X^{-1} \\right ]" }, { "math_id": 12, "text": "R[X]" }, { "math_id": 13, "text": "R" }, { "math_id": 14, "text": "uX^{k}" }, { "math_id": 15, "text": "u" }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "K[X, X^{-1}]" }, { "math_id": 18, "text": "aX^{k}" }, { "math_id": 19, "text": "a" }, { "math_id": 20, "text": "R[X, X^{-1}]" }, { "math_id": 21, "text": "\\mathbb{Z}" }, { "math_id": 22, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=61351
61351870
Phase-field models on graphs
Graph-based mathematical model Phase-field models on graphs are a discrete analogue to phase-field models, defined on a graph. They are used in image analysis (for feature identification) and for the segmentation of social networks. Graph Ginzburg–Landau functional. For a graph with vertices "V" and edge weights formula_0, the graph Ginzburg–Landau functional of a map formula_1 is given by formula_2 where "W" is a double well potential, for example the quartic potential "W"("x") = "x"²(1 − "x"²). The graph Ginzburg–Landau functional was introduced by Bertozzi and Flenner. In analogy to continuum phase-field models, where regions with "u" close to 0 or 1 are models for two phases of the material, vertices can be classified into those with "u""j" close to 0 or close to 1, and for small formula_3, minimisers of formula_4 will satisfy that "u""j" is close to 0 or 1 for most nodes, splitting the nodes into two classes. Graph Allen–Cahn equation. To effectively minimise formula_4, a natural approach is by gradient flow (steepest descent). This means introducing an artificial time parameter and solving the graph version of the Allen–Cahn equation, formula_5 where formula_6 is the graph Laplacian. The ordinary continuum Allen–Cahn equation and the graph Allen–Cahn equation are natural counterparts, just replacing ordinary calculus by calculus on graphs. A convergence result for a numerical graph Allen–Cahn scheme has been established by Luo and Bertozzi. It is also possible to adapt other computational schemes for mean curvature flow, for example schemes involving thresholding like the Merriman–Bence–Osher scheme, to a graph setting, with analogous results.
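A minimal numerical sketch of the graph Allen–Cahn flow described above, using explicit Euler time stepping. The weight matrix, the particular double-well W(u) = u²(1 − u)² with wells at 0 and 1, the unnormalised graph Laplacian, and the step sizes are all illustrative assumptions, not the specific scheme analysed by Luo and Bertozzi.

```python
import numpy as np

def allen_cahn_graph(W_adj, u0, eps=0.1, dt=1e-3, steps=5000):
    """Explicit-Euler time stepping of du/dt = -eps * L u - W'(u)/eps on a graph.

    W_adj : symmetric weight matrix (omega_ij); u0 : initial node values;
    W(u) = u^2 (1 - u)^2 is one common choice of double-well potential.
    """
    W_adj = np.asarray(W_adj, dtype=float)
    u = np.asarray(u0, dtype=float).copy()
    deg = W_adj.sum(axis=1)
    L = np.diag(deg) - W_adj                 # (unnormalised) graph Laplacian
    for _ in range(steps):
        dW = 2 * u * (1 - u) * (1 - 2 * u)   # W'(u)
        u = u - dt * (eps * (L @ u) + dW / eps)
    return u

# Two loosely coupled triangles: the flow should settle near 0 on one and near 1 on the other.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
u0 = np.array([0.1, 0.2, 0.1, 0.9, 0.8, 0.9])
print(np.round(allen_cahn_graph(A, u0), 2))
```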
[ { "math_id": 0, "text": "\\omega_{i,j}" }, { "math_id": 1, "text": "u:V\\to \\mathbb{R}" }, { "math_id": 2, "text": "F_\\varepsilon(u) = \\frac\\varepsilon2 \\sum_{i,j\\in V} \\omega_{ij} (u_i-u_j)^2 + \\frac1\\varepsilon \\sum_{i \\in V} W(u_i), \n" }, { "math_id": 3, "text": "\\varepsilon" }, { "math_id": 4, "text": "F_\\varepsilon" }, { "math_id": 5, "text": "\\frac{d}{dt} u_j = -\\varepsilon (\\Delta u)_j-\\frac1\\varepsilon W'(u_j)," }, { "math_id": 6, "text": "\\Delta" } ]
https://en.wikipedia.org/wiki?curid=61351870
61352041
Plumbing (mathematics)
Way to create new manifolds out of disk bundles In the mathematical field of geometric topology, among the techniques known as surgery theory, the process of plumbing is a way to create new manifolds out of disk bundles. It was first described by John Milnor and subsequently used extensively in surgery theory to produce manifolds and normal maps with given surgery obstructions. Definition. Let formula_0 be a rank "n" vector bundle over an "n"-dimensional smooth manifold formula_1 for "i" = 1,2. Denote by formula_2 the total space of the associated (closed) disk bundle formula_3and suppose that formula_4 and formula_2are oriented in a compatible way. If we pick two points formula_5, "i" = 1,2, and consider a ball neighbourhood of formula_6 in formula_1, then we get neighbourhoods formula_7 of the fibre over formula_6 in formula_2. Let formula_8 and formula_9 be two diffeomorphisms (either both orientation preserving or reversing). The plumbing of formula_10 and formula_11 at formula_12 and formula_13 is defined to be the quotient space formula_14 where formula_15 is defined by formula_16. The smooth structure on the quotient is defined by "straightening the angles". Plumbing according to a tree. If the base manifold is an "n"-sphere formula_17, then by iterating this procedure over several vector bundles over formula_17 one can plumb them together according to a tree§8. If formula_18 is a tree, we assign to each vertex a vector bundle "formula_19" over formula_17 and we plumb the corresponding disk bundles together if two vertices are connected by an edge. One has to be careful that neighbourhoods in the total spaces do not overlap. Milnor manifolds. Let formula_20 denote the disk bundle associated to the tangent bundle of the "2k"-sphere. If we plumb eight copies of formula_20 according to the diagram formula_21, we obtain a "4k"-dimensional manifold which certain authors call the Milnor manifold formula_22 (see also E8 manifold). For formula_23, the boundary formula_24 is a homotopy sphere which generates formula_25, the group of "h"-cobordism classes of homotopy spheres which bound π-manifolds (see also exotic spheres for more details). Its signature is formula_26 and there exists V.2.9 a normal map formula_27 such that the surgery obstruction is formula_28, where formula_29 is a map of degree 1 and formula_30 is a bundle map from the stable normal bundle of the Milnor manifold to a certain stable vector bundle. The plumbing theorem. A crucial theorem for the development of surgery theory is the so-called "Plumbing Theorem" II.1.3 (presented here in the simply connected case): For all formula_31, there exists a "2k"-dimensional manifold formula_32 with boundary formula_33 and a normal map formula_34 where formula_35 is such that formula_36 is a homotopy equivalence, formula_37 is a bundle map into the trivial bundle and the surgery obstruction is formula_38. The proof of this theorem makes use of the Milnor manifolds defined above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\xi_i=(E_i,M_i,p_i)" }, { "math_id": 1, "text": "M_i" }, { "math_id": 2, "text": "D(E_i)" }, { "math_id": 3, "text": "D(\\xi_i)" }, { "math_id": 4, "text": "\\xi_i, M_i" }, { "math_id": 5, "text": "x_i\\in M_i" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "D^n_i\\times D^n_i" }, { "math_id": 8, "text": "h:D^n_1\\rightarrow D^n_2" }, { "math_id": 9, "text": "k:D^n_1\\rightarrow D^n_2" }, { "math_id": 10, "text": "D(E_1)" }, { "math_id": 11, "text": "D(E_2)" }, { "math_id": 12, "text": "x_1" }, { "math_id": 13, "text": "x_2" }, { "math_id": 14, "text": "P=D(E_1)\\cup_f D(E_2)" }, { "math_id": 15, "text": "f:D^n_1\\times D^n_1\\rightarrow D^n_2\\times D^n_2" }, { "math_id": 16, "text": "f(x,y)=(k(y),h(x))" }, { "math_id": 17, "text": "S^n" }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "\\xi" }, { "math_id": 20, "text": "D(\\tau_{S^{2k}})" }, { "math_id": 21, "text": "E_8" }, { "math_id": 22, "text": "M^{4k}_B" }, { "math_id": 23, "text": "k>1" }, { "math_id": 24, "text": "\\Sigma^{4k-1}=\\partial M^{4k}_B" }, { "math_id": 25, "text": "\\theta^{4k-1}(\\partial \\pi)" }, { "math_id": 26, "text": "sgn(M^{4k}_B)=8" }, { "math_id": 27, "text": "(f,b)" }, { "math_id": 28, "text": "\\sigma(f,b)=1" }, { "math_id": 29, "text": "g:(M^{4k}_B,\\partial M^{4k}_B)\\rightarrow (D^{4k},S^{4k-1})" }, { "math_id": 30, "text": "b:\\nu_{M^{4k}_B} \\rightarrow \\xi" }, { "math_id": 31, "text": "k>1, l\\in \\Z" }, { "math_id": 32, "text": "M" }, { "math_id": 33, "text": "\\partial M" }, { "math_id": 34, "text": "(g,c)" }, { "math_id": 35, "text": "g:(M,\\partial M)\\rightarrow (D^{2k},S^{2k-1})" }, { "math_id": 36, "text": "g|_{\\partial M}" }, { "math_id": 37, "text": "c" }, { "math_id": 38, "text": "\\sigma(g,c)=l" } ]
https://en.wikipedia.org/wiki?curid=61352041
613539
Spike-timing-dependent plasticity
Biological process that adjusts the strength of connections between neurons in the brain Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts the connection strengths based on the relative timing of a particular neuron's output and input action potentials (or spikes). The STDP process partially explains the activity-dependent development of nervous systems, especially with regard to long-term potentiation and long-term depression. Process. Under the STDP process, if an input spike to a neuron tends, on average, to occur immediately "before" that neuron's output spike, then that particular input is made somewhat stronger. If an input spike tends, on average, to occur immediately "after" an output spike, then that particular input is made somewhat weaker; hence the name "spike-timing-dependent plasticity". Thus, inputs that might be the cause of the post-synaptic neuron's excitation are made even more likely to contribute in the future, whereas inputs that are not the cause of the post-synaptic spike are made less likely to contribute in the future. The process continues until a subset of the initial set of connections remains, while the influence of all others is reduced to 0. Since a neuron produces an output spike when many of its inputs occur within a brief period, the subset of inputs that remains consists of those that tended to be correlated in time. In addition, since the inputs that occur before the output are strengthened, the inputs that provide the earliest indication of correlation will eventually become the final input to the neuron. History. In 1973, M. M. Taylor suggested that if synapses were strengthened for which a presynaptic spike occurred just before a postsynaptic spike more often than the reverse (Hebbian learning), while with the opposite timing or in the absence of a closely timed presynaptic spike, synapses were weakened (anti-Hebbian learning), the result would be an informationally efficient recoding of input patterns. This proposal apparently passed unnoticed in the neuroscientific community, and subsequent experimentation was conceived independently of these early suggestions. Early experiments on associative plasticity were carried out by W. B. Levy and O. Steward in 1983 and examined the effect of relative timing of pre- and postsynaptic action potentials at the millisecond level on plasticity. Bruce McNaughton contributed much to this area, too. Studies on neuromuscular synapses carried out by Y. Dan and Mu-ming Poo in 1992, and on the hippocampus by D. Debanne, B. Gähwiler, and S. Thompson in 1994, showed that asynchronous pairing of postsynaptic and synaptic activity induced long-term synaptic depression. However, STDP was more definitively demonstrated by Henry Markram during his postdoc period up to 1993 in Bert Sakmann's lab (SFN and Phys Soc abstracts in 1994–1995), work which was only published in 1997. C. Bell and co-workers also found a form of STDP in the cerebellum. Henry Markram used dual patch clamping techniques to repetitively activate pre-synaptic neurons 10 milliseconds before activating the post-synaptic target neurons, and found the strength of the synapse increased. When the activation order was reversed so that the pre-synaptic neuron was activated 10 milliseconds after its post-synaptic target neuron, the strength of the pre-to-post synaptic connection decreased.
Further work, by Guoqiang Bi, Li Zhang, and Huizhong Tao in Mu-Ming Poo's lab in 1998, continued the mapping of the entire time course relating pre- and post-synaptic activity and synaptic change, to show that in their preparation synapses that are activated within 5–20 ms before a postsynaptic spike are strengthened, and those that are activated within a similar time window after the spike are transiently weakened. It has since been shown that the initially highly asymmetric STDP window turns into a more symmetric "LTP only" window three days after induction. Spike-timing-dependent plasticity is thought to be a substrate for Hebbian learning during development. As suggested by Taylor in 1973, Hebbian learning rules might create informationally efficient coding in bundles of related neurons. While STDP was first discovered in cultured neurons and brain slice preparations, it has also been demonstrated by sensory stimulation of intact animals. Biological mechanisms. Postsynaptic NMDA receptors (NMDARs) are highly sensitive to the membrane potential (see coincidence detection in neurobiology). Due to their high permeability for calcium, they generate a local chemical signal that is largest when the back-propagating action potential in the dendrite arrives shortly after the synapse was active ("pre-post spiking"), when NMDA and AMPA receptors are still bound to glutamate. Large postsynaptic calcium transients are known to trigger synaptic potentiation (long-term potentiation). The mechanism for spike-timing-dependent depression is less well understood, but often involves either postsynaptic voltage-dependent calcium entry/mGluR activation, or retrograde endocannabinoids and presynaptic NMDARs. From Hebbian rule to STDP. According to the Hebbian rule, synapses increase their efficiency if the synapse persistently takes part in firing the postsynaptic target neuron. Similarly, the efficiency of synapses decreases when the firing of their presynaptic targets is persistently independent of firing their postsynaptic ones. These principles are often simplified in the mnemonics: "those who fire together, wire together"; and "those who fire out of sync, lose their link". However, if two neurons fire exactly at the same time, then one cannot have caused, or taken part in firing the other. Instead, to take part in firing the postsynaptic neuron, the presynaptic neuron needs to fire just before the postsynaptic neuron. Experiments that stimulated two connected neurons with varying interstimulus asynchrony confirmed the importance of temporal relation implicit in Hebb's principle: for the synapse to be potentiated or depressed, the presynaptic neuron has to fire just before or just after the postsynaptic neuron, respectively. In addition, it has become evident that the presynaptic neural firing needs to consistently predict the postsynaptic firing for synaptic plasticity to occur robustly, mirroring at a synaptic level what is known about the importance of contingency in classical conditioning, where zero contingency procedures prevent the association between two stimuli. Role in hippocampal learning. For the most efficient STDP, the presynaptic and postsynaptic signal has to be separated by approximately a dozen milliseconds. However, events happening within a couple of minutes can typically be linked together by the hippocampus as episodic memories. 
To resolve this contradiction, a mechanism relying on the theta waves and the phase precession has been proposed: Representations of different memory entities (such as a place, face, person etc.) are repeated on each theta cycle at a given theta phase during the episode to be remembered. Expected, ongoing, and completed entities have early, intermediate and late theta phases, respectively. In the CA3 region of the hippocampus, the recurrent network turns entities with neighboring theta phases into coincident ones, thereby allowing STDP to link them together. Experimentally detectable memory sequences are created this way by reinforcing the connection between subsequent (neighboring) representations. Computational models and applications. Training spiking neural networks. The principles of STDP can be utilized in the training of artificial spiking neural networks. Using this approach the weight of a connection between two neurons is increased if the time at which a presynaptic spike (formula_0) occurs is shortly before the time of a postsynaptic spike (formula_1), i.e. formula_2 and formula_3. The size of the weight increase is dependent on the value of formula_4 and decreases exponentially as the value of formula_4 increases, as given by the equation: formula_5 where formula_6 is the maximum possible change and formula_7 is the time constant. If the opposite scenario occurs, i.e. a postsynaptic spike occurs before a presynaptic spike, then the weight is instead reduced according to the equation: formula_8 where formula_9 and formula_10 serve the same functions of defining the maximum possible change and time constant as before, respectively. The parameters that define the decay profile (formula_6, formula_9, etc.) do not necessarily have to be fixed across the entire network and different synapses may have different shapes associated with them. Biological evidence suggests that this pairwise STDP approach cannot give a complete description of a biological neuron, and more advanced approaches which look at symmetric triplets of spikes (pre-post-pre, post-pre-post) have been developed; these are believed to be more biologically plausible.
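The pairwise rule described above is straightforward to code. The sketch below implements the two exponential windows, with the usual convention that the depression branch also decays with the magnitude of the time difference; the parameter values are arbitrary illustrative choices, not values from any particular study.

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP weight change for one pre/post spike pair (times in ms).

    t = t_post - t_pre > 0  -> potentiation, decaying exponentially in t;
    t < 0                   -> depression, decaying in |t|.
    """
    t = t_post - t_pre
    if t > 0:
        return a_plus * math.exp(-t / tau_plus)
    if t < 0:
        return -a_minus * math.exp(t / tau_minus)
    return 0.0

# A pre-spike 10 ms before the post-spike strengthens the synapse,
# 10 ms after it weakens it.
print(stdp_delta_w(0.0, 10.0))   # positive
print(stdp_delta_w(10.0, 0.0))   # negative
```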
[ { "math_id": 0, "text": "t_{pre}" }, { "math_id": 1, "text": "t_{post}" }, { "math_id": 2, "text": "t = t_{post}-t_{pre}" }, { "math_id": 3, "text": "t > 0" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "A_+ \\exp{\\left(\\frac{-t}{\\tau_+}\\right)}" }, { "math_id": 6, "text": "A_+" }, { "math_id": 7, "text": "\\tau_+" }, { "math_id": 8, "text": "A_- \\exp{\\left(\\frac{-t}{\\tau_ -}\\right)}" }, { "math_id": 9, "text": "A_-" }, { "math_id": 10, "text": "\\tau_-" } ]
https://en.wikipedia.org/wiki?curid=613539
613557
Calculus of constructions
Type theory created by Thierry Coquand In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as a constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Coq and other proof assistants. Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity). General traits. The CoC is a higher-order typed lambda calculus, initially developed by Thierry Coquand. It is well known for being at the top of Barendregt's lambda cube. It is possible within CoC to define functions from terms to terms, as well as terms to types, types to types, and types to terms. The CoC is strongly normalizing, and hence consistent. Usage. The CoC has been developed alongside the Coq proof assistant. As features were added (or possible liabilities removed) to the theory, they became available in Coq. Variants of the CoC are used in other proof assistants, such as Matita and Lean. The basics of the calculus of constructions. The calculus of constructions can be considered an extension of the Curry–Howard isomorphism. The Curry–Howard isomorphism associates a term in the simply typed lambda calculus with each natural-deduction proof in intuitionistic propositional logic. The calculus of constructions extends this isomorphism to proofs in the full intuitionistic predicate calculus, which includes proofs of quantified statements (which we will also call "propositions"). Terms. A "term" in the calculus of constructions is constructed using the following rules: In other words, the term syntax, in Backus–Naur form, is then: formula_9 The calculus of constructions has five kinds of objects: Judgments. The calculus of constructions allows proving typing judgments: formula_10, which can be read as the implication If variables formula_11 have, respectively, types formula_12, then term formula_13 has type formula_4. The valid judgments for the calculus of constructions are derivable from a set of inference rules. In the following, we use formula_14 to mean a sequence of type assignments formula_15; formula_16 to mean terms; and formula_17 to mean either formula_1 or formula_0. We shall write formula_18 to mean the result of substituting the term formula_19 for the free variable formula_6 in the term formula_4. An inference rule is written in the form formula_20, which means if formula_21 is a valid judgment, then so is formula_22. Inference rules for the calculus of constructions. 1. formula_23 2. formula_24 3. formula_25 4. formula_26 5. formula_27 6. formula_28 Defining logical operators. The calculus of constructions has very few basic operators: the only logical operator for forming propositions is formula_29. However, this one operator is sufficient to define all the other logical operators: formula_30 Defining data types. The basic data types used in computer science can be defined within the calculus of constructions: Note that Booleans and Naturals are defined in the same way as in Church encoding. However, additional problems arise from propositional extensionality and proof irrelevance.
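Because the Prop sort of Lean (one of the proof assistants mentioned above) is impredicative, the encodings of the logical operators can be transcribed almost literally. The following Lean 4 sketch uses our own names Conj and Disj; it is an illustration of the impredicative definitions, not code taken from any of the systems discussed.

```lean
-- Impredicative encodings of conjunction and disjunction, mirroring the
-- definitions of A ∧ B and A ∨ B given above (Conj/Disj are our own names).
def Conj (A B : Prop) : Prop := ∀ C : Prop, (A → B → C) → C
def Disj (A B : Prop) : Prop := ∀ C : Prop, (A → C) → (B → C) → C

-- A pair of proofs gives a proof of the encoded conjunction ...
example (A B : Prop) (a : A) (b : B) : Conj A B :=
  fun _ k => k a b

-- ... and the encoded conjunction projects onto its first component.
example (A B : Prop) (h : Conj A B) : A :=
  h A (fun a _ => a)
```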
[ { "math_id": 0, "text": "\\mathbf{T}" }, { "math_id": 1, "text": "\\mathbf{P}" }, { "math_id": 2, "text": "x, y, \\ldots" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "(A B)" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "(\\lambda x:A. B)" }, { "math_id": 8, "text": "(\\forall x:A. B)" }, { "math_id": 9, "text": "e ::= \\mathbf{T} \\mid \\mathbf{P} \\mid x \\mid e \\, e \\mid \\lambda x\\mathbin{:}e.e\\mid \\forall x\\mathbin{:}e.e" }, { "math_id": 10, "text": " x_1:A_1, x_2:A_2, \\ldots \\vdash t:B" }, { "math_id": 11, "text": "x_1, x_2, \\ldots" }, { "math_id": 12, "text": "A_1, A_2, \\ldots" }, { "math_id": 13, "text": "t" }, { "math_id": 14, "text": "\\Gamma" }, { "math_id": 15, "text": " x_1:A_1, x_2:A_2, \\ldots " }, { "math_id": 16, "text": "A, B, C, D" }, { "math_id": 17, "text": "K, L" }, { "math_id": 18, "text": "B[x:=N]" }, { "math_id": 19, "text": "N" }, { "math_id": 20, "text": "\\frac{\\Gamma \\vdash A:B}{\\Gamma' \\vdash C:D}" }, { "math_id": 21, "text": " \\Gamma \\vdash A:B " }, { "math_id": 22, "text": " \\Gamma' \\vdash C:D " }, { "math_id": 23, "text": " {{} \\over\n\\Gamma \\vdash \\mathbf{P} : \\mathbf{T}} " }, { "math_id": 24, "text": " {{} \\over \n{\\Gamma, x:A, \\Gamma' \\vdash x : A}} " }, { "math_id": 25, "text": " {\\Gamma \\vdash A : K \\qquad\\qquad \\Gamma, x:A \\vdash B : L \\over\n{\\Gamma \\vdash (\\forall x:A . B) : L}} " }, { "math_id": 26, "text": " {\\Gamma \\vdash A : K \\qquad\\qquad \\Gamma, x:A \\vdash N : B \\over \n{\\Gamma \\vdash (\\lambda x:A . N) : (\\forall x:A . B)}} " }, { "math_id": 27, "text": " {\\Gamma \\vdash M : (\\forall x:A . B) \\qquad\\qquad \\Gamma \\vdash N : A \\over \n{\\Gamma \\vdash M N : B[x := N]}} " }, { "math_id": 28, "text": " {\\Gamma \\vdash M : A \\qquad \\qquad A =_\\beta B \\qquad \\qquad \\Gamma \\vdash B : K \n\\over {\\Gamma \\vdash M : B}} " }, { "math_id": 29, "text": "\\forall" }, { "math_id": 30, "text": "\n\\begin{array}{ccll}\nA \\Rightarrow B & \\equiv & \\forall x:A . B & (x \\notin B) \\\\\nA \\wedge B & \\equiv & \\forall C:\\mathbf{P} . (A \\Rightarrow B \\Rightarrow C) \\Rightarrow C & \\\\\nA \\vee B & \\equiv & \\forall C:\\mathbf{P} . (A \\Rightarrow C) \\Rightarrow (B \\Rightarrow C) \\Rightarrow C & \\\\\n\\neg A & \\equiv & \\forall C:\\mathbf{P} . (A \\Rightarrow C) & \\\\\n\\exists x:A.B & \\equiv & \\forall C:\\mathbf{P} . (\\forall x:A.(B \\Rightarrow C)) \\Rightarrow C &\n\\end{array}\n" }, { "math_id": 31, "text": "\\forall A: \\mathbf{P} . A \\Rightarrow A \\Rightarrow A" }, { "math_id": 32, "text": "\\forall A: \\mathbf{P} . \n(A \\Rightarrow A) \\Rightarrow A \\Rightarrow A" }, { "math_id": 33, "text": "A \\times B" }, { "math_id": 34, "text": "A \\wedge B" }, { "math_id": 35, "text": "A + B" }, { "math_id": 36, "text": "A \\vee B" }, { "math_id": 37, "text": "\\forall x:A . B" } ]
https://en.wikipedia.org/wiki?curid=613557
61355981
Ecclesiastes 2
Second chapter of the biblical book Ecclesiastes Ecclesiastes 2 is the second chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called "Qoheleth" ("the Teacher"; "Koheleth" or "Kohelet"), composed probably between the 5th and 2nd centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. The chapter continues the presentation of memoir in verses 12-18 of the previous chapter, with more observations on human efforts in life, related to the question in , "What profit has a man from all his labor, in which he toils under the sun?", and on the sufferings and the enjoyment of life in light of a divine dispensation. Text. The original text was written in Hebrew. This chapter is divided into 26 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. ""Laughter", I said, "is madness. And what does pleasure accomplish?"" The failure of pleasure-seeking (2:1–11). Verse 2. There is a similar sentiment in Proverbs 14:13: "Even in laughter the heart may ache, and rejoicing may end in grief." "Then I looked on all the works that my hands had wrought, and on the labour that I had laboured to do: and, behold, all was vanity and vexation of spirit, and there was no profit under the sun." Verse 11. This conclusion is an echo from the statements in . A sure fate for all (2:12–23). The question in this part – 'is there any preference between wisdom and pleasure-seeking?' – comes out of the problem of life () and two failed remedies ( and 2:1–11). The answer is given in verse 13–14 where on one hand, wisdom is better than pleasure-seeking, but on the other hand both are equally unable to deal with the problem of death. The Apostle Paul offers an answer and consolation in the New Testament: "your labour in the Lord is not in vain" (). The generous God (2:24–26). So far God is only mentioned in , but in this part God is acknowledged as the 'controller of his world, creator of beauty, judge of injustices'. Therefore, the ability to perceive that one should enjoy life is 'a divine dispensation' given only to the righteous people who please God, whereas the remainders have to work on behalf of the righteous. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61355981
6135790
Cevian
Line intersecting both a vertex and opposite edge of a triangle In geometry, a cevian is a line segment which joins a vertex of a triangle to a point on the opposite side of the triangle. Medians and angle bisectors are special cases of cevians. The name "cevian" comes from the Italian mathematician Giovanni Ceva, who proved a well-known theorem about cevians which also bears his name. Length. Stewart's theorem. The length of a cevian can be determined by Stewart's theorem: in the diagram, the cevian length d is given by the formula formula_0 Less commonly, this is also represented (with some rearrangement) by the following mnemonic: formula_1 Median. If the cevian happens to be a median (thus bisecting a side), its length can be determined from the formula formula_2 or formula_3 since formula_4 Hence in this case formula_5 Angle bisector. If the cevian happens to be an angle bisector, its length obeys the formulas formula_6 and formula_7 and formula_8 where the semiperimeter formula_9 The side of length "a" is divided in the proportion "b" : "c". Altitude. If the cevian happens to be an altitude and thus perpendicular to a side, its length obeys the formulas formula_10 and formula_11 where the semiperimeter formula_9 Ratio properties. There are various properties of the ratios of lengths formed by three cevians all passing through the same arbitrary interior point: Referring to the diagram at right, formula_12 The first property is known as Ceva's theorem. The last two properties are equivalent because summing the two equations gives the identity 1 + 1 + 1 = 3. Splitter. A splitter of a triangle is a cevian that bisects the perimeter. The three splitters concur at the Nagel point of the triangle. Area bisectors. Three of the area bisectors of a triangle are its medians, which connect the vertices to the opposite side midpoints. Thus a uniform-density triangle would in principle balance on a razor supporting any of the medians. Angle trisectors. If from each vertex of a triangle two cevians are drawn so as to trisect the angle (divide it into three equal angles), then the six cevians intersect in pairs to form an equilateral triangle, called the Morley triangle. Area of inner triangle formed by cevians. Routh's theorem determines the ratio of the area of a given triangle to that of a triangle formed by the pairwise intersections of three cevians, one from each vertex.
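Stewart's theorem gives a direct way to compute a cevian's length numerically. The sketch below assumes the usual labelling in which the cevian is drawn to the side of length a, whose foot splits it into a segment m adjacent to the side of length c and n = a − m adjacent to the side of length b; the function name is ours.

```python
from math import sqrt, isclose

def cevian_length(a, b, c, m):
    """Length of the cevian to the side of length a, via Stewart's theorem.

    The foot divides side a into segments m and n = a - m, with m adjacent to
    the side of length c and n adjacent to the side of length b:
        b^2 m + c^2 n = a (d^2 + m n).
    """
    n = a - m
    d_squared = (b * b * m + c * c * n) / a - m * n
    return sqrt(d_squared)

# Median of a 3-4-5 right triangle to the hypotenuse (a = 5): half the hypotenuse.
d = cevian_length(a=5, b=4, c=3, m=2.5)
assert isclose(d, 2.5)

# Same value from the general median formula d = sqrt(2 b^2 + 2 c^2 - a^2) / 2.
assert isclose(d, sqrt(2 * 4**2 + 2 * 3**2 - 5**2) / 2)
print(d)
```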
[ { "math_id": 0, "text": "\\,b^2m + c^2n = a(d^2 + mn)." }, { "math_id": 1, "text": "\\underset{\\text{A }man\\text{ and his }dad}{man\\ +\\ dad} = \\!\\!\\!\\!\\!\\! \\underset{\\text{put a }bomb\\text{ in the }sink.}{bmb\\ +\\ cnc}" }, { "math_id": 2, "text": "\\,m(b^2 + c^2) = a(d^2 + m^2)" }, { "math_id": 3, "text": "\\,2(b^2 + c^2) = 4d^2 + a^2" }, { "math_id": 4, "text": "\\,a = 2m." }, { "math_id": 5, "text": "d= \\frac\\sqrt{2 b^2 + 2 c^2 - a^2}2 ." }, { "math_id": 6, "text": "\\,(b + c)^2 = a^2 \\left( \\frac{d^2}{mn} + 1 \\right)," }, { "math_id": 7, "text": "d^2+mn = bc" }, { "math_id": 8, "text": "d= \\frac{2 \\sqrt{bcs(s-a)}}{b+c}" }, { "math_id": 9, "text": "s = \\tfrac{a+b+c}{2}." }, { "math_id": 10, "text": "\\,d^2 = b^2 - n^2 = c^2 - m^2" }, { "math_id": 11, "text": "d=\\frac{2\\sqrt{s(s-a)(s-b)(s-c)}}{a}," }, { "math_id": 12, "text": "\\begin{align}\n& \\frac{\\overline{AF}}{\\overline{FB}} \\cdot \\frac{\\overline{BD}}{\\overline{DC}} \\cdot \\frac{\\overline{CE}}{\\overline{EA}} = 1 \\\\\n& \\\\\n& \\frac{\\overline{AO}}{\\overline{OD}} = \\frac{\\overline{AE}}{\\overline{EC}} + \\frac{\\overline{AF}}{\\overline{FB}}; \\\\\n& \\\\\n& \\frac{\\overline{OD}}{\\overline{AD}} + \\frac{\\overline{OE}}{\\overline{BE}} + \\frac{\\overline{OF}}{\\overline{CF}} = 1; \\\\\n& \\\\\n& \\frac{\\overline{AO}}{\\overline{AD}} + \\frac{\\overline{BO}}{\\overline{BE}} + \\frac{\\overline{CO}}{\\overline{CF}} = 2.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=6135790
61360134
Babai's problem
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Which finite groups are BI-groups? Babai's problem is a problem in algebraic graph theory first proposed in 1979 by László Babai. Babai's problem. Let formula_0 be a finite group, let formula_1 be the set of all irreducible characters of formula_0, let formula_2 be the Cayley graph (or directed Cayley graph) corresponding to a generating subset formula_3 of formula_4, and let formula_5 be a positive integer. Is the set formula_6 an "invariant" of the graph formula_7? In other words, does formula_8 imply that formula_9? BI-group. A finite group formula_0 is called a BI-group (Babai Invariant group) if formula_10 for some inverse closed subsets formula_3 and formula_11 of formula_4 implies that formula_12 for all positive integers formula_5. Open problem. Which finite groups are BI-groups? References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\operatorname{Irr}(G)" }, { "math_id": 2, "text": "\\Gamma=\\operatorname{Cay}(G,S)" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "G\\setminus \\{1\\}" }, { "math_id": 5, "text": "\\nu" }, { "math_id": 6, "text": "M_\\nu^S=\\left\\{\\sum_{s\\in S} \\chi(s)\\;|\\; \\chi\\in \\operatorname{Irr}(G),\\; \\chi(1)=\\nu \\right\\}" }, { "math_id": 7, "text": "\\Gamma" }, { "math_id": 8, "text": "\\operatorname{Cay}(G,S)\\cong \\operatorname{Cay}(G,S')" }, { "math_id": 9, "text": "M_\\nu^S=M_\\nu^{S'}" }, { "math_id": 10, "text": "\\operatorname{Cay}(G,S)\\cong \\operatorname{Cay}(G,T)" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "M_\\nu^S=M_\\nu^T" } ]
https://en.wikipedia.org/wiki?curid=61360134
61361
Boy's surface
Self-intersecting compact surface, an immersion of the real projective plane In geometry, Boy's surface is an immersion of the real projective plane in 3-dimensional space found by Werner Boy in 1901. He discovered it on assignment from David Hilbert to prove that the projective plane "could not" be immersed in 3-space. Boy's surface was first parametrized explicitly by Bernard Morin in 1978. Another parametrization was discovered by Rob Kusner and Robert Bryant. Boy's surface is one of the two possible immersions of the real projective plane which have only a single triple point. Unlike the Roman surface and the cross-cap, it has no other singularities than self-intersections (that is, it has no pinch-points). Parametrization. Boy's surface can be parametrized in several ways. One parametrization, discovered by Rob Kusner and Robert Bryant, is the following: given a complex number "w" whose magnitude is less than or equal to one (formula_0), let formula_1 and then set formula_2 we then obtain the Cartesian coordinates "x", "y", and "z" of a point on the Boy's surface. If one performs an inversion of this parametrization centered on the triple point, one obtains a complete minimal surface with three ends (that's how this parametrization was discovered naturally). This implies that the Bryant–Kusner parametrization of Boy's surfaces is "optimal" in the sense that it is the "least bent" immersion of a projective plane into three-space. Property of Bryant–Kusner parametrization. If "w" is replaced by the negative reciprocal of its complex conjugate, formula_3 then the functions "g"1, "g"2, and "g"3 of "w" are left unchanged. By replacing "w" in terms of its real and imaginary parts "w" = "s" + "it", and expanding resulting parameterization, one may obtain a parameterization of Boy's surface in terms of rational functions of "s" and "t". This shows that Boy's surface is not only an algebraic surface, but even a rational surface. The remark of the preceding paragraph shows that the generic fiber of this parameterization consists of two points (that is that almost every point of Boy's surface may be obtained by two parameters values). Relation to the real projective plane. Let formula_4 be the Bryant–Kusner parametrization of Boy's surface. Then formula_5 This explains the condition formula_6 on the parameter: if formula_7 then formula_8 However, things are slightly more complicated for formula_9 In this case, one has formula_10 This means that, if formula_11 the point of the Boy's surface is obtained from two parameter values: formula_12 In other words, the Boy's surface has been parametrized by a disk such that pairs of diametrically opposite points on the perimeter of the disk are equivalent. This shows that the Boy's surface is the image of the real projective plane, RP2 by a smooth map. That is, the parametrization of the Boy's surface is an immersion of the real projective plane into the Euclidean space. Symmetries. Boy's surface has 3-fold symmetry. This means that it has an axis of discrete rotational symmetry: any 120° turn about this axis will leave the surface looking exactly the same. The Boy's surface can be cut into three mutually congruent pieces. Applications. Boy's surface can be used in sphere eversion, as a half-way model. A half-way model is an immersion of the sphere with the property that a rotation interchanges inside and outside, and so can be employed to evert (turn inside-out) a sphere. 
Boy's (the case p = 3) and Morin's (the case p = 2) surfaces begin a sequence of half-way models with higher symmetry first proposed by George Francis, indexed by the even integers 2p (for p odd, these immersions can be factored through a projective plane). Kusner's parametrization yields all these. Models. Model at Oberwolfach. The Mathematical Research Institute of Oberwolfach has a large model of a Boy's surface outside the entrance, constructed and donated by Mercedes-Benz in January 1991. This model has 3-fold rotational symmetry and minimizes the Willmore energy of the surface. It consists of steel strips which represent the image of a polar coordinate grid under a parameterization given by Robert Bryant and Rob Kusner. The meridians (rays) become ordinary Möbius strips, i.e. twisted by 180 degrees. All but one of the strips corresponding to circles of latitude (radial circles around the origin) are untwisted, while the one corresponding to the boundary of the unit circle is a Möbius strip twisted by three times 180 degrees — as is the emblem of the institute. Model made for Clifford Stoll. A model was made in glass by glassblower Lucas Clarke, with the cooperation of Adam Savage, for presentation to Clifford Stoll. It was featured on Adam Savage's YouTube channel, Tested. All three appeared in the video discussing it.
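The Kusner–Bryant parametrization given earlier is easy to evaluate numerically. The sketch below is a direct transcription of those formulas; the sample point and the check of the P(w) = P(−1/w̄) symmetry (which on the unit circle reduces to P(w) = P(−w)) are our own illustrative choices.

```python
import cmath, math

def boy_surface_point(w):
    """Bryant-Kusner parametrisation of Boy's surface for a complex w with |w| <= 1."""
    d = w**6 + math.sqrt(5) * w**3 - 1          # common denominator
    g1 = -1.5 * (w * (1 - w**4) / d).imag
    g2 = -1.5 * (w * (1 + w**4) / d).real
    g3 = ((1 + w**6) / d).imag - 0.5
    s = g1 * g1 + g2 * g2 + g3 * g3
    return (g1 / s, g2 / s, g3 / s)

# Sample a point on the boundary of the parameter disk and verify the symmetry
# P(w) = P(-1/conj(w)); on the unit circle -1/conj(w) = -w.
w = cmath.exp(1j * 0.7)
p, q = boy_surface_point(w), boy_surface_point(-w)
print(p)
print(all(abs(x - y) < 1e-12 for x, y in zip(p, q)))
```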
[ { "math_id": 0, "text": " \\| w \\| \\le 1" }, { "math_id": 1, "text": "\\begin{align}\n g_1 &= -{3 \\over 2} \\operatorname{Im} \\left[ {w \\left(1 - w^4\\right) \\over w^6 + \\sqrt{5} w^3 - 1} \\right]\\\\[4pt]\n g_2 &= -{3 \\over 2} \\operatorname{Re} \\left[ {w \\left(1 + w^4\\right) \\over w^6 + \\sqrt{5} w^3 - 1} \\right]\\\\[4pt]\n g_3 &= \\operatorname{Im} \\left[ {1 + w^6 \\over w^6 + \\sqrt{5} w^3 - 1} \\right] - {1 \\over 2}\\\\\n\\end{align}" }, { "math_id": 2, "text": "\\begin{pmatrix}x\\\\ y\\\\ z\\end{pmatrix} = \\frac{1}{g_1^2 + g_2^2 + g_3^2} \\begin{pmatrix}g_1\\\\ g_2\\\\ g_3\\end{pmatrix}" }, { "math_id": 3, "text": "-{1 \\over w^\\star}," }, { "math_id": 4, "text": "P(w) = (x(w), y(w), z(w))" }, { "math_id": 5, "text": " P(w) = P\\left(-{1 \\over w^\\star} \\right). " }, { "math_id": 6, "text": "\\left\\| w \\right\\| \\le 1" }, { "math_id": 7, "text": "\\left\\| w \\right\\| < 1," }, { "math_id": 8, "text": " \\left\\| - {1 \\over w^\\star} \\right\\| > 1 ." }, { "math_id": 9, "text": " \\left\\| w \\right\\| = 1." }, { "math_id": 10, "text": "-{1 \\over w^\\star} = -w ." }, { "math_id": 11, "text": " \\left \\| w \\right\\| = 1, " }, { "math_id": 12, "text": "P(w) = P(-w)." } ]
https://en.wikipedia.org/wiki?curid=61361
61369588
Topswops
Mathematical problems devised by John Conway Topswops (and the variants Topdrops, Bottomswops and Bottomdrops) are mathematical problems devised and analysed by the British mathematician John Conway in 1973. In contrast to other games and problems introduced by Conway, these problems have not received much attention from the scientific community. Two famous mathematicians who have contributed to the problem are Martin Gardner and Donald Knuth. Formulation. In each variant of the problem, Conway uses a deck of playing cards. Since only the numerical values of the cards are relevant, only one suit is used. This is mathematically equivalent to a row of integers from formula_0 to formula_1. A shuffled pile of cards is written as formula_2. Topswops. For "topswops" the following algorithm is applied: look at the value formula_3 of the first card, take the first formula_3 cards of the row and reverse their order, and repeat this step until the first card is formula_0. The final configuration of the row always starts with formula_0. The "topswops" problem is occasionally named differently, with naming including "deterministic pancake problem", "topswops", "topswaps", "reverse card shuffle" and "fannkuch". The problem formulated by Conway is the following: Which initial configuration leads to the maximum number of 'swops' before the algorithm terminates? In the literature there are some attempts to find lower and upper bounds for the number of iterations formula_4. Theorem: formula_4 is bounded by formula_5. Proof by Herbert S. Wilf: Consider a permutation formula_3 to formula_6 of the row formula_0 to formula_1. As an example, we consider formula_7. We are specifically interested in numbers which are at 'the correct position'. These are: 2, 5, 9, 10, 12. We define the Wilf number as formula_8. Claim: after each iteration of the algorithm, the Wilf number increases. Proof: We perform one iteration of the algorithm. Every number at 'the correct position' and larger than formula_3 leaves the Wilf number unchanged. The remaining numbers at 'the correct position' will in general not be at 'the correct position' anymore. Nevertheless, the number formula_3 itself is now at the correct position. And since the sum of the first formula_9 Wilf numbers is always smaller than the Wilf number of formula_3, the total Wilf number always increases (by at least 1 per iteration of the algorithm). formula_10 The maximal Wilf number is found when each number is at the correct position. So the maximal Wilf number is formula_11. By refining the proof, the given upper bound can be shown to be a genuine upper bound for the number of iterations. formula_10 Theorem: formula_4 is bounded by the formula_12th Fibonacci number. Proof by Murray S. Klamkin: Suppose that during the algorithm, the first number formula_3 takes on in total formula_13 distinct values. Claim: formula_14. Proof: We prove the claim by mathematical induction. For formula_15, the algorithm directly terminates, hence formula_16. Thus formula_17 and since formula_18 the claim is proven. We now take some formula_19. All formula_13 values that formula_3 takes on are ordered and can be written as: formula_20. Suppose that the largest of these values, which is formula_21, occurs for the first time at position formula_0 during iteration formula_22 of the algorithm. Denote formula_23. During the formula_24'th iteration, we know formula_25 and formula_26. The remaining iterations will always retain formula_26.
formula_10 Suppose we were to exchange formula_29 and formula_21 in iteration formula_22. Then formula_17 and the algorithm terminates; formula_30. During the algorithm, we are sure that both formula_21 and formula_31 have never been at position formula_3, unless formula_32. Suppose formula_32. Then formula_33 since formula_3 takes on at most formula_27 values. So it follows that formula_34. Suppose formula_35. Then formula_36 since formula_3 takes on at most formula_37 distinct values. Using the claim, it follows that formula_38. This proves the theorem. formula_10 Besides these results, Morales and Sudborough have recently proven that the lower bound for formula_4 is a quadratic function in formula_1. The optimal values are, however, still unknown. There have been several attempts to find the optimal values, for example by A. Pepperdine. For rows with 19 or fewer numbers, the exact solution is known. Larger rows only have lower bounds, which is shown on the right. It is yet unknown whether this problem is NP-hard. Topdrops. A similar problem is "topdrops", where the same playing cards are used. In this problem, the first card of the pile is shown (and has value formula_3). Take the first formula_3 cards of the pile, change the order and place them back on the bottom of the pile (which contrasts "topswops", where the cards are placed at the top). This problem allows for infinite loops. As an example, we consider the row 2,1,3,4. By applying the algorithm, the following sequence is obtained: 2,1,3,4 → 3,4,1,2 → 2,1,4,3 → 4,3,1,2, whereafter the original row is found again. Botswops. In this variant, the bottom card of the pile is taken (and again named formula_3). Then the first formula_3 cards of the pile are swapped. Unless the bottom card is the highest card in the pile, nothing happens. This makes the problem uninteresting due to the limited behaviour. Botdrops. The final variant is "botdrops" where the bottom card of the pile is taken (again formula_3). In this variant, the bottom formula_3 cards are swapped.
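The topswops iteration is simple enough that small cases of Conway's question can be checked by brute force. The sketch below counts swops for a single row and exhaustively searches all permutations of a small deck; the function names are ours, and the printed values should reproduce the known maxima for small deck sizes.

```python
from itertools import permutations

def topswops_steps(row):
    """Number of swops before the first element becomes 1."""
    row = list(row)
    steps = 0
    while row[0] != 1:
        k = row[0]
        row[:k] = reversed(row[:k])   # one swop: reverse the first k cards
        steps += 1
    return steps

def max_topswops(n):
    """Brute-force f(n): the maximum number of swops over all starting rows."""
    return max(topswops_steps(p) for p in permutations(range(1, n + 1)))

# First few maxima: 0, 1, 2, 4, 7, 10, 16 for decks of size 1..7.
print([max_topswops(n) for n in range(1, 8)])
```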
[ { "math_id": 0, "text": "1" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "A[1], ..., A[N]" }, { "math_id": 3, "text": "A[1]" }, { "math_id": 4, "text": "f(N)" }, { "math_id": 5, "text": "2^{N-1}" }, { "math_id": 6, "text": "A[N]" }, { "math_id": 7, "text": " 7,2,11,8,5,13,6,1,9,10,3,12,4 " }, { "math_id": 8, "text": "2^{(2-1)}+2^{(5-1)}+2^{(9-1)}+2^{(10-1)}+2^{(12-1)} = 2833" }, { "math_id": 9, "text": "A[1]-1" }, { "math_id": 10, "text": "\\square" }, { "math_id": 11, "text": "2^{N+1}-1" }, { "math_id": 12, "text": "(N+1)" }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": "f(N) \\leq F_{k+1}" }, { "math_id": 15, "text": "k=1" }, { "math_id": 16, "text": "f(N)=0" }, { "math_id": 17, "text": "A[1]=1" }, { "math_id": 18, "text": "F_2 = 1" }, { "math_id": 19, "text": "k \\geq 2" }, { "math_id": 20, "text": "d_1 < ...< d_k" }, { "math_id": 21, "text": "d_k" }, { "math_id": 22, "text": "r" }, { "math_id": 23, "text": "t=A[d_k]" }, { "math_id": 24, "text": "(r+1)" }, { "math_id": 25, "text": "A[1]=t" }, { "math_id": 26, "text": "A[d_k]=d_k" }, { "math_id": 27, "text": "k-1" }, { "math_id": 28, "text": "f(N) - r \\leq F_k" }, { "math_id": 29, "text": "d_1=1" }, { "math_id": 30, "text": "f(N)=r" }, { "math_id": 31, "text": "t" }, { "math_id": 32, "text": "t=1" }, { "math_id": 33, "text": "r \\leq F_k" }, { "math_id": 34, "text": "f(N) \\leq r+1 \\leq F_{k+1}" }, { "math_id": 35, "text": "t>1" }, { "math_id": 36, "text": "r \\leq F_{k-1}" }, { "math_id": 37, "text": "k-2" }, { "math_id": 38, "text": "f(N) \\leq F_k + r \\leq F_{k+1}" } ]
https://en.wikipedia.org/wiki?curid=61369588
61373
Golden ratio base
Positional numeral system Golden ratio base is a non-integer positional numeral system that uses the golden ratio (the irrational number (1 + √5)/2 ≈ 1.61803399, symbolized by the Greek letter φ) as its base. It is sometimes referred to as base-φ, golden mean base, phi-base, or, colloquially, phinary. Any non-negative real number can be represented as a base-φ numeral using only the digits 0 and 1, and avoiding the digit sequence "11" – this is called a "standard form". A base-φ numeral that includes the digit sequence "11" can always be rewritten in standard form, using the algebraic properties of the base φ — most notably that φ¹ + φ⁰ = φ². For instance, 11φ = 100φ. Despite using an irrational number base, when using standard form, all non-negative integers have a unique representation as a terminating (finite) base-φ expansion. The set of numbers which possess a finite base-φ representation is the ring Z[1/φ]; it plays the same role in this numeral system as dyadic rationals play in binary numbers, making it possible to multiply within the system. Other numbers have standard representations in base-φ, with rational numbers having recurring representations. These representations are unique, except that numbers with a terminating expansion also have a non-terminating expansion. For example, 1 = 0.1010101… in base-φ just as 1 = 0.99999… in base-10. Writing golden ratio base numbers in standard form. In the following example of conversion from non-standard to standard form, the notation 1 is used to represent the signed digit −1 (written as an underlined 1 in the formulas below). 211.01φ is not a standard base-φ numeral, since it contains a "11" and additionally a "2" and a signed digit "1" = −1, which are not "0" or "1". To put a numeral in standard form, we may use the following substitutions: formula_0, formula_1, formula_2, formula_3. The substitutions may be applied in any order we like, as the result will be the same. Below, the substitutions applied to the number on the previous line are on the right, the resulting number on the left. formula_4 Any positive number with a non-standard terminating base-φ representation can be uniquely standardized in this manner. If we get to a point where all digits are "0" or "1", except for the first digit being negative, then the number is negative. (The exception to this is when the first digit is negative one and the next two digits are one, like 1111.001=1.001.) This can be converted to the negative of a base-φ representation by negating every digit, standardizing the result, and then marking it as negative. For example, use a minus sign, or some other significance to denote negative numbers. Representing integers as golden ratio base numbers. We can either consider our integer to be the (only) digit of a nonstandard base-φ numeral, and standardize it, or do the following: 1 × 1 = 1, φ × φ = 1 + φ and 1/φ = −1 + φ. Therefore, we can compute ("a" + "b"φ) + ("c" + "d"φ) = (("a" + "c") + ("b" + "d")φ), ("a" + "b"φ) − ("c" + "d"φ) = (("a" − "c") + ("b" − "d")φ) and ("a" + "b"φ) × ("c" + "d"φ) = (("ac" + "bd") + ("ad" + "bc" + "bd")φ). So, using integer values only, we can add, subtract and multiply numbers of the form ("a" + "b"φ), and even represent positive and negative integer powers of φ. ("a" + "b"φ) > ("c" + "d"φ) if and only if 2("a" − "c") − ("d" − "b") > ("d" − "b") × √5. If one side is negative, the other positive, the comparison is trivial. Otherwise, square both sides to get an integer comparison, reversing the comparison direction if both sides were negative. On squaring both sides, the √5 is replaced with the integer 5.
So, using integer values only, we can also compare numbers of the form ("a" + "b"φ). The above procedure will never result in the sequence "11", since 11φ = 100φ, so getting a "11" would mean we missed a "1" prior to the sequence "11". Start, e.g., with the integer 5, with the result so far being ...00000.00000...φ Highest power of φ ≤ 5 is φ³ = 1 + 2φ ≈ 4.236067977 Subtracting this from 5, we have 5 − (1 + 2φ) = 4 − 2φ ≈ 0.763932023..., the result so far being 1000.00000...φ Highest power of φ ≤ 4 − 2φ ≈ 0.763932023... is φ⁻¹ = −1 + 1φ ≈ 0.618033989... Subtracting this from 4 − 2φ ≈ 0.763932023..., we have 4 − 2φ − (−1 + 1φ) = 5 − 3φ ≈ 0.145898034..., the result so far being 1000.10000...φ Highest power of φ ≤ 5 − 3φ ≈ 0.145898034... is φ⁻⁴ = 5 − 3φ ≈ 0.145898034... Subtracting this from 5 − 3φ ≈ 0.145898034..., we have 5 − 3φ − (5 − 3φ) = 0 + 0φ = 0, with the final result being 1000.1001φ. Non-uniqueness. Just as with any base-n system, numbers with a terminating representation have an alternative recurring representation. In base-10, this relies on the observation that 0.999...=1. In base-φ, the numeral 0.1010101... can be seen to be equal to 1 in several ways: formula_5 This non-uniqueness is a feature of the numeration system, since both 1.0000 and 0.101010... are in standard form. In general, the final 1 of any number in base-φ can be replaced with a recurring 01 without changing the value of that number. Representing rational numbers as golden ratio base numbers. Every non-negative rational number can be represented as a recurring base-φ expansion, as can any non-negative element of the field Q[√5] = Q + √5Q, the field generated by the rational numbers and √5. Conversely any recurring (or terminating) base-φ expansion is a non-negative element of Q[√5]. For recurring decimals, the recurring part has been overlined: The justification that a rational gives a recurring expansion is analogous to the equivalent proof for a base-"n" numeration system ("n" = 2,3,4...). Essentially in base-φ long division there are only a finite number of possible remainders, and so, once a remainder repeats, there must be a recurring pattern. For example, the long division looks like this (note that base-φ subtraction may be hard to follow at first): The converse is also true, in that a number with a recurring base-φ representation is an element of the field Q[√5]. This follows from the observation that a recurring representation with period k involves a geometric series with ratio φ⁻ᵏ, which will sum to an element of Q[√5]. Representing irrational numbers of note as golden ratio base numbers. The base-φ representations of some interesting numbers: Addition, subtraction, and multiplication. It is possible to adapt all the standard algorithms of base-10 arithmetic to base-φ arithmetic. There are two approaches to this: Calculate, then convert to standard form. For addition of two base-φ numbers, add each pair of digits, without carry, and then convert the numeral to standard form. For subtraction, subtract each pair of digits without borrow (borrow is a negative amount of carry), and then convert the numeral to standard form. For multiplication, multiply in the typical base-10 manner, without carry, then convert the numeral to standard form. For example, Avoid digits other than 0 and 1. A more "native" approach is to avoid having to add digits 1+1 or to subtract 0 – 1. This is done by reorganising the operands into nonstandard form so that these combinations do not occur.
For example, The subtraction seen here uses a modified form of the standard "trading" algorithm for subtraction. Division. No non-integer rational number can be represented as a finite base-φ number. In other words, all finitely representable base-φ numbers are either integers or (more likely) an irrational element of the quadratic field Q[√5]. Due to long division having only a finite number of possible remainders, a division of two integers (or other numbers with finite base-φ representation) will have a recurring expansion, as demonstrated above. Relationship with Fibonacci coding. Fibonacci coding is a closely related numeration system used for integers. In this system, only digits 0 and 1 are used and the place values of the digits are the Fibonacci numbers. As with base-φ, the digit sequence "11" is avoided by rearranging to a standard form, using the Fibonacci recurrence relation Fₖ₊₁ = Fₖ + Fₖ₋₁. For example, 30 = 1×21 + 0×13 + 1×8 + 0×5 + 0×3 + 0×2 + 1×1 + 0×1 = 10100010fib. Practical usage. It is possible to mix base-φ arithmetic with Fibonacci integer sequences. The sum of the numbers in a general Fibonacci integer sequence that correspond to the nonzero digits in the base-φ number is the product of the base-φ number and the element at the zero-position in the sequence. For example:
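The integer-pair arithmetic described above (working with numbers of the form a + bφ and comparing them exactly) is enough to convert an integer to base-φ greedily, exactly as in the worked example for 5. The sketch below is our own illustration of that procedure; phi_ge_zero implements the exact comparison by squaring, and max_frac is only a safety cap, since the expansion of an integer terminates.

```python
def phi_ge_zero(a, b):
    """Exact test of a + b*phi >= 0 for integers a, b, where phi = (1 + sqrt 5)/2."""
    if a >= 0 and b >= 0:
        return True
    if a <= 0 and b <= 0:
        return a == 0 and b == 0
    if b > 0:                      # a < 0: need b*phi >= -a
        lhs = -2 * a - b
        return lhs <= 0 or lhs * lhs <= 5 * b * b
    lhs = 2 * a + b                # b < 0, a > 0: need a >= (-b)*phi
    return lhs >= 0 and lhs * lhs >= 5 * b * b

def to_base_phi(n, max_frac=40):
    """Greedy standard-form base-phi expansion of a non-negative integer n.

    Values are kept exactly as pairs (a, b) meaning a + b*phi, using
    phi^(k+1) = (b, a + b) and phi^(k-1) = (b - a, a).
    """
    if n == 0:
        return "0"
    k, power = 0, (1, 0)                               # power = phi^k
    while phi_ge_zero(n - power[1], -(power[0] + power[1])):
        k, power = k + 1, (power[1], power[0] + power[1])
    rest, ones, j = (n, 0), [], k
    while rest != (0, 0) and j >= -max_frac:
        if phi_ge_zero(rest[0] - power[0], rest[1] - power[1]):
            rest = (rest[0] - power[0], rest[1] - power[1])
            ones.append(j)
        power, j = (power[1] - power[0], power[0]), j - 1
    int_part = "".join("1" if i in ones else "0" for i in range(k, -1, -1))
    lo = min(ones)
    frac_part = ("." + "".join("1" if i in ones else "0"
                               for i in range(-1, lo - 1, -1))) if lo < 0 else ""
    return int_part + frac_part

print(to_base_phi(5))   # 1000.1001, matching the worked example above
print(to_base_phi(2))   # 10.01
```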
[ { "math_id": 0, "text": "0\\underline{1}0_\\phi=\\underline{1}01_\\phi" }, { "math_id": 1, "text": "1\\underline{1}0_\\phi=001_\\phi" }, { "math_id": 2, "text": "200_\\phi=1001_\\phi" }, { "math_id": 3, "text": "011_\\phi=100_\\phi" }, { "math_id": 4, "text": "\n\\begin{align}\n211.0\\underline{1}0_\\phi = 211&.\\underline{1}01_\\phi &0\\underline{1}0\\rightarrow\\underline{1}01 \\\\\n= 210&.011_\\phi &1\\underline{1}0\\rightarrow001 \\\\\n= 1011&.011_\\phi &200\\rightarrow1001 \\\\\n= 1100&.100_\\phi &011\\rightarrow100 \\\\\n= 10000&.1_\\phi &011\\rightarrow100\\\\\n\\end{align}\n" }, { "math_id": 5, "text": "\\sum_{k=0}^\\infty \\varphi^{-2k}=\\frac{1}{1-\\varphi^{-2}} = \\varphi" } ]
https://en.wikipedia.org/wiki?curid=61373
61379828
Berlekamp–Rabin algorithm
Method in number theory In number theory, Berlekamp's root finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method for finding roots of polynomials over the field formula_0 with formula_1 elements. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary to his algorithm for polynomial factorization over finite fields. The algorithm was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers. History. The method was proposed by Elwyn Berlekamp in his 1970 work on polynomial factorization over finite fields. His original work lacked a formal correctness proof and was later refined and modified for arbitrary finite fields by Michael Rabin. In 1986 René Peralta proposed a similar algorithm for finding square roots in formula_0. In 2000 Peralta's method was generalized for cubic equations. Statement of problem. Let formula_1 be an odd prime number. Consider the polynomial formula_2 over the field formula_3 of remainders modulo formula_1. The algorithm should find all formula_4 in formula_0 such that formula_5 in formula_0. Algorithm. Randomization. Let formula_6. Finding all roots of this polynomial is equivalent to finding its factorization into linear factors. To find such a factorization it is sufficient to split the polynomial into any two non-trivial divisors and factorize them recursively. To do this, consider the polynomial formula_7 where formula_8 is some element of formula_0. If one can represent this polynomial as the product formula_9 then in terms of the initial polynomial it means that formula_10, which provides the needed factorization of formula_11. Classification of formula_0 elements. Due to Euler's criterion, for every monomial formula_12 exactly one of the following properties holds: Thus if formula_17 is not divisible by formula_13, which may be checked separately, then formula_17 is equal to the product of the greatest common divisors formula_18 and formula_19. Berlekamp's method. The property above leads to the following algorithm: If formula_11 is divisible by some non-linear primitive polynomial formula_26 over formula_0, then when calculating formula_25 with formula_27 and formula_28 one will obtain a non-trivial factorization of formula_29, thus the algorithm allows one to find all roots of arbitrary polynomials over formula_0. Modular square root. Consider the equation formula_30 having elements formula_31 and formula_32 as its roots. Solving this equation is equivalent to factoring the polynomial formula_33 over formula_0. In this particular case it is sufficient to calculate only formula_34. For this polynomial exactly one of the following properties will hold: In the third case the GCD is equal to either formula_39 or formula_40, which allows one to write the solution as formula_41. Example. Assume we need to solve the equation formula_42. For this we need to factorize formula_43. Consider some possible values of formula_8: A manual check shows that, indeed, formula_54 and formula_55. Correctness proof. The algorithm finds a factorization of formula_17 in all cases except those in which all the numbers formula_56 are simultaneously quadratic residues or simultaneously non-residues. According to the theory of cyclotomy, the probability of such an event for the case when formula_57 are all residues or non-residues simultaneously (that is, when formula_58 would fail) may be estimated as formula_59, where formula_60 is the number of distinct values in formula_57.
In this way, even in the worst case of formula_61 and formula_62, the probability of error may be estimated as formula_63, and for the modular square root case the error probability is at most formula_64. Complexity. Let a polynomial have degree formula_65. We derive the algorithm's complexity as follows: Thus the whole procedure may be done in formula_72. Using the fast Fourier transform and the Half-GCD algorithm, the algorithm's complexity may be improved to formula_73. For the modular square root case, the degree is formula_74, so the complexity of the algorithm in that case is bounded by formula_75 per iteration. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
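As an illustration of the modular square root case described above, the following Python sketch picks a random shift z, works modulo f_z(x) = (x − z)² − a, computes x^((p−1)/2) by repeated squaring, and reads a root off the resulting linear factor. It assumes an odd prime p and checks Euler's criterion first; the function name and the internal helper are illustrative choices, not from any library, and no optimization is attempted.

import random

def berlekamp_rabin_sqrt(a, p):
    # Find x with x*x = a (mod p) for an odd prime p, or return None
    # if a is a quadratic non-residue.
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:      # Euler's criterion: no square root exists
        return None
    while True:
        z = random.randrange(p)
        if (z * z - a) % p == 0:          # lucky guess: z itself is a root
            return z
        # Work modulo f_z(x) = x^2 - 2*z*x + (z*z - a); residues are c0 + c1*x,
        # and x^2 is reduced using x^2 = 2*z*x - (z*z - a).
        def mul(u, v):
            u0, u1 = u
            v0, v1 = v
            t = u1 * v1 % p
            c0 = (u0 * v0 - t * (z * z - a)) % p
            c1 = (u0 * v1 + u1 * v0 + 2 * z * t) % p
            return (c0, c1)
        # Compute x^((p-1)/2) mod f_z(x) by square-and-multiply.
        result, base, e = (1, 0), (0, 1), (p - 1) // 2
        while e:
            if e & 1:
                result = mul(result, base)
            base = mul(base, base)
            e >>= 1
        r0, r1 = (result[0] - 1) % p, result[1]   # the polynomial x^((p-1)/2) - 1
        if r1 == 0:
            continue                               # degenerate split, retry with a new z
        t_root = (-r0 * pow(r1, -1, p)) % p        # root of the linear polynomial
        if (t_root * t_root - 2 * z * t_root + z * z - a) % p != 0:
            continue                               # the GCD was trivial, retry
        return (t_root - z) % p                    # beta = t - z, so beta^2 = a

print(berlekamp_rabin_sqrt(5, 11))   # prints 7 or 4, as in the example above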
[ { "math_id": 0, "text": "\\mathbb F_p" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "f(x) = a_0 + a_1 x + \\cdots + a_n x^n" }, { "math_id": 3, "text": "\\mathbb F_p\\simeq \\mathbb Z/p\\mathbb Z" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "f(\\lambda)= 0" }, { "math_id": 6, "text": "f(x) = (x-\\lambda_1)(x-\\lambda_2)\\cdots(x-\\lambda_n)" }, { "math_id": 7, "text": "f_z(x)=f(x-z) = (x-\\lambda_1 - z)(x-\\lambda_2 - z) \\cdots (x-\\lambda_n-z)" }, { "math_id": 8, "text": "z" }, { "math_id": 9, "text": "f_z(x)=p_0(x)p_1(x)" }, { "math_id": 10, "text": "f(x) =p_0(x+z)p_1(x+z)" }, { "math_id": 11, "text": "f(x)" }, { "math_id": 12, "text": "(x-\\lambda)" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "\\lambda = 0" }, { "math_id": 15, "text": "g_0(x)=(x^{(p-1)/2}-1)" }, { "math_id": 16, "text": "g_1(x)=(x^{(p-1)/2}+1)" }, { "math_id": 17, "text": "f_z(x)" }, { "math_id": 18, "text": "\\gcd(f_z(x);g_0(x))" }, { "math_id": 19, "text": "\\gcd(f_z(x);g_1(x))" }, { "math_id": 20, "text": "f_z(x) = f(x-z)" }, { "math_id": 21, "text": "x,x^2, x^{2^2},x^{2^3}, x^{2^4}, \\ldots, x^{2^{\\lfloor \\log_2 p \\rfloor}}" }, { "math_id": 22, "text": "x^{(p-1)/2}" }, { "math_id": 23, "text": "f_z(x)" }, { "math_id": 24, "text": "x^{(p-1)/2} \\not \\equiv \\pm 1 \\pmod{f_z(x)}" }, { "math_id": 25, "text": "\\gcd" }, { "math_id": 26, "text": "g(x)" }, { "math_id": 27, "text": "g_0(x)" }, { "math_id": 28, "text": "g_1(x)" }, { "math_id": 29, "text": "f_z(x)/g_z(x)" }, { "math_id": 30, "text": "x^2 \\equiv a \\pmod{p}" }, { "math_id": 31, "text": "\\beta" }, { "math_id": 32, "text": "-\\beta" }, { "math_id": 33, "text": "f(x) = x^2-a=(x-\\beta)(x+\\beta)" }, { "math_id": 34, "text": "\\gcd(f_z(x); g_0(x))" }, { "math_id": 35, "text": "1" }, { "math_id": 36, "text": "z+\\beta" }, { "math_id": 37, "text": "z-\\beta" }, { "math_id": 38, "text": "(x-t)" }, { "math_id": 39, "text": "(x-z-\\beta)" }, { "math_id": 40, "text": "(x-z+\\beta)" }, { "math_id": 41, "text": "\\beta = (t - z) \\pmod{p}" }, { "math_id": 42, "text": "x^2 \\equiv 5\\pmod{11}" }, { "math_id": 43, "text": "f(x)=x^2-5=(x-\\beta)(x+\\beta)" }, { "math_id": 44, "text": "z=3" }, { "math_id": 45, "text": "f_z(x) = (x-3)^2 - 5 = x^2 - 6x + 4" }, { "math_id": 46, "text": "\\gcd(x^2 - 6x + 4 ; x^5 - 1) = 1" }, { "math_id": 47, "text": "3 \\pm \\beta" }, { "math_id": 48, "text": "z=2" }, { "math_id": 49, "text": "f_z(x) = (x-2)^2 - 5 = x^2 - 4x - 1" }, { "math_id": 50, "text": "\\gcd( x^2 - 4x - 1 ; x^5 - 1)\\equiv x - 9 \\pmod{11}" }, { "math_id": 51, "text": "x - 9 = x - 2 - \\beta" }, { "math_id": 52, "text": "\\beta \\equiv 7 \\pmod{11}" }, { "math_id": 53, "text": "-\\beta \\equiv -7 \\equiv 4 \\pmod{11}" }, { "math_id": 54, "text": "7^2 \\equiv 49 \\equiv 5\\pmod{11}" }, { "math_id": 55, "text": "4^2\\equiv 16 \\equiv 5\\pmod{11}" }, { "math_id": 56, "text": "z+\\lambda_1, z+\\lambda_2, \\ldots, z+\\lambda_n" }, { "math_id": 57, "text": "\\lambda_1, \\ldots, \\lambda_n" }, { "math_id": 58, "text": "z=0" }, { "math_id": 59, "text": "2^{-k}" }, { "math_id": 60, "text": "k" }, { "math_id": 61, "text": "k=1" }, { "math_id": 62, "text": "f(x)=(x-\\lambda)^n" }, { "math_id": 63, "text": "1/2" }, { "math_id": 64, "text": "1/4" }, { "math_id": 65, "text": "n" }, { "math_id": 66, "text": "(x-z)^k = \\sum\\limits_{i=0}^k \\binom{k}{i} (-z)^{k-i}x^i" }, { "math_id": 67, "text": "f(x-z)" }, { "math_id": 68, "text": "O(n^2)" }, { "math_id": 69, "text": "O(n^2)" }, { "math_id": 70, "text": "x^{2^k} \\bmod f_z(x)" 
}, { "math_id": 71, "text": "O(n^2 \\log p)" }, { "math_id": 72, "text": "O(n^2 \\log p)" }, { "math_id": 73, "text": "O(n \\log n \\log pn)" }, { "math_id": 74, "text": "n = 2" }, { "math_id": 75, "text": "O(\\log p)" } ]
https://en.wikipedia.org/wiki?curid=61379828
6138
Conjecture
Proposition in mathematics that is unproven In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them. Resolution of conjectures. Proof. Formal mathematics is based on "provable" truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 1012 (over a trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample. Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results. A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details. One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. In some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software. When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others. Disproof. Conjectures disproven through counterexample are sometimes referred to as "false conjectures" (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller. Independent conjectures. Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry). 
In this case, if a proof uses this statement, researchers will often look for a new proof that "does not" require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular. Conditional proofs. Sometimes, a conjecture is called a "hypothesis" when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called "conditional proofs": the conjectures assumed appear in the hypotheses of the theorem, for the time being. These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type. Important examples. Fermat's Last Theorem. In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers formula_0, "formula_1", and "formula_2" can satisfy the equation "formula_3" for any integer value of "formula_4" greater than two. This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of "Arithmetica", where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the "Guinness Book of World Records" for "most difficult mathematical problems". Four color theorem. In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a "map", no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called "adjacent" if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not. Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. 
A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852. The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counterexample). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists, because any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain. Hauptvermutung. The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908 by Steinitz and Tietze. This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion. The manifold version is true in dimensions "m" ≤ 3. The cases "m" = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively. Weil conjectures. In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields. A variety "V" over a finite field with "q" elements has a finite number of rational points, as well as points over every finite field with "q""k" elements containing that field. The generating function has coefficients derived from the numbers "N""k" of points over the (essentially unique) field with "q""k" elements. Weil conjectured that such "zeta-functions" should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Bernard Dwork, the functional equation by Alexander Grothendieck, and the analogue of the Riemann hypothesis by Pierre Deligne. Poincaré conjecture. In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is "homotopy equivalent" to the 3-sphere, then it is necessarily "homeomorphic" to it.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time. After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called "Ricci flow with surgery" to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct. The Poincaré conjecture, before being proven, was one of the most important open questions in topology. Riemann hypothesis. In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields. The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems. P versus NP problem. The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. In other sciences. Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "b" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "a^n + b^n = c^n" }, { "math_id": 4, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=6138
61383013
MRF optimization via dual decomposition
In dual decomposition a problem is broken into smaller subproblems and a solution to the relaxed problem is found. This method can be employed for MRF optimization. Dual decomposition is applied to Markov logic programs as an inference technique. Background. Discrete MRF optimization (inference) is very important in machine learning and computer vision, and can be realized on CUDA graphics processing units. Consider a graph formula_0 with nodes formula_1 and edges formula_2. The goal is to assign a label formula_3 to each formula_4 so that the MRF energy is minimized: (1) formula_5 Major MRF optimization methods are based on graph cuts or message passing. They rely on the following integer linear programming formulation (2) formula_6 In many applications, the MRF variables are {0,1}-variables that satisfy: formula_7 formula_8 label formula_9 is assigned to formula_10, while formula_11, labels formula_12 are assigned to formula_13. Dual decomposition. The main idea behind decomposition is surprisingly simple: A sample problem to decompose: formula_14 where formula_15 In this problem, separately minimizing every single formula_16 over formula_17 is easy, but minimizing their sum is a complex problem. So the problem is decomposed using auxiliary variables formula_18 and the problem becomes as follows: formula_19 where formula_20 Now we can relax the constraints by multipliers formula_21, which gives us the following Lagrangian dual function: formula_22 Now we eliminate formula_17 from the dual function by minimizing over formula_17, and the dual function becomes: formula_23 We can set up a Lagrangian dual problem: (3) formula_24 (the master problem) (4) formula_25 where formula_26 (the slave problems) MRF optimization via dual decomposition. The original MRF optimization problem is NP-hard and we need to transform it into something easier. formula_27 is a set of sub-trees of the graph formula_28 whose trees cover all nodes and edges of the main graph. The MRFs defined for every tree formula_29 in formula_27 will be smaller. The vector of MRF parameters is formula_30 and the vector of MRF variables is formula_31; these are simply smaller than the original MRF vectors formula_32. For all vectors formula_30 we will have the following: (5) formula_33 where formula_34 and formula_35 denote all trees of formula_27 that contain node formula_10 and edge formula_36, respectively. We can simply write: (6) formula_37 And our constraints will be: (7) formula_38 Our original MRF problem will become: (8) formula_39 where formula_40 and formula_41 And we have the dual problem we were seeking: (9) formula_42 (the master problem) where each function formula_43 is defined as: (10) formula_44 where formula_45 (the slave problems) Theoretical properties. Theorem 1. Lagrangian relaxation (9) is equivalent to the LP relaxation of (2). formula_46 Theorem 2. If the sequence of multipliers formula_47 satisfies formula_48 then the algorithm converges to the optimal solution of (9). Theorem 3. The distance of the current solution formula_49 to the optimal solution formula_50 decreases at every iteration. Theorem 4. Any solution obtained by the method satisfies the WTA (weak tree agreement) condition. Theorem 5. For binary MRFs with sub-modular energies, the method computes a globally optimal solution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
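As a concrete, minimal illustration of the master and slave structure above (not the full tree-structured MRF algorithm), the following Python sketch applies projected subgradient ascent to a toy problem with a single discrete variable shared by two slave problems. The cost vectors f1 and f2 and the diminishing step size 1/(t+1) are arbitrary choices made for the example.

import numpy as np

# Toy dual decomposition: minimize f1(x) + f2(x) over a discrete label x.
# The shared variable is duplicated (x1 for slave 1, x2 for slave 2) and the
# coupling constraint x1 = x2 is relaxed with a multiplier vector lam, giving
# the dual function g(lam) = min_x1 [f1(x1) + lam[x1]] + min_x2 [f2(x2) - lam[x2]].

f1 = np.array([3.0, 1.0, 4.0])      # unary costs of slave 1 (illustrative values)
f2 = np.array([2.0, 5.0, 0.0])      # unary costs of slave 2

lam = np.zeros_like(f1)
for t in range(100):
    x1 = int(np.argmin(f1 + lam))   # solve slave 1
    x2 = int(np.argmin(f2 - lam))   # solve slave 2
    dual_value = (f1 + lam)[x1] + (f2 - lam)[x2]
    if x1 == x2:                    # slaves agree: the subgradient is zero, stop
        break
    # A subgradient of g at lam is e_{x1} - e_{x2}; take a diminishing ascent step.
    grad = np.zeros_like(lam)
    grad[x1] += 1.0
    grad[x2] -= 1.0
    lam += grad / (t + 1)

print("agreed label:", x1, " dual value:", dual_value)
print("primal optimum:", np.min(f1 + f2))   # the two values coincide for this toy problem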
[ { "math_id": 0, "text": "G = (V,E)" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "l_p" }, { "math_id": 4, "text": "p \\in V" }, { "math_id": 5, "text": "\\min \\Sigma_{p\\in V} \\theta_p(l_p) + \\Sigma_{pq\\in \\varepsilon} \\theta_{pq}(l_p)(l_q)" }, { "math_id": 6, "text": "\\min_x E(\\theta, x)= \\theta.x = \\sum_{p \\in V} \\theta_p.x_p+\\sum_{pq \\in \\varepsilon} \\theta_{pq}.x_{pq}" }, { "math_id": 7, "text": "x_p(l)=1" }, { "math_id": 8, "text": "\\Leftrightarrow" }, { "math_id": 9, "text": "l" }, { "math_id": 10, "text": "p" }, { "math_id": 11, "text": "x_{pq}(l,l^\\prime) =1" }, { "math_id": 12, "text": "l,l^\\prime" }, { "math_id": 13, "text": "p,q" }, { "math_id": 14, "text": "\\min_x \\Sigma_{i} f^i(x)" }, { "math_id": 15, "text": "x \\in C" }, { "math_id": 16, "text": "f^i(x)" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "\\{x^i\\}" }, { "math_id": 19, "text": "\\min_{\\{x^i\\},x} \\Sigma_i f^i(x^i)" }, { "math_id": 20, "text": "x^i \\in C, x^i=x" }, { "math_id": 21, "text": "\\{\\lambda^i\\}" }, { "math_id": 22, "text": "g(\\{\\lambda^i\\})\n=\\min_{\\{x^i \\in C\\},x} \\Sigma_i f^i(x^i) + \\Sigma_i \\lambda^i.(x^i-x)=\\min_{\\{x^i \\in C\\},x} \\Sigma_i [f^i(x^i)+\\lambda^i.x^i]-(\\Sigma_i \\lambda^i)x" }, { "math_id": 23, "text": "g(\\{\\lambda^i\\})=\\min_{\\{x^i \\in C\\}}\\Sigma_i[f^i(x^i) + \\lambda^i.x^i]" }, { "math_id": 24, "text": "\\max_{\\{\\lambda^i\\}\\in \\Lambda} g({\\lambda^i})=\\Sigma_i g^i(x^i)," }, { "math_id": 25, "text": "g^i(x^i)=min_{x^i} f^i(x^i) +\\lambda^i.x^i" }, { "math_id": 26, "text": "x^i \\in C" }, { "math_id": 27, "text": "\\tau" }, { "math_id": 28, "text": "G" }, { "math_id": 29, "text": "T" }, { "math_id": 30, "text": "\\theta^T" }, { "math_id": 31, "text": "x^T" }, { "math_id": 32, "text": "\\theta, x" }, { "math_id": 33, "text": "\\sum_{T \\in \\tau(p)} \\theta_p^T= \\theta_p, \\sum_{T \\in \\tau(pq)} \\theta_{pq}^T= \\theta_{pq}." }, { "math_id": 34, "text": "\\tau(p)" }, { "math_id": 35, "text": "\\tau(pq)" }, { "math_id": 36, "text": "pq" }, { "math_id": 37, "text": "E(\\theta,x)=\\sum_{T \\in \\tau} E(\\theta^T, x^T)" }, { "math_id": 38, "text": "x^T \\in \\chi^T, x^T=x_{|T},\\forall T \\in \\tau" }, { "math_id": 39, "text": "\\min_{\\{x^T\\},x} \\Sigma_{T \\in \\tau} E(\\theta^T, x^T)\n" }, { "math_id": 40, "text": "x^T \\in \\chi^T, \\forall T \\in \\tau" }, { "math_id": 41, "text": "x^T \\in x_{|T}, \\forall T \\in \\tau" }, { "math_id": 42, "text": "\\max_{\\{\\lambda^T\\} \\in \\Lambda} g(\\{\\lambda^T\\})= \\sum_{T \\in \\tau} g^T(\\lambda^T)," }, { "math_id": 43, "text": "g^T(.)" }, { "math_id": 44, "text": "g^T(\\lambda^T)=\\min_{x^T} E(\\theta^T+ \\lambda^T, x^T)" }, { "math_id": 45, "text": "x^T \\in \\chi^T" }, { "math_id": 46, "text": "\\min_{\\{x^T\\},x} \\{E(x, \\theta)|x_p^T=s_p,x^T \\in \\text{CONVEXHULL}(\\chi^T)\\}" }, { "math_id": 47, "text": "\\{\\alpha_t\\}" }, { "math_id": 48, "text": "\\alpha_t \\geq 0, \\lim_{t \\to \\infin} \\alpha_t = 0, \\sum_{t=0}^\\infin \\alpha_t=\\infin" }, { "math_id": 49, "text": "\\{\\theta^T\\}" }, { "math_id": 50, "text": "\\{\\bar \\theta^T\\}" } ]
https://en.wikipedia.org/wiki?curid=61383013
61384892
Spectral imaging (radiography)
Spectral imaging is an umbrella term for energy-resolved X-ray imaging in medicine. The technique makes use of the energy dependence of X-ray attenuation to either increase the contrast-to-noise ratio, or to provide quantitative image data and reduce image artefacts by so-called material decomposition. Dual-energy imaging, i.e. imaging at two energy levels, is a special case of spectral imaging and is still the most widely used terminology, but the terms "spectral imaging" and "spectral CT" have been coined to acknowledge the fact that photon-counting detectors have the potential for measurements at a larger number of energy levels. Background. The first medical application of spectral imaging appeared in 1953 when B. Jacobson at the Karolinska University Hospital, inspired by X-ray absorption spectroscopy, presented a method called "dichromography" to measure the concentration of iodine in X-ray images. In the 1970s, spectral computed tomography (CT) with exposures at two different voltage levels was proposed by G.N. Hounsfield in his landmark CT paper. The technology evolved rapidly during the 1970s and 1980s, but technical limitations, such as motion artifacts, long held back widespread clinical use. In recent years, however, two fields of technological breakthrough have spurred a renewed interest in energy-resolved imaging. Firstly, single-scan energy-resolved CT was introduced for routine clinical use in 2006 and is now available from several major manufacturers, which has resulted in a large and expanding number of clinical applications. Secondly, energy-resolving photon-counting detectors are starting to become available for clinical practice; the first commercial photon-counting system was introduced for mammography in 2003, and CT systems are on the verge of being feasible for routine clinical use. Spectral image acquisition. An energy-resolved imaging system probes the object at two or more photon energy levels. In a generic imaging system, the projected signal in a detector element at energy level formula_0 is where formula_1 is the number of incident photons, formula_2 is the normalized incident energy spectrum, and formula_3 is the detector response function. Linear attenuation coefficients and integrated thicknesses for the materials that make up the object are denoted formula_4 and formula_5 (attenuation according to the Lambert–Beer law). Two conceivable ways of acquiring spectral information are to either vary formula_6 with formula_7, or to have formula_7-specific formula_3, here denoted incidence-based and detection-based methods, respectively. Most elements appearing naturally in human bodies are of low atomic number and lack absorption edges in the diagnostic X-ray energy range. The two dominating X-ray interaction effects are then Compton scattering and the photo-electric effect, which can be assumed to be smooth, with separable and independent material and energy dependences. The linear attenuation coefficients can hence be expanded as In contrast-enhanced imaging, high-atomic-number contrast agents with K absorption edges in the diagnostic energy range may be present in the body. K-edge energies are material specific, which means that the energy dependence of the photo-electric effect is no longer separable from the material properties, and an additional term can be added to Eq. (2) according to where formula_8 and formula_9 are the material coefficient and energy dependency of contrast-agent material formula_10. Energy weighting. Summing the energy bins in Eq.
(1) (formula_11) yields a conventional non-energy-resolved image, but because X-ray contrast varies with energy, a weighted sum (formula_12) optimizes the contrast-to-noise ratio (CNR) and enables a higher CNR at a constant patient dose or a lower dose at a constant CNR. The benefit of energy weighting is highest where the photo-electric effect dominates and lower in high-energy regions dominated by Compton scattering (with weaker energy dependence). Energy weighting was pioneered by Tapiovaara and Wagner and has subsequently been refined for projection imaging and CT, with CNR improvements ranging from a few percent up to tens of percent for heavier elements and an ideal CT detector. An example with a realistic detector was presented by Berglund et al., who modified a photon-counting mammography system and raised the CNR of clinical images by 2.2–5.2%. Material decomposition. Equation (1) can be treated as a system of equations with material thicknesses as unknowns, a technique broadly referred to as material decomposition. System properties and linear attenuation coefficients need to be known, either explicitly (by modelling) or implicitly (by calibration). In CT, implementing material decomposition post reconstruction (image-based decomposition) does not require coinciding projection data, but the decomposed images may suffer from beam-hardening artefacts because the reconstruction algorithm is generally non-reversible. Applying material decomposition directly in projection space instead (projection-based decomposition) can in principle eliminate beam-hardening artefacts because the decomposed projections are quantitative, but the technique requires coinciding projection data such as from a detection-based method. In the absence of K-edge contrast agents and any other information about the object (e.g. thickness), the limited number of independent energy dependences according to Eq. (2) means that the system of equations can only be solved for two unknowns, and measurements at two energies (formula_13) are necessary and sufficient for a unique solution of formula_14 and formula_15. Materials 1 and 2 are referred to as basis materials and are assumed to make up the object; any other material present in the object will be represented by a linear combination of the two basis materials. Material-decomposed images can be used to differentiate between healthy and malignant tissue, such as microcalcifications in the breast, ribs and pulmonary nodules, cysts, solid tumors and normal breast tissue, posttraumatic bone bruises (bone marrow edema) and the bone itself, different types of renal calculi (stones), and gout in the joints. The technique can also be used to characterize healthy tissue, such as the composition of breast tissue (an independent risk factor for breast cancer) and bone-mineral density (an independent risk factor for fractures and all-cause mortality). Finally, virtual autopsies with spectral imaging can facilitate detection and characterization of bullets, knife tips, glass or shell fragments etc. The basis-material representation can be readily converted to images showing the amounts of photoelectric and Compton interactions by invoking Eq. (2), and to images of effective-atomic-number and electron density distributions. As the basis-material representation is sufficient to describe the linear attenuation of the object, it is possible to calculate virtual monochromatic images, which is useful for optimizing the CNR to a certain imaging task, analogous to energy weighting.
For instance, the CNR between grey and white brain matter is maximized at medium energies, whereas artefacts caused by photon starvation are minimized at higher virtual energies. K-edge imaging. In contrast-enhanced imaging, additional unknowns may be added to the system of equations according to Eq. (3) if one or several K absorption edges are present in the imaged energy range, a technique often referred to as K-edge imaging. With one K-edge contrast agent, measurements at three energies (formula_16) are necessary and sufficient for a unique solution, two contrast agents can be differentiated with four energy bins (formula_17), etc. K-edge imaging can be used to either enhance and quantify, or to suppress a contrast agent. Enhancement of contrast agents can be used for improved detection and diagnosis of tumors, which exhibit increased retention of contrast agents. Further, differentiation between iodine and calcium is often challenging in conventional CT, but energy-resolved imaging can facilitate many procedures by, for instance, suppressing bone contrast and improving characterization of atherosclerotic plaque. Suppression of contrast agents is employed in so-called virtual unenhanced or virtual non-contrast (VNC) images. VNC images are free from iodine staining (contrast-agent residuals), can save dose to the patient by reducing the need for an additional non-contrast acquisition, can improve radiotherapy dose calculations from CT images, and can help in distinguishing between contrast agent and foreign objects. Most studies of contrast-enhanced spectral imaging have used iodine, which is a well-established contrast agent, but the K edge of iodine at 33.2 keV is not optimal for all applications and some patients are hypersensitive to iodine. Other contrast agents have therefore been proposed, such as gadolinium (K edge at 50.2 keV), nanoparticle silver (K edge at 25.5 keV), zirconium (K edge at 18.0 keV), and gold (K edge at 80.7 keV). Some contrast agents can be targeted, which opens up possibilities for molecular imaging, and using several contrast agents with different K-edge energies in combination with photon-counting detectors with a corresponding number of energy thresholds enable multi-agent imaging. Technologies and methods. Incidence-based methods obtain spectral information by acquiring several images at different tube voltage settings, possibly in combination with different filtering. Temporal differences between the exposures (e.g. patient motion, variation in contrast-agent concentration) for long limited practical implementations, but dual-source CT and subsequently rapid kV switching have now virtually eliminated the time between exposures. Splitting the incident radiation of a scanning system into two beams with different filtration is another way to quasi-simultaneously acquire data at two energy levels. Detection-based methods instead obtain spectral information by splitting the spectrum after interaction in the object. So-called sandwich detectors consist of two (or more) detector layers, where the top layer preferentially detects low-energy photons and the bottom layer detects a harder spectrum. Detection-based methods enable projection-based material decomposition because the two energy levels measured by the detector represent identical ray paths. Further, spectral information is available from every scan, which has work-flow advantages. The currently most advanced detection-based method is based on photon-counting detectors. 
As opposed to conventional detectors, which integrate all photon interactions over the exposure time, photon-counting detectors are fast enough to register and measure the energy of single photon events. Hence, the number of energy bins and the spectral separation are not determined by physical properties of the system (detector layers, source / filtration etc.), but by the detector electronics, which increases efficiency and the degrees of freedom, and enables the elimination of electronic noise. The first commercial photon-counting application was the MicroDose mammography system, introduced by Sectra Mamea in 2003 (later acquired by Philips), and spectral imaging was launched on this platform in 2013. The MicroDose system was based on silicon strip detectors, a technology that has subsequently been refined for CT with up to eight energy bins. Silicon as a sensor material benefits from high charge-collection efficiency, ready availability of high-quality high-purity silicon crystals, and established methods for test and assembly. The relatively low photo-electric cross section can be compensated for by arranging the silicon wafers edge on, which also enables depth segments. Cadmium telluride (CdTe) and cadmium–zinc telluride (CZT) are also being investigated as sensor materials. The higher atomic number of these materials results in a higher photo-electric cross section, which is advantageous, but the higher fluorescent yield degrades spectral response and induces cross talk. Manufacturing of macro-sized crystals of these materials has so far posed practical challenges and leads to charge trapping and long-term polarization effects (build-up of space charge). Other solid-state materials, such as gallium arsenide and mercuric iodide, as well as gas detectors, are currently quite far from clinical implementation. The main intrinsic challenge of photon-counting detectors for medical imaging is pulse pileup, which results in lost counts and reduced energy resolution because several pulses are counted as one. Pileup will always be present in photon-counting detectors because of the Poisson distribution of incident photons, but detector speeds are now so high that acceptable pileup levels at CT count rates begin to come within reach. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
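The following Python sketch illustrates projection-based two-material decomposition in an idealized setting: two monoenergetic beams, known linear attenuation coefficients for the two basis materials, and noise-free measurements. The numerical values of the coefficients and thicknesses are illustrative only and are not taken from tabulated data; with real polyenergetic spectra, the forward model of Eq. (1) has to be inverted by calibration or model fitting instead of a single linear solve.

import numpy as np

# Rows: two acquisition energies (e.g. a low- and a high-energy bin).
# Columns: two basis materials (e.g. soft tissue and bone).
# Entries: linear attenuation coefficients in 1/cm (illustrative values).
M = np.array([[0.25, 0.60],
              [0.18, 0.30]])

t_true = np.array([12.0, 1.5])        # basis-material thicknesses in cm (ground truth)

# Forward model per energy: I = I0 * exp(-(mu1*t1 + mu2*t2)),
# so the measured log-attenuation is A = -ln(I/I0) = M @ t.
A = M @ t_true

# Material decomposition: solve the 2x2 linear system for the thicknesses.
t_est = np.linalg.solve(M, A)
print("estimated thicknesses [cm]:", t_est)    # recovers [12.0, 1.5]

# A virtual monochromatic line integral at some energy E is mu1(E)*t1 + mu2(E)*t2;
# here a third, made-up pair of coefficients stands in for mu(E).
mu_E = np.array([0.22, 0.45])
print("virtual monochromatic line integral:", mu_E @ t_est)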
[ { "math_id": 0, "text": "\\Omega \\in \\{E_1, E_2, E_3,\\ldots\\}" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "\\Phi" }, { "math_id": 3, "text": "\\Gamma" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "q\\times\\Phi" }, { "math_id": 7, "text": "\\Omega" }, { "math_id": 8, "text": "a_K" }, { "math_id": 9, "text": "f_K" }, { "math_id": 10, "text": "K" }, { "math_id": 11, "text": "n=\\sum n_\\Omega" }, { "math_id": 12, "text": "n=\\sum w_\\Omega\\times n_\\Omega" }, { "math_id": 13, "text": "|\\Omega|=2" }, { "math_id": 14, "text": "t_1" }, { "math_id": 15, "text": "t_2" }, { "math_id": 16, "text": "|\\Omega|=3" }, { "math_id": 17, "text": "|\\Omega|=4" } ]
https://en.wikipedia.org/wiki?curid=61384892
61385726
Hurdle model
Class of statistical models A hurdle model is a class of statistical models where a random variable is modelled using two parts: the first part models the probability of attaining the value 0, and the second part models the probability of the non-zero values. The use of hurdle models is often motivated by an excess of zeroes in the data that is not sufficiently accounted for in more standard statistical models. In a hurdle model, a random variable "x" is modelled as formula_0 formula_1 where formula_2 is a truncated probability distribution function, truncated at 0. Hurdle models were introduced by John G. Cragg in 1971, where the non-zero values of "x" were modelled using a normal model, and a probit model was used to model the zeros. The probit part of the model was said to model the presence of "hurdles" that must be overcome for "x" to attain non-zero values, hence the designation "hurdle model". Hurdle models were later developed for count data, with Poisson, geometric, and negative binomial models for the non-zero counts. Relationship with zero-inflated models. Hurdle models differ from zero-inflated models in that zero-inflated models model the zeros using a two-component mixture model. With a mixture model, the probability of the variable being zero is determined by both the main distribution function formula_3 and the mixture weight formula_4. Specifically, a zero-inflated model for a random variable "x" is formula_5 formula_6 where formula_4 is the mixture weight that determines the amount of zero-inflation. A zero-inflated model can only increase the probability of formula_7, but this is not a restriction in hurdle models. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
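To make the two-part structure concrete, the following Python sketch simulates data from a hurdle model with a zero-truncated Poisson distribution for the positive part, and then estimates the two parts separately: the zero probability by the sample proportion of zeros, and the truncated-Poisson rate by numerical maximum likelihood. The parameter values and function names are illustrative choices and are not tied to any established package interface.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(0)
theta, lam, n = 0.4, 2.5, 10_000      # P(x = 0), truncated-Poisson rate, sample size

# Simulate: with probability theta emit 0, otherwise draw a zero-truncated Poisson.
def draw_positive():
    while True:
        k = rng.poisson(lam)
        if k > 0:
            return k

x = np.array([0 if rng.random() < theta else draw_positive() for _ in range(n)])

# Part 1: the hurdle. The MLE of P(x = 0) is just the observed zero fraction.
theta_hat = np.mean(x == 0)

# Part 2: zero-truncated Poisson MLE for the positive observations,
# with pmf p(k) = exp(-lam) * lam^k / (k! * (1 - exp(-lam))) for k >= 1.
pos = x[x > 0]

def neg_log_lik(l):
    return -np.sum(pos * np.log(l) - l - gammaln(pos + 1) - np.log1p(-np.exp(-l)))

lam_hat = minimize_scalar(neg_log_lik, bounds=(1e-6, 50), method="bounded").x

print(f"theta_hat = {theta_hat:.3f} (true {theta}), lam_hat = {lam_hat:.3f} (true {lam})")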
[ { "math_id": 0, "text": " \\Pr (x = 0) = \\theta " }, { "math_id": 1, "text": " \\Pr (x \\ne 0) = p_{x \\ne 0}(x) " }, { "math_id": 2, "text": "p_{x \\ne 0}(x)" }, { "math_id": 3, "text": "p(x = 0)" }, { "math_id": 4, "text": "\\pi" }, { "math_id": 5, "text": " \\Pr (x = 0) = \\pi + (1 - \\pi) \\times p(x = 0) " }, { "math_id": 6, "text": " \\Pr (x = h_i) = (1 - \\pi) \\times p(x = h_i) " }, { "math_id": 7, "text": " \\Pr (x = 0)" } ]
https://en.wikipedia.org/wiki?curid=61385726
6138641
Blast wave
Increased fluid pressure and flow from an explosion In fluid dynamics, a blast wave is the increased pressure and flow resulting from the deposition of a large amount of energy in a small, very localised volume. The flow field can be approximated as a lead shock wave, followed by a similar subsonic flow field. In simpler terms, a blast wave is an area of pressure expanding supersonically outward from an explosive core. It has a leading shock front of compressed gases. The blast wave is followed by a blast wind of negative gauge pressure, which sucks items back in towards the center. The blast wave is harmful especially to objects very close to the center or at a location of constructive interference. High explosives that detonate generate blast waves. Sources. High-order explosives (HE) are more powerful than low-order explosives (LE). HE detonate to produce a defining supersonic over-pressurization shock wave. Sources of HE include trinitrotoluene (TNT), C-4, Semtex, nitroglycerin, and ammonium nitrate fuel oil (ANFO). LE deflagrate to create a subsonic explosion and lack HE's over-pressurization wave. Sources of LE include pipe bombs, gunpowder, and most pure petroleum-based incendiary bombs such as Molotov cocktails or aircraft improvised as guided missiles. HE and LE induce different injury patterns. Only HE produce true blast waves. History. The classic flow solution—the so-called Taylor–von Neumann–Sedov blast wave solution—was independently devised by John von Neumann and British mathematician Geoffrey Ingram Taylor during World War II. After the war, the similarity solution was published by three other authors—L. I. Sedov, R. Latter, and J. Lockwood-Taylor—who had discovered it independently. Since the early theoretical work, both theoretical and experimental studies of blast waves have been ongoing. Characteristics and properties. The simplest form of a blast wave has been described and termed the Friedlander waveform. It occurs when a high explosive detonates in a free field: that is, with no surfaces nearby with which it can interact. Blast waves have properties predicted by the physics of waves. For example, they can diffract through a narrow opening and refract as they pass through materials. Like light or sound waves, when a blast wave reaches a boundary between two materials, part of it is transmitted, part of it is absorbed, and part of it is reflected. The impedances of the two materials determine how much of each occurs. The equation for a Friedlander waveform describes the pressure of the blast wave as a function of time: formula_0 where Ps is the peak pressure and t* is the time at which the pressure first crosses the horizontal axis (before the negative phase). Blast waves will wrap around objects and buildings. Therefore, persons or objects behind a large building are not necessarily protected from a blast that starts on the opposite side of the building. Scientists use sophisticated mathematical models to predict how objects will respond to a blast in order to design effective barriers and safer buildings. Mach stem formation. Mach stem formation occurs when a blast wave reflects off the ground and the reflection catches up with the original shock front, therefore creating a high pressure zone that extends from the ground up to a certain point called the triple point at the edge of the blast wave. Anything in this area experiences peak pressures that can be several times higher than the peak pressure of the original shock front. 
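The Friedlander waveform introduced above is straightforward to evaluate numerically. The Python sketch below samples it for an arbitrary, illustrative choice of peak overpressure and positive-phase duration; the variable names are ad hoc and the numbers are not tied to any particular explosive or standoff distance.

import numpy as np

def friedlander(t, p_s, t_star):
    # Overpressure P(t) = Ps * exp(-t / t*) * (1 - t / t*), for t >= 0.
    return p_s * np.exp(-t / t_star) * (1.0 - t / t_star)

p_s = 100.0e3      # peak overpressure in Pa (illustrative)
t_star = 0.01      # time of the first zero crossing in s (illustrative)

t = np.linspace(0.0, 5 * t_star, 501)
p = friedlander(t, p_s, t_star)

print("peak overpressure:", p.max())                     # Ps, attained at t = 0
print("zero crossing near t* =", t[np.argmax(p < 0)])    # start of the negative phase
print("largest underpressure:", p.min())                 # minimum of the negative phase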
Constructive and destructive interference. In physics, interference is the meeting of two correlated waves and either increasing or lowering the net amplitude, depending on whether it is constructive or destructive interference. If a crest of a wave meets a crest of another wave at the same point then the crests interfere constructively and the resultant crest wave amplitude is increased, forming a much more powerful wave than either of the beginning waves. Similarly two troughs make a trough of increased amplitude. If a crest of a wave meets a trough of another wave then they interfere destructively, and the overall amplitude is decreased, thus making a wave that is much smaller than either of the parent waves. The formation of a mach stem is one example of constructive interference. Whenever a blast wave reflects off of a surface, such as a building wall or the inside of a vehicle, different reflected waves can interact with each other to cause an increase in pressure at a certain point (constructive interference) or a decrease (destructive interference). In this way the interaction of blast waves is similar to that of sound waves or water waves. Damage. Blast waves cause damage by a combination of the significant compression of the air in front of the wave (forming a shock front) and the subsequent wind that follows. A blast wave travels faster than the speed of sound, and the passage of the shock wave usually lasts only a few milliseconds. Like other types of explosions, a blast wave can also cause damage to things and people by the blast wind, debris, and fires. The original explosion will send out fragments that travel very fast. Debris and sometimes even people can get swept up into a blast wave, causing more injuries such as penetrating wounds, impalement and broken bones. The blast wind is the area of low pressure that causes debris and fragments to rush back towards the original explosions. The blast wave can also cause fires or secondary explosions by a combination of the high temperatures that result from detonation and the physical destruction of fuel-containing objects. Applications. Bombs. In response to an inquiry from the British MAUD Committee, G. I. Taylor estimated the amount of energy that would be released by the explosion of an atomic bomb in air. He postulated that for an idealized point source of energy, the spatial distributions of the flow variables would have the same form during a given time interval, the variables differing only in scale (thus the name of the "similarity solution.") This hypothesis allows the partial differential equations in terms of r (the radius of the blast wave) and t (time) to be transformed into an ordinary differential equation in terms of the similarity variable: formula_1 where formula_2 is the density of the air and formula_3 is the energy released by the explosion. This result allowed Taylor to estimate the nuclear yield of the Trinity test in New Mexico in 1945 using only photographs of the blast, which had been published in newspapers and magazines. The yield of the explosion was determined by using the equation: formula_4 where formula_5 is a dimensionless constant that is a function of the ratio of the specific heat of air at constant pressure to the specific heat of air at constant volume. The value of C is also affected by radiative losses, but for air, values of C of 1.00-1.10 generally give reasonable results. 
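Plugging numbers into the yield formula above is a classic back-of-the-envelope exercise. The Python sketch below uses round, commonly quoted approximate values for the Trinity fireball (a radius of roughly 140 m about 0.025 s after detonation) together with C close to 1; these inputs are assumptions made for illustration, and the result should only be read as an order-of-magnitude estimate of a few tens of kilotons.

# Order-of-magnitude yield estimate E = (rho0 / t**2) * (r / C)**5 from a single
# (radius, time) pair. The inputs below are rough illustrative values.
rho0 = 1.2        # air density in kg/m^3
r = 140.0         # blast-wave radius in m at time t (approximate, illustrative)
t = 0.025         # time after detonation in s
C = 1.0           # dimensionless constant, close to 1 for air

E = (rho0 / t**2) * (r / C) ** 5          # energy in joules
kilotons = E / 4.184e12                   # 1 kiloton of TNT = 4.184e12 J

print(f"E ~ {E:.2e} J, i.e. roughly {kilotons:.0f} kt of TNT equivalent")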
In 1950, Taylor published two articles in which he revealed the yield E of the first atomic explosion, which had previously been classified and whose publication was therefore a source of controversy. While nuclear explosions are among the clearest examples of the destructive power of blast waves, blast waves generated by exploding conventional bombs and other weapons made from high explosives have been used as weapons of war because of their effectiveness at creating polytraumatic injury. During World War II and the Vietnam War, blast lung was a common and often deadly injury. Improvements in vehicular and personal protective equipment have helped to reduce the incidence of blast lung. However, as soldiers are better protected from penetrating injury and surviving previously lethal exposures, limb, eye, ear, and brain injuries have become more prevalent. Effects of blast loads on buildings. Structural behaviour during an explosion depends on the materials used in the construction of the building. Upon hitting the face of a building, the shock front from an explosion is reflected. This impact with the structure imparts momentum to exterior components of the building. The associated kinetic energy of the moving components must be absorbed or dissipated in order for them to survive. Generally, this is achieved by converting the kinetic energy of the moving component to strain energy in resisting elements. Typically the resisting elements—such as windows, building facades and support columns—fail, causing partial damage through to progressive collapse of the building. Astronomy. The so-called Sedov-Taylor solution &lt;templatestyles src="Crossreference/styles.css" /&gt; has become useful in astrophysics. For example, it can be applied to quantify an estimate for the outcome from supernova-explosions. The Sedov-Taylor expansion is also known as the "blast wave" phase, which is an adiabatic expansion phase in the life cycle of supernova. The temperature of the material in a supernova shell decreases with time, but the internal energy of the material is always 72% of E0, the initial energy released. This is helpful for astrophysicists interested in predicting the behavior of supernova remnants. Research. Blast waves are generated in research environments using explosive or compressed-gas driven shock tubes in an effort to replicate the environment of a military conflict to better understand the physics of blasts and injuries that may result, and to develop better protection against blast exposure. Blast waves are directed against structures (such as vehicles), materials, and biological specimens or surrogates. High-speed pressure sensors and/or high speed cameras are often used to quantify the response to blast exposure. Anthropomorphic test devices (ATDs or test dummies) initially developed for the automotive industry are being used, sometimes with added instrumentation, to estimate the human response to blast events. For examples, personnel in vehicles and personnel on demining teams have been simulated using these ATDs. Combined with experiments, complex mathematical models have been made of the interaction of blast waves with inanimate and biological structures. Validated models are useful for "what if" experiments—predictions of outcomes for different scenarios. Depending on the system being modeled, it can be difficult to have accurate input parameters (for example, the material properties of a rate-sensitive material at blast rates of loading). 
Lack of experimental validation severely limits the usefulness of any numerical model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P(t)=P_se^{-\\frac{t}{t^*}}\\left(1-\\frac{t}{t^*}\\right)." }, { "math_id": 1, "text": "\\frac{r^{5}\\rho_{o}}{t^{2}E}" }, { "math_id": 2, "text": "\\rho_{o}" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "E = \\left(\\frac{\\rho_{o}}{t^2}\\right)\\left(\\frac{r}{C}\\right)^5" }, { "math_id": 5, "text": "C" } ]
https://en.wikipedia.org/wiki?curid=6138641
61387167
Ind-completion
In mathematics, process for extending a category In mathematics, the ind-completion or ind-construction is the process of freely adding filtered colimits to a given category "C". The objects in this ind-completed category, denoted Ind("C"), are known as direct systems; they are functors from a small filtered category "I" to "C". The dual concept is the pro-completion, Pro("C"). Definitions. Filtered categories. Direct systems depend on the notion of "filtered categories". For example, the category N, whose objects are natural numbers, and with exactly one morphism from "n" to "m" whenever formula_0, is a filtered category. Direct systems. A "direct system" or an "ind-object" in a category "C" is defined to be a functor formula_1 from a small filtered category "I" to "C". For example, if "I" is the category N mentioned above, this datum is equivalent to a sequence formula_2 of objects in "C" together with morphisms as displayed. The ind-completion. Ind-objects in "C" form a category ind-"C". Two ind-objects formula_3 and formula_4 determine a functor "I"op × "J" formula_5 "Sets", namely the functor formula_6 The set of morphisms between "F" and "G" in Ind("C") is defined to be the colimit of this functor in the second variable, followed by the limit in the first variable: formula_7 More colloquially, this means that a morphism consists of a collection of maps formula_8 for each "i", where formula_9 is (depending on "i") large enough. Relation between "C" and Ind("C"). The final category I = {*} consisting of a single object * and only its identity morphism is an example of a filtered category. In particular, any object "X" in "C" gives rise to a functor formula_10 and therefore to a functor formula_11 This functor is, as a direct consequence of the definitions, fully faithful. Therefore Ind("C") can be regarded as a larger category than "C". Conversely, there need not in general be a natural functor formula_12 However, if "C" possesses all filtered colimits (also known as direct limits), then sending an ind-object formula_13 (for some filtered category "I") to its colimit formula_14 does give such a functor, which however is not in general an equivalence. Thus, even if "C" already has all filtered colimits, Ind("C") is a strictly larger category than "C". Objects in Ind("C") can be thought of as formal direct limits, so that some authors also denote such objects by formula_15 This notation is due to Pierre Deligne. Universal property of the ind-completion. The passage from a category "C" to Ind("C") amounts to freely adding filtered colimits to the category. This is why the construction is also referred to as the "ind-completion" of "C". This is made precise by the following assertion: any functor formula_16 taking values in a category "D" that has all filtered colimits extends to a functor formula_17 that is uniquely determined by the requirements that its value on "C" is the original functor "F" and such that it preserves all filtered colimits. Basic properties of ind-categories. Compact objects. Essentially by design of the morphisms in Ind("C"), any object "X" of "C" is compact when regarded as an object of Ind("C"), i.e., the corepresentable functor formula_18 preserves filtered colimits. This holds true no matter what "C" or the object "X" is, in contrast to the fact that "X" need not be compact in "C". Conversely, any compact object in Ind("C") arises as the image of an object of "C".
A category "C" is called compactly generated, if it is equivalent to formula_19 for some small category formula_20. The ind-completion of the category FinSet of "finite" sets is the category of "all" sets. Similarly, if "C" is the category of finitely generated groups, "ind-C" is equivalent to the category of all groups. Recognizing ind-completions. These identifications rely on the following facts: as was mentioned above, any functor formula_16 taking values in a category "D" that has all filtered colimits, has an extension formula_21 that preserves filtered colimits. This extension is unique up to equivalence. First, this functor formula_22 is essentially surjective if any object in "D" can be expressed as a filtered colimits of objects of the form formula_23 for appropriate objects "c" in "C". Second, formula_22 is fully faithful if and only if the original functor "F" is fully faithful and if "F" sends arbitrary objects in "C" to "compact" objects in "D". Applying these facts to, say, the inclusion functor formula_24 the equivalence formula_25 expresses the fact that any set is the filtered colimit of finite sets (for example, any set is the union of its finite subsets, which is a filtered system) and moreover, that any finite set is compact when regarded as an object of "Set". The pro-completion. Like other categorical notions and constructions, the ind-completion admits a dual known as the pro-completion: the category Pro("C") is defined in terms of ind-object as formula_26 Therefore, the objects of Pro("C") are inverse systems or pro-objects in "C". By definition, these are direct system in the opposite category formula_27 or, equivalently, functors formula_13 from a small cofiltered category "I". Examples of pro-categories. While Pro("C") exists for any category "C", several special cases are noteworthy because of connections to other mathematical notions. The appearance of topological notions in these pro-categories can be traced to the equivalence, which is itself a special case of Stone duality, formula_30 which sends a finite set to the power set (regarded as a finite Boolean algebra). The duality between pro- and ind-objects and known description of ind-completions also give rise to descriptions of certain opposite categories. For example, such considerations can be used to show that the opposite category of the category of vector spaces (over a fixed field) is equivalent to the category of linearly compact vector spaces and continuous linear maps between them. Applications. Pro-completions are less prominent than ind-completions, but applications include shape theory. Pro-objects also arise via their connection to pro-representable functors, for example in Grothendieck's Galois theory, and also in Schlessinger's criterion in deformation theory. Related notions. Tate objects are a mixture of ind- and pro-objects. Infinity-categorical variants. The ind-completion (and, dually, the pro-completion) has been extended to ∞-categories by . Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "n \\le m" }, { "math_id": 1, "text": "F : I \\to C" }, { "math_id": 2, "text": "X_0 \\to X_1 \\to \\cdots" }, { "math_id": 3, "text": " F:I\\to C " }, { "math_id": 4, "text": "G:J\\to C " }, { "math_id": 5, "text": "\\to" }, { "math_id": 6, "text": "\\operatorname{Hom}_C(F(i),G(j))." }, { "math_id": 7, "text": "\\operatorname{Hom}_{\\operatorname{Ind}\\text{-}C}(F,G) = \\lim_i \\operatorname{colim}_j \\operatorname{Hom}_C(F(i), G(j))." }, { "math_id": 8, "text": "F(i) \\to G(j_i)" }, { "math_id": 9, "text": "j_i" }, { "math_id": 10, "text": "\\{*\\} \\to C, * \\mapsto X" }, { "math_id": 11, "text": "C \\to \\operatorname{Ind}(C), X \\mapsto (* \\mapsto X)." }, { "math_id": 12, "text": "\\operatorname{Ind}(C) \\to C." }, { "math_id": 13, "text": "F: I \\to C" }, { "math_id": 14, "text": "\\operatorname {colim}_I F(i)" }, { "math_id": 15, "text": "\\text{“}\\varinjlim_{i \\in I} \\text{'' } F(i). " }, { "math_id": 16, "text": "F: C \\to D" }, { "math_id": 17, "text": "Ind(C) \\to D" }, { "math_id": 18, "text": "\\operatorname{Hom}_{\\operatorname{Ind}(C)}(X, -)" }, { "math_id": 19, "text": "\\operatorname{Ind}(C_0)" }, { "math_id": 20, "text": "C_0" }, { "math_id": 21, "text": "\\tilde F: \\operatorname{Ind}(C) \\to D, " }, { "math_id": 22, "text": "\\tilde F" }, { "math_id": 23, "text": "F(c)" }, { "math_id": 24, "text": "F: \\operatorname{FinSet} \\subset \\operatorname{Set}," }, { "math_id": 25, "text": "\\operatorname{Ind}(\\operatorname{FinSet}) \\cong \\operatorname{Set}" }, { "math_id": 26, "text": " \\operatorname{Pro}(C) := \\operatorname{Ind}(C^{op})^{op}." }, { "math_id": 27, "text": "C^{op}" }, { "math_id": 28, "text": "\\operatorname{Pro}(\\operatorname{PoSet}^\\text{fin})" }, { "math_id": 29, "text": "\\operatorname{Pro}(\\operatorname{FinSet})" }, { "math_id": 30, "text": "\\operatorname{FinSet}^{op} = \\operatorname{FinBool}" } ]
https://en.wikipedia.org/wiki?curid=61387167
61387397
Codensity monad
In mathematics, especially in category theory, the codensity monad is a fundamental construction associating a monad to a wide class of functors. Definition. The codensity monad of a functor formula_0 is defined to be the right Kan extension of formula_1 along itself, provided that this Kan extension exists. Thus, by definition it is in particular a functor formula_2 The monad structure on formula_3 stems from the universal property of the right Kan extension. The codensity monad exists whenever formula_4 is a small category (has only a set, as opposed to a proper class, of morphisms) and formula_5 possesses all (small, i.e., set-indexed) limits. It also exists whenever formula_1 has a left adjoint. By the general formula computing right Kan extensions in terms of ends, the codensity monad is given by the following formula: formula_6 where formula_7 denotes the set of morphisms in formula_5 between the indicated objects and the integral denotes the end. The codensity monad therefore amounts to considering maps from formula_8 to an object in the image of formula_9 and maps from the set of such morphisms to formula_10 compatible for all the possible formula_11 Thus, as is noted by Avery, codensity monads share some kinship with the concept of integration and double dualization. Examples. Codensity monads of right adjoints. If the functor formula_1 admits a left adjoint formula_12 the codensity monad is given by the composite formula_13 together with the standard unit and multiplication maps. Concrete examples for functors not admitting a left adjoint. In several interesting cases, the functor formula_1 is an inclusion of a full subcategory not admitting a left adjoint. For example, the codensity monad of the inclusion of FinSet into Set is the ultrafilter monad associating to any set formula_14 the set of ultrafilters on formula_15 This was proven by Kennison and Gildenhuys, though without using the term "codensity". In this formulation, the statement is reviewed by Leinster. A related example is discussed by Leinster: the codensity monad of the inclusion of finite-dimensional vector spaces (over a fixed field formula_16) into all vector spaces is the double dualization monad given by sending a vector space formula_17 to its double dual formula_18 Thus, in this example, the end formula mentioned above simplifies to considering (in the notation above) only one object formula_19 namely a one-dimensional vector space, as opposed to considering all objects in formula_20 Adámek and Sousa show that, in a number of situations, the codensity monad of the inclusion formula_21 of finitely presented objects (also known as compact objects) is a double dualization monad with respect to a sufficiently nice cogenerating object. This recovers both the inclusion of finite sets in sets (where a cogenerator is the set of two elements), and also the inclusion of finite-dimensional vector spaces in vector spaces (where the cogenerator is the ground field). Sipoş showed that the algebras over the codensity monad of the inclusion of finite sets (regarded as discrete topological spaces) into topological spaces are equivalent to Stone spaces. Avery shows that the Giry monad arises as the codensity monad of natural forgetful functors between certain categories of convex vector spaces to measurable spaces. Relation to Isbell duality. 
Di Liberti shows that the codensity monad is closely related to Isbell duality: for a given small category formula_22 Isbell duality refers to the adjunction formula_23 between the category of presheaves on formula_5 (that is, functors from the opposite category of formula_5 to sets) and the opposite category of copresheaves on formula_24 The monad formula_25 induced by this adjunction is shown to be the codensity monad of the Yoneda embedding formula_26 Conversely, the codensity monad of a full small dense subcategory formula_27 in a cocomplete category formula_5 is shown to be induced by Isbell duality. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Footnotes &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G: D \\to C" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "T^G : C \\to C." }, { "math_id": 3, "text": "T^G" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "T^G(c) = \\int_{d \\in D} G(d)^{C(c, G(d))}," }, { "math_id": 7, "text": "C(c, G(d))" }, { "math_id": 8, "text": "c" }, { "math_id": 9, "text": "G," }, { "math_id": 10, "text": "G(d)," }, { "math_id": 11, "text": "d." }, { "math_id": 12, "text": "F," }, { "math_id": 13, "text": "G \\circ F," }, { "math_id": 14, "text": "M" }, { "math_id": 15, "text": "M." }, { "math_id": 16, "text": "k" }, { "math_id": 17, "text": "V" }, { "math_id": 18, "text": "V^{**} = \\operatorname{Hom}(\\operatorname{Hom}(V, k), k)." }, { "math_id": 19, "text": "d," }, { "math_id": 20, "text": "D." }, { "math_id": 21, "text": "D := C^{fp} \\subseteq C" }, { "math_id": 22, "text": "C," }, { "math_id": 23, "text": "\\mathcal O : Set^{C^{op}} \\rightleftarrows (Set^C)^{op} : Spec" }, { "math_id": 24, "text": "C." }, { "math_id": 25, "text": "Spec \\circ \\mathcal O" }, { "math_id": 26, "text": "y: C \\to Set^{C^{op}}." }, { "math_id": 27, "text": "K" } ]
https://en.wikipedia.org/wiki?curid=61387397
6139533
Slip ratio
Representation of the slipping behavior of an automobile wheel Slip ratio is a means of calculating and expressing the slipping behavior of the wheel of an automobile. It is of fundamental importance in the field of vehicle dynamics, as it allows one to understand the relationship between the deformation of the tire and the longitudinal forces (i.e. the forces responsible for forward acceleration and braking) acting upon it. Furthermore, it is essential to the effectiveness of any anti-lock braking system. When accelerating or braking a vehicle equipped with tires, the observed angular velocity of the tire does not match the expected velocity for pure rolling motion, which means there appears to be sliding between the outer surface of the rim and the road in addition to rolling, due to deformation of the part of the tire above the area in contact with the road. When driving on dry pavement, the fraction of slip caused by actual sliding between the road and the tire contact patch is negligible in magnitude, and thus in practice it does not make the slip ratio dependent on speed. Such sliding is only relevant on soft or slippery surfaces, like snow, mud or ice, where it results in a roughly constant speed difference for the same road and load conditions regardless of speed; the fraction of the slip ratio due to that cause is therefore inversely related to the speed of the vehicle. The difference between the theoretically calculated forward speed, based on the angular speed of the rim and the rolling radius, and the actual speed of the vehicle, expressed as a percentage of the latter, is called the ‘slip ratio’. This slippage is caused by the forces at the contact patch of the tire, not the other way around, and is thus of fundamental importance in determining the accelerations a vehicle can produce. There is no universally agreed upon definition of slip ratio. The SAE J670 definition is, for tires pointing straight ahead: formula_0 where formula_1 is the angular velocity of the wheel, formula_2 is the effective radius of the corresponding free-rolling tire, which can be calculated from the revolutions per kilometer, and formula_3 is the forward velocity of the vehicle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
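The SAE J670 definition above can be evaluated directly. The following Python sketch is purely illustrative and is not taken from the SAE standard; the function name and the numerical values are hypothetical.

```python
def slip_ratio_percent(omega, r_e, v):
    """SAE J670-style slip ratio in percent.

    omega : wheel angular velocity in rad/s
    r_e   : effective rolling radius of the free-rolling tire in m
    v     : forward velocity of the vehicle in m/s
    """
    return (omega * r_e / v - 1.0) * 100.0

# Hypothetical example: a wheel with 0.3 m effective radius spinning at 62 rad/s
# on a vehicle travelling at 18 m/s gives a positive (driving) slip ratio.
print(round(slip_ratio_percent(62.0, 0.3, 18.0), 2))  # about 3.33 %
```

A negative value would correspond to braking slip, since the wheel then turns more slowly than pure rolling would require.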
[ { "math_id": 0, "text": "\\text{slip ratio}\\ \\% = \\left ( \\frac{\\Omega\\,R_C}{V} - 1 \\right ) \\times 100\\%" }, { "math_id": 1, "text": "\\Omega" }, { "math_id": 2, "text": "R_C" }, { "math_id": 3, "text": "V" } ]
https://en.wikipedia.org/wiki?curid=6139533
6139788
Bending (metalworking)
Metalworking to produce a V-, U- or channel shape Bending is a manufacturing process that produces a V-shape, U-shape, or channel shape along a straight axis in ductile materials, most commonly sheet metal. Commonly used equipment include box and pan brakes, brake presses, and other specialized machine presses. Typical products that are made like this are boxes such as electrical enclosures and rectangular ductwork. Process. In press brake forming, the work piece is positioned over a die block and a punch then presses the sheet into the die block to form a shape. Usually bending has to overcome both tensile stresses and compressive stresses. When bending is done, the residual stresses cause the material to "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;spring back" towards its original position, so the sheet must be over-bent to achieve the proper bend angle. The amount of spring back is dependent on the material, and the type of forming. When sheet metal is bent, it stretches in length. The "bend deduction" is the amount the sheet metal will stretch when bent as measured from the outside edges of the bend. The "bend radius" refers to the inside radius. The formed bend radius is dependent upon the dies used, the material properties, and the material thickness. The U-punch forms a U-shape with a single punch. Types. There are three basic types of bending on a press brake, each is defined by the relationship of the end tool position to the thickness of the material. These three are Air Bending, Bottoming and Coining. The configuration of the tools for these three types of bending are nearly identical. A die with a long rail form tool with a radiused tip that locates the inside profile of the bend is called a punch. Punches are usually attached to the ram of the machine by clamps and move to produce the bending force. A die with a long rail form tool that has concave or V-shaped lengthwise channel that locate the outside profile of the form is called a die. Dies are usually stationary and located under the material on the bed of the machine. Note that some locations do not differentiate between the two different kinds of dies (punches and dies). The other types of bending listed use specially designed tools or machines to perform the work. Air bending. This bending method forms material by pressing a punch (also called the upper or top die) into the material, forcing it into a bottom V-die, which is mounted on the press. The punch forms the bend so that the distance between the punch and the side wall of the V is greater than the material thickness (T). Either a V-shaped or square opening may be used in the bottom die (dies are frequently referred to as tools or tooling). Because it requires less bend force, air bending tends to use smaller tools than other methods. Some of the newer bottom tools are adjustable, so, by using a single set of top and bottom tools and varying press-stroke depth, different profiles and products can be produced. Different materials and thicknesses can be bent in varying bend angles, adding the advantage of flexibility to air bending. There are also fewer tool changes, thus, higher productivity. A disadvantage of air bending is that, because the sheet does not stay in full contact with the dies, it is not as precise as some other methods, and stroke depth must be kept very accurate. Variations in the thickness of the material and wear on the tools can result in defects in parts produced. Thus, the use of adequate process models is important. 
Air bending's angle accuracy is approximately ±0.5 deg. Angle accuracy is ensured by applying a value to the width of the V opening, ranging from 6 T (six times material thickness) for sheets to 3 mm thick to 12 T for sheets more than 10 mm thick. Springback depends on material properties, influencing the resulting bend angle. Depending on material properties, the sheet may be overbent to compensate for springback. Air bending does not require the bottom tool to have the same radius as the punch. Bend radius is determined by material elasticity rather than tool shape. The flexibility and relatively low tonnage required by air bending are helping to make it a popular choice. Quality problems associated with this method are countered by angle-measuring systems, clamps and crowning systems adjustable along the x and y axes, and wear-resistant tools. The K-factor approximations given below are more likely to be accurate for air bending than the other types of bending due to the lower forces involved in the forming process. Bottoming. In bottoming, the sheet is forced against the V opening in the bottom tool. U-shaped openings cannot be used. Space is left between the sheet and the bottom of the V opening. The optimum width of the V opening is 6 T (T stands for material thickness) for sheets about 3 mm thick, up to about 12 T for 12 mm thick sheets. The bending radius must be at least 0.8 T to 2 T for sheet steel. Larger bend radii require about the same force for bottoming as they do for air bending, however, smaller radii require greater force—up to five times as much—than air bending. Advantages of bottoming include greater accuracy and less springback. A disadvantage is that a different tool set is needed for each bend angle, sheet thickness, and material. In general, air bending is the preferred technique. Coining. In coining, the top tool forces the material into the bottom die with 5 to 30 times the force of air bending, causing permanent deformation through the sheet. There is little, if any, spring back. Coining can produce an inside radius as low as 0.4 T, with a 5 T width of the V opening. While coining can attain high precision, higher costs mean that it is not often used. Three-point bending. Three-point bending is a newer process that uses a die with an adjustable-height bottom tool, moved by a servo motor. The height can be set within 0.01 mm. Adjustments between the ram and the upper tool are made using a hydraulic cushion, which accommodates deviations in sheet thickness. Three-point bending can achieve bend angles with 0.25 deg. precision. While three-point bending permits high flexibility and precision, it also entails high costs and there are fewer tools readily available. It is being used mostly in high-value niche markets. Folding. In folding, clamping beams hold the longer side of the sheet. The beam rises and folds the sheet around a bend profile. The bend beam can move the sheet up or down, permitting the fabricating of parts with positive and negative bend angles. The resulting bend angle is influenced by the folding angle of the beam, tool geometry, and material properties. Large sheets can be handled in this process, making the operation easily automated. There is little risk of surface damage to the sheet. Wiping. In wiping, the longest end of the sheet is clamped, then the tool moves up and down, bending the sheet around the bend profile. 
Though faster than folding, wiping has a higher risk of producing scratches or otherwise damaging the sheet, because the tool is moving over the sheet surface. The risk increases if sharp angles are being produced. This method will typically bottom or coin the material to set the edge to help overcome springback. In this bending method, the radius of the bottom die determines the final bending radius. Rotary bending. Rotary bending is similar to wiping but the top die is made of a freely rotating cylinder with the final formed shape cut into it and a matching bottom die. On contact with the sheet, the roll contacts on two points and it rotates as the forming process bends the sheet. This bending method is typically considered a "non-marking" forming process suitable to pre-painted or easily marred surfaces. This bending process can produce angles greater than 90° in a single hit on standard press brakes process. Roll bending. The roll bending process induces a curve into bar or plate workpieces. There should be proper pre-punching allowance. Elastomer bending. In this method, the bottom V-die is replaced by a flat pad of urethane or rubber. As the punch forms the part, the urethane deflects and allows the material to form around the punch. This bending method has a number of advantages. The urethane will wrap the material around the punch and the end bend radius will be very close to the actual radius on the punch. It provides a non-marring bend and is suitable for pre-painted or sensitive materials. Using a special punch called a "radius ruler" with relieved areas on the urethane U-bends greater than 180° can be achieved in one hit, something that is not possible with conventional press tooling. Urethane tooling should be considered a consumable item and while they are not cheap, they are a fraction of the cost of dedicated steel. It also has some drawbacks, this method requires tonnage similar to bottoming and coining and does not do well on flanges that are irregular in shape, that is where the edge of the bent flange is not parallel to the bend and is short enough to engage the urethane pad. Joggling. Joggling, also known as joggle bending, is an offset bending process in which two opposite bends with equal angles are formed in a single action creating a small s-shape bend profile and an offset between the unbent face and the result flange that is typically less than 5 material thicknesses. Often the offset will be one material thickness, in order to allow a lap joint where the edge of one sheet of material is laid on top of the other. Calculations. Many variations of these formulas exist and are readily available online. These variations may often seem to be at odds with one another, but they are invariably the same formulas simplified or combined. What is presented here are the unsimplified formulas. All formulas use the following keys: The "neutral line" (also called the Neutral axis) is an imaginary profile that can be drawn through a cross-section of the workpiece that represents the locus where no tensile or compressive stress are present but shear stresses are at their maximum. In the bend region, the material between the neutral line and the "inside" radius will be under "compression" during the bend while the material between the neutral line and the "outside" radius will be under "tension" during the bend. Its location in the material is a function of the forces used to form the part and the material yield and tensile strengths. 
This theoretical definition also coincides with the geometric definition of the plane representing the unbent flat pattern shape within the cross-section of the bent part. Furthermore, the bend allowance (see below) in air bending depends primarily on the width of the opening of the bottom die. As a result, the bending process is more complicated than it appears to be at first sight. Both bend deduction and bend allowance represent the difference between the neutral line or unbent "flat pattern" (the required length of the material prior to bending) and the formed bend. Subtracting them from the combined length of both flanges gives the flat pattern length. The question of which to use is determined by the dimensioning method used to define the flanges as shown in the two diagrams below. The flat pattern length is always shorter than the sum of all the flange length dimensions due to the geometric transformation. This gives rise to the common perspective that the material is stretching during bending and the bend deduction and bend allowance are the distance that each bend stretches. While a helpful way to look at it, a careful examination of the formulas and stresses involved shows this to be false. Most 3D Solid Modeling CAD software has sheet metal functions or add-ons that perform these calculations automatically. Bend allowance. The "bend allowance" (BA) is the length of the arc of the neutral line between the tangent points of a bend in any material. Adding the length of each flange as dimensioned by B in the diagram to the BA gives the Flat Pattern length. This bend allowance formula is used to determine the flat pattern length when a bend is dimensioned from 1) the center of the radius, 2) a tangent point of the radius (B) or 3) the outside tangent point of the radius on an acute angle bend (C). When dimensioned to the outside tangent, the material thickness and bend radius are subtracted from it to find the dimension to the tangent point of the radius before adding in the bend allowance. The BA can be estimated using the following formula, which incorporates the empirical K-factor: formula_0 Bend deduction. The bend deduction BD is defined as the difference between the sum of the flange lengths (from the edge to the apex) and the initial flat length. The "outside set back" (OSSB) is the length from the tangent point of the radius to the apex of the outside of the bend. The "bend deduction" (BD) is twice the outside setback minus the bend allowance. BD is calculated using the following formula, where A is the angle in radians (=degrees*π/180): formula_1 For bends at 90 degrees this formula can be simplified to: formula_2 K-factor. "K-factor" is a ratio of the location of the neutral line to the material thickness as defined by t/T where t = location of the neutral line and T = material thickness. The K-factor formula does not take the forming stresses into account but is simply a geometric calculation of the location of the neutral line after the forces are applied and is thus the roll-up of all the unknown (error) factors for a given setup. The K-factor depends on many variables including the material, the type of bending operation (coining, bottoming, air-bending, etc.), the tools, etc., and is typically between 0.3 and 0.5. The following equation relates the K-factor to the bend allowance: formula_3 The following table is a "rule of thumb". Actual results may vary remarkably.
The following formula can be used in place of the table as a good "approximation" of the K-factor for air bending: formula_4 Advantages and disadvantages. Bending is a cost-effective near net shape process when used for low to medium quantities. Parts are usually lightweight with good mechanical properties. A disadvantage is that some process variants are sensitive to variations in material properties. For instance, differences in spring-back have a direct influence on the resulting bend angle. To mitigate this, various methods for in-process control have been developed. Other approaches include combining brakeforming with incremental forming. Broadly speaking, each bend corresponds with a set-up (although sometimes, multiple bends can be formed simultaneously). The relatively large number of set-ups and the geometrical changes during bending make it difficult to address tolerances and bending errors a priori during set-up planning, although some attempts have been made. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
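As a numerical illustration of the Calculations section above, the following Python sketch implements the bend allowance and bend deduction formulas. The function names, the K-factor of 0.44, and the sheet dimensions are hypothetical values chosen only for the example.

```python
import math

def bend_allowance(angle_deg, inside_radius, k_factor, thickness):
    """BA = A * (pi/180) * (R + K*T), with the bend angle given in degrees."""
    return math.radians(angle_deg) * (inside_radius + k_factor * thickness)

def bend_deduction(angle_deg, inside_radius, k_factor, thickness):
    """BD = 2*(R + T)*tan(A/2) - BA, with A converted to radians."""
    a = math.radians(angle_deg)
    return (2.0 * (inside_radius + thickness) * math.tan(a / 2.0)
            - bend_allowance(angle_deg, inside_radius, k_factor, thickness))

# Hypothetical 90 degree bend in 2 mm sheet with a 3 mm inside radius and K = 0.44.
ba = bend_allowance(90, 3.0, 0.44, 2.0)
bd = bend_deduction(90, 3.0, 0.44, 2.0)
# Flat pattern length for two 50 mm flanges dimensioned to the outside apex:
flat_pattern = 50.0 + 50.0 - bd
print(round(ba, 3), round(bd, 3), round(flat_pattern, 3))  # ~6.095, ~3.905, ~96.095 mm
```

The flat pattern comes out shorter than the 100 mm sum of the flange dimensions, matching the geometric observation made above.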
[ { "math_id": 0, "text": "BA = A \\left( \\frac{\\pi}{180} \\right) \\left( R + (K \\times T \\right))" }, { "math_id": 1, "text": "BD = 2 \\left(R + T \\right) \\tan{ \\frac{A}{2}} - BA" }, { "math_id": 2, "text": "BD = R \\left(2 - A \\right) + T \\left(2 - KA \\right)" }, { "math_id": 3, "text": "K = \\frac{-R + \\frac{BA}{\\pi A / 180}}{T}." }, { "math_id": 4, "text": "K = \\frac{\\log \\min\\left(100, \\frac{\\max(20 R, T)}{T}\\right)}{2 \\log 100}." } ]
https://en.wikipedia.org/wiki?curid=6139788
6139962
DFFITS
Diagnostic measure for statistical regression In statistics, DFFIT and DFFITS ("difference in fit(s)") are diagnostics meant to show how influential a point is in a linear regression, first proposed in 1980. DFFIT is the change in the predicted value for a point, obtained when that point is left out of the regression: formula_0 where formula_1 and formula_2 are the prediction for point "i" with and without point "i" included in the regression. DFFITS is the Studentized DFFIT, where Studentization is achieved by dividing by the estimated standard deviation of the fit at that point: formula_3 where formula_4 is the standard error estimated without the point in question, and formula_5 is the leverage for the point. DFFITS also equals the product of the externally Studentized residual (formula_6) and the leverage factor (formula_7): formula_8 Thus, for low leverage points, DFFITS is expected to be small, whereas as the leverage goes to 1 the distribution of the DFFITS value widens infinitely. For a perfectly balanced experimental design (such as a factorial design or balanced partial factorial design), the leverage for each point is p/n, the number of parameters divided by the number of points. This means that the DFFITS values will be distributed (in the Gaussian case) as formula_9 times a t variate. Therefore, the authors suggest investigating those points with DFFITS greater than formula_10. Although the raw values resulting from the equations are different, Cook's distance and DFFITS are conceptually identical and there is a closed-form formula to convert one value to the other. Development. Previously, when assessing a dataset before running a linear regression, the possibility of outliers would be assessed using histograms and scatterplots. Both methods of assessing data points were subjective and there was little way of knowing how much leverage each potential outlier had on the results. This led to a variety of quantitative measures, including DFFIT and DFBETA.
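The definition can be computed directly by refitting the regression with each point left out in turn. The following Python sketch uses only NumPy, is not taken from any particular statistics package, and runs on synthetic data; all names and numbers are illustrative assumptions.

```python
import numpy as np

def dffits(X, y):
    """DFFITS by explicit leave-one-out refitting.

    X : (n, p) design matrix, already including a column of ones for the intercept
    y : (n,) response vector
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    y_hat = X @ beta
    # Leverages: diagonal of the hat matrix X (X'X)^-1 X'
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)

    out = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        y_hat_i = X[i] @ beta_i                      # prediction for point i with point i left out
        resid = y[keep] - X[keep] @ beta_i
        s_i = np.sqrt(resid @ resid / (n - 1 - p))   # standard error estimated without point i
        out[i] = (y_hat[i] - y_hat_i) / (s_i * np.sqrt(h[i]))
    return out

# Synthetic data: a straight line with the last observation perturbed.
rng = np.random.default_rng(0)
x = np.arange(10.0)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 10)
y[-1] += 3.0
X = np.column_stack([np.ones_like(x), x])
d = dffits(X, y)
print(np.abs(d) > 2 * np.sqrt(X.shape[1] / len(x)))  # flag points per the rule of thumb above
```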
[ { "math_id": 0, "text": "\\text{DFFIT} = \\widehat{y}_i - \\widehat{y}_{i(i)} " }, { "math_id": 1, "text": "\\widehat{y}_i" }, { "math_id": 2, "text": "\\widehat{y}_{i(i)}" }, { "math_id": 3, "text": "\\text{DFFITS} = \\frac{\\text{DFFIT}}{s_{(i)} \\sqrt{h_{ii}}} " }, { "math_id": 4, "text": "s_{(i)}" }, { "math_id": 5, "text": "h_{ii}" }, { "math_id": 6, "text": "t_{i(i)}" }, { "math_id": 7, "text": "\\sqrt{h_{ii}/(1-h_{ii})}" }, { "math_id": 8, "text": "\\text{DFFITS} = t_{i(i)} \\sqrt{\\frac{h_{ii}}{1-h_{ii}}}" }, { "math_id": 9, "text": "\\sqrt{p \\over n-p} \\approx \\sqrt{p \\over n}" }, { "math_id": 10, "text": "2\\sqrt{p \\over n}" } ]
https://en.wikipedia.org/wiki?curid=6139962
614040
Intransitivity
Property of mathematical relations In mathematics, intransitivity (sometimes called nontransitivity) is a property of binary relations that are not transitive relations. This may include any relation that is not transitive, or the stronger property of antitransitivity, which describes a relation that is never transitive. Intransitivity. A relation is transitive if, whenever it relates some A to some B, and that B to some C, it also relates that A to that C. Some authors call a relation intransitive if it is not transitive, that is, (if the relation in question is named formula_0) formula_1 This statement is equivalent to formula_2 For example, consider the relation "R" on the integers such that "a R b" if and only if "a" is a multiple of "b" or a divisor of "b". This relation is intransitive since, for example, 2 "R" 6 (2 is a divisor of 6) and 6 "R" 3 (6 is a multiple of 3), but 2 is neither a multiple nor a divisor of 3. This does not imply that the relation is antitransitive (see below); for example, 2 "R" 6, 6 "R" 12, and 2 "R" 12 as well. As another example, in the food chain, wolves feed on deer, and deer feed on grass, but wolves do not feed on grass. Thus, the feed on relation among life forms is intransitive, in this sense. Another example that does not involve preference loops arises in freemasonry: in some instances lodge A recognizes lodge B, and lodge B recognizes lodge C, but lodge A does not recognize lodge C. Thus the recognition relation among Masonic lodges is intransitive. Antitransitivity. Often the term intransitive is used to refer to the stronger property of antitransitivity. In the example above, the feed on relation is not transitive, but it still contains some transitivity: for instance, humans feed on rabbits, rabbits feed on carrots, and humans also feed on carrots. A relation is antitransitive if this never occurs at all, i.e. formula_3 Many authors use the term intransitivity to mean antitransitivity. For example, the relation "R" on the integers, such that "a R b" if and only if "a + b" is odd, is intransitive. If "a R b" and "b R c", then either "a" and "c" are both odd and "b" is even, or vice-versa. In either case, "a + c" is even. A second example of an antitransitive relation is the "defeated" relation in knockout tournaments. If player A defeated player B and player B defeated player C, then A can never have played C (each player is eliminated after a single loss), and therefore A has not defeated C. By transposition, each of the following formulas is equivalent to antitransitivity of "R": formula_4 Cycles. The term intransitivity is often used when speaking of scenarios in which a relation describes the relative preferences between pairs of options, and weighing several options produces a "loop" of preference: Rock, paper, scissors; intransitive dice; and Penney's game are examples. Real combative relations of competing species, strategies of individual animals, and fights of remote-controlled vehicles in BattleBots shows ("robot Darwinism") can be cyclic as well. Assuming no option is preferred to itself, i.e. the relation is irreflexive, a preference relation with a loop is not transitive. For if it is, each option in the loop is preferred to each option, including itself. This can be illustrated for a loop among A, B, and C in which A is preferred to B, B is preferred to C, and C is preferred to A. Assume the relation is transitive. Then, since A is preferred to B and B is preferred to C, also A is preferred to C. But then, since C is preferred to A, also A is preferred to A, contradicting the assumption that no option is preferred to itself. Therefore such a preference loop (or cycle) is known as an intransitivity.
Notice that a cycle is neither necessary nor sufficient for a binary relation to be not transitive. For example, an equivalence relation possesses cycles but is transitive. Now, consider the relation "is an enemy of" and suppose that the relation is symmetric and satisfies the condition that for any country, any enemy of an enemy of the country is not itself an enemy of the country. This is an example of an antitransitive relation that does not have any cycles. In particular, by virtue of being antitransitive the relation is not transitive. The game of rock, paper, scissors is an example. The relation over rock, paper, and scissors is "defeats", and the standard rules of the game are such that rock defeats scissors, scissors defeats paper, and paper defeats rock. Furthermore, it is also true that scissors does not defeat rock, paper does not defeat scissors, and rock does not defeat paper. Finally, it is also true that no option defeats itself. This information can be depicted in a table:

defeats    rock   scissors   paper
rock        0        1         0
scissors    0        0         1
paper       1        0         0

The first argument of the relation is a row and the second one is a column. Ones indicate that the relation holds, zero indicates that it does not hold. Now, notice that the following statement is true for any elements x, y, and z drawn (with replacement) from the set {rock, scissors, paper}: If x defeats y, and y defeats z, then x does not defeat z. Hence the relation is antitransitive. Thus, a cycle is neither necessary nor sufficient for a binary relation to be antitransitive. Likelihood. It has been suggested that Condorcet voting tends to eliminate "intransitive loops" when large numbers of voters participate because the overall assessment criteria for voters balance out. For instance, voters may prefer candidates on several different units of measure such as by order of social consciousness or by order of most fiscally conservative. In such cases intransitivity reduces to a broader equation of the numbers of people and the weights of their units of measure in assessing candidates. While each voter may not assess the units of measure identically, the trend then becomes a single vector that the consensus agrees is a preferred balance of candidate criteria. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
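The antitransitivity claim for the "defeats" relation can also be checked mechanically. The following Python sketch is purely illustrative; it encodes the relation as a set of ordered pairs and tests the condition over every triple.

```python
from itertools import product

options = ["rock", "scissors", "paper"]
defeats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

# Antitransitivity: whenever x defeats y and y defeats z, x must not defeat z.
antitransitive = all(
    not ((x, y) in defeats and (y, z) in defeats and (x, z) in defeats)
    for x, y, z in product(options, repeat=3)
)
print(antitransitive)  # True
```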
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "\\lnot\\left(\\forall a, b, c: a R b \\land b R c \\implies a R c\\right)." }, { "math_id": 2, "text": "\\exists a,b,c : a R b \\land b R c \\land \\lnot(a R c)." }, { "math_id": 3, "text": "\\forall a, b, c: a R b \\land b R c \\implies \\lnot (a R c)." }, { "math_id": 4, "text": "\\begin{align}\n &\\forall a, b, c: a R b \\land a R c \\implies \\lnot (b R c) \\\\[3pt]\n &\\forall a, b, c: a R c \\land b R c \\implies \\lnot (a R b)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=614040
61404733
Greenhouse–Geisser correction
Correction for lack of sphericity The Greenhouse–Geisser correction formula_0 is a statistical method of adjusting for lack of sphericity in a repeated measures ANOVA. The correction functions as both an estimate of epsilon (sphericity) and a correction for lack of sphericity. The correction was proposed by Samuel Greenhouse and Seymour Geisser in 1959. The Greenhouse–Geisser correction is an estimate of sphericity (formula_0). If sphericity is met, then formula_1. If sphericity is not met, then epsilon will be less than 1 (and the degrees of freedom will be overestimated and the F-value will be inflated). To correct for this inflation, multiply the degrees of freedom used to calculate the F critical value by the Greenhouse–Geisser estimate of epsilon. An alternative correction that is believed to be less conservative is the Huynh–Feldt correction (1976). As a general rule of thumb, the Greenhouse–Geisser correction is the preferred correction method when the epsilon estimate is below 0.75. Otherwise, the Huynh–Feldt correction is preferred. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
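The correction amounts to scaling both the numerator and denominator degrees of freedom by the epsilon estimate before looking up the F distribution. The following Python sketch assumes SciPy is available; the design size, observed F statistic, and epsilon value are hypothetical and only illustrate the mechanics.

```python
from scipy import stats

# Hypothetical repeated-measures ANOVA: k = 4 conditions, n = 20 subjects.
k, n = 4, 20
F_obs = 3.1    # observed F statistic (hypothetical)
eps = 0.62     # Greenhouse–Geisser estimate of epsilon (hypothetical)

df1, df2 = k - 1, (k - 1) * (n - 1)        # uncorrected degrees of freedom
df1_c, df2_c = eps * df1, eps * df2        # Greenhouse–Geisser corrected degrees of freedom

p_uncorrected = stats.f.sf(F_obs, df1, df2)
p_corrected = stats.f.sf(F_obs, df1_c, df2_c)
print(round(p_uncorrected, 4), round(p_corrected, 4))  # the corrected p-value is larger
```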
[ { "math_id": 0, "text": "\\widehat{\\varepsilon}" }, { "math_id": 1, "text": " \\varepsilon = 1 " } ]
https://en.wikipedia.org/wiki?curid=61404733
61406091
Decomposition of a module
Abstract algebra concept In abstract algebra, a decomposition of a module is a way to write a module as a direct sum of modules. A type of decomposition is often used to define or characterize modules: for example, a semisimple module is a module that has a decomposition into simple modules. Given a ring, the types of decomposition of modules over the ring can also be used to define or characterize the ring: a ring is semisimple if and only if every module over it is a semisimple module. An indecomposable module is a module that is not a direct sum of two nonzero submodules. Azumaya's theorem states that if a module has a decomposition into modules with local endomorphism rings, then all decompositions into indecomposable modules are equivalent to each other; a special case of this, especially in group theory, is known as the Krull–Schmidt theorem. A special case of a decomposition of a module is a decomposition of a ring: for example, a ring is semisimple if and only if it is a direct sum (in fact a product) of matrix rings over division rings (this observation is known as the Artin–Wedderburn theorem). Idempotents and decompositions. To give a direct sum decomposition of a module into submodules is the same as to give orthogonal idempotents in the endomorphism ring of the module that sum up to the identity map. Indeed, if formula_0, then, for each formula_1, the linear endomorphism formula_2 given by the natural projection followed by the natural inclusion is an idempotent. They are clearly orthogonal to each other (formula_3 for formula_4) and they sum up to the identity map: formula_5 as endomorphisms (here the summation is well-defined since it is a finite sum at each element of the module). Conversely, each set of orthogonal idempotents formula_6 such that only finitely many formula_7 are nonzero for each formula_8 and formula_9 determines a direct sum decomposition by taking formula_10 to be the images of formula_11. This fact already puts some constraints on a possible decomposition of a ring: given a ring formula_12, suppose there is a decomposition formula_13 of formula_12 as a left module over itself, where formula_14 are left submodules; i.e., left ideals. Each endomorphism formula_15 can be identified with right multiplication by an element of "R"; thus, formula_16 where formula_17 are idempotents of formula_18. The summation of idempotent endomorphisms corresponds to the decomposition of the unity of "R": formula_19, which is necessarily a finite sum; in particular, formula_20 must be a finite set. For example, take formula_21, the ring of "n"-by-"n" matrices over a division ring "D". Then formula_22 is the direct sum of "n" copies of formula_23, the columns; each column is a simple left "R"-submodule or, in other words, a minimal left ideal. Let "R" be a ring. Suppose there is a (necessarily finite) decomposition of it as a left module over itself formula_24 into "two-sided ideals" formula_25 of "R". As above, formula_26 for some orthogonal idempotents formula_11 such that formula_27. Since formula_25 is an ideal, formula_28 and so formula_29 for formula_4. Then, for each "i", formula_30 That is, the formula_11 are in the center; i.e., they are central idempotents. Clearly, the argument can be reversed and so there is a one-to-one correspondence between the direct sum decomposition into ideals and the orthogonal central idempotents summing up to the unity 1.
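As a small concrete illustration of this correspondence (not taken from the article itself), consider the ring Z/6Z. The elements 3 and 4 are orthogonal central idempotents summing to 1, and the ideals they generate recover the decomposition Z/6Z ≅ Z/2Z × Z/3Z. The following Python sketch checks this with plain modular arithmetic; all names are purely illustrative.

```python
n = 6
e1, e2 = 3, 4

# Idempotent: e*e = e; orthogonal: e1*e2 = 0; complete: e1 + e2 = 1 (all mod 6).
assert (e1 * e1) % n == e1 and (e2 * e2) % n == e2
assert (e1 * e2) % n == 0
assert (e1 + e2) % n == 1

# The corresponding ideals R*e1 and R*e2 decompose Z/6Z as a direct sum.
ideal1 = sorted({(r * e1) % n for r in range(n)})   # [0, 3]    -- isomorphic to Z/2Z
ideal2 = sorted({(r * e2) % n for r in range(n)})   # [0, 2, 4] -- isomorphic to Z/3Z
print(ideal1, ideal2)
```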
Also, each formula_25 itself is a ring on its own right, the unity given by formula_11, and, as a ring, "R" is the product ring formula_31 For example, again take formula_21. This ring is a simple ring; in particular, it has no nontrivial decomposition into two-sided ideals. Types of decomposition. There are several types of direct sum decompositions that have been studied: Since a simple module is indecomposable, a semisimple decomposition is an indecomposable decomposition (but not conversely). If the endomorphism ring of a module is local, then, in particular, it cannot have a nontrivial idempotent: the module is indecomposable. Thus, a decomposition with local endomorphism rings is an indecomposable decomposition. A direct summand is said to be "maximal" if it admits an indecomposable complement. A decomposition formula_32 is said to "complement maximal direct summands" if for each maximal direct summand "L" of "M", there exists a subset formula_33 such that formula_34 Two decompositions formula_35 are said to be "equivalent" if there is a bijection formula_36 such that for each formula_1, formula_37. If a module admits an indecomposable decomposition complementing maximal direct summands, then any two indecomposable decompositions of the module are equivalent. Azumaya's theorem. In the simplest form, Azumaya's theorem states: given a decomposition formula_38 such that the endomorphism ring of each formula_10 is local (so the decomposition is indecomposable), each indecomposable decomposition of "M" is equivalent to this given decomposition. The more precise version of the theorem states: still given such a decomposition, if formula_39, then The endomorphism ring of an indecomposable module of finite length is local (e.g., by Fitting's lemma) and thus Azumaya's theorem applies to the setup of the Krull–Schmidt theorem. Indeed, if "M" is a module of finite length, then, by induction on length, it has a finite indecomposable decomposition formula_48, which is a decomposition with local endomorphism rings. Now, suppose we are given an indecomposable decomposition formula_49. Then it must be equivalent to the first one: so formula_50 and formula_51 for some permutation formula_52 of formula_53. More precisely, since formula_54 is indecomposable, formula_55 for some formula_56. Then, since formula_57 is indecomposable, formula_58 and so on; i.e., complements to each sum formula_59 can be taken to be direct sums of some formula_10's. Another application is the following statement (which is a key step in the proof of Kaplansky's theorem on projective modules): To see this, choose a finite set formula_64 such that formula_65. Then, writing formula_66, by Azumaya's theorem, formula_67 with some direct summands formula_68 of formula_69 and then, by modular law, formula_70 with formula_71. Then, since formula_72 is a direct summand of formula_73, we can write formula_74 and then formula_75, which implies, since "F" is finite, that formula_76 for some "J" by a repeated application of Azumaya's theorem. In the setup of Azumaya's theorem, if, in addition, each formula_10 is countably generated, then there is the following refinement (due originally to Crawley–Jónsson and later to Warfield): formula_40 is isomorphic to formula_77 for some subset formula_33. (In a sense, this is an extension of Kaplansky's theorem and is proved by the two lemmas used in the proof of the theorem.) 
According to , it is not known whether the assumption "formula_10 countably generated" can be dropped; i.e., this refined version is true in general. Decomposition of a ring. On the decomposition of a ring, the most basic but still important observation, known as the Wedderburn–Artin theorem, is this: given a ring "R", the following are equivalent: 1. "R" is a semisimple ring (i.e., semisimple as a left module over itself); 2. formula_78 for some division rings formula_79, where formula_80 denotes the ring of "n"-by-"n" matrices with entries in formula_81; 3. every left module over "R" is semisimple. Moreover, the number formula_82, the division rings formula_83, and the integers formula_84 are uniquely determined (up to permutation) by "R". To show 1. formula_85 2., first note that if formula_12 is semisimple then we have an isomorphism of left formula_12-modules formula_86 where formula_87 are mutually non-isomorphic minimal left ideals. Then, with the view that endomorphisms act from the right, formula_88 where each formula_89 can be viewed as the matrix ring over formula_90, which is a division ring by Schur's Lemma. The converse holds because the decomposition of 2. is equivalent to a decomposition into minimal left ideals = simple left submodules. The equivalence 1. formula_91 3. holds because every module is a quotient of a free module, and a quotient of a semisimple module is semisimple. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M = \\bigoplus_{i \\in I} M_i" }, { "math_id": 1, "text": "i \\in I" }, { "math_id": 2, "text": "e_i : M \\to M_i \\hookrightarrow M" }, { "math_id": 3, "text": "e_i e_j = 0" }, { "math_id": 4, "text": "i \\ne j" }, { "math_id": 5, "text": "1_{\\operatorname{M}} = \\sum_{i \\in I} e_i" }, { "math_id": 6, "text": "\\{ e_i \\}_{i \\in I}" }, { "math_id": 7, "text": "e_i(x)" }, { "math_id": 8, "text": "x \\in M" }, { "math_id": 9, "text": "\\sum e_i = 1_M" }, { "math_id": 10, "text": "M_i" }, { "math_id": 11, "text": "e_i" }, { "math_id": 12, "text": "R" }, { "math_id": 13, "text": "{}_R R = \\bigoplus_{a \\in A} I_a" }, { "math_id": 14, "text": "I_a" }, { "math_id": 15, "text": "{}_R R \\to {}_R R" }, { "math_id": 16, "text": "I_a = R e_a" }, { "math_id": 17, "text": "e_a" }, { "math_id": 18, "text": "\\operatorname{End}({}_R R) \\simeq R" }, { "math_id": 19, "text": "1_R = \\sum_{a \\in A} e_a \\in \\bigoplus_{a \\in A} I_a" }, { "math_id": 20, "text": "A" }, { "math_id": 21, "text": "R = \\operatorname{M}_n(D)" }, { "math_id": 22, "text": "{}_R R" }, { "math_id": 23, "text": "D^n" }, { "math_id": 24, "text": "{}_R R = R_1 \\oplus \\cdots \\oplus R_n" }, { "math_id": 25, "text": "R_i" }, { "math_id": 26, "text": "R_i = R e_i" }, { "math_id": 27, "text": "\\textstyle{1 = \\sum_1^n e_i}" }, { "math_id": 28, "text": "e_i R \\subset R_i" }, { "math_id": 29, "text": "e_i R e_j \\subset R_i \\cap R_j = 0" }, { "math_id": 30, "text": "e_i r = \\sum_j e_j r e_i = \\sum_j e_i r e_j = r e_i." }, { "math_id": 31, "text": "R_1 \\times \\cdots \\times R_n." }, { "math_id": 32, "text": "\\textstyle{M = \\bigoplus_{i \\in I} M_i}" }, { "math_id": 33, "text": "J \\subset I" }, { "math_id": 34, "text": "M = \\left(\\bigoplus_{j \\in J} M_j \\right) \\bigoplus L." 
}, { "math_id": 35, "text": "M = \\bigoplus_{i \\in I} M_i = \\bigoplus_{j \\in J} N_j" }, { "math_id": 36, "text": "\\varphi : I \\overset{\\sim}\\to J" }, { "math_id": 37, "text": "M_i \\simeq N_{\\varphi(i)}" }, { "math_id": 38, "text": "M = \\bigoplus_{i \\in I} M_i" }, { "math_id": 39, "text": "M = N \\oplus K" }, { "math_id": 40, "text": "N" }, { "math_id": 41, "text": "K" }, { "math_id": 42, "text": "M = M_j \\oplus K" }, { "math_id": 43, "text": "M_j \\simeq N" }, { "math_id": 44, "text": "j \\in I" }, { "math_id": 45, "text": "N'" }, { "math_id": 46, "text": "K'" }, { "math_id": 47, "text": "M = M_i \\oplus N' \\oplus K'" }, { "math_id": 48, "text": "M = \\bigoplus_{i=1}^n M_i" }, { "math_id": 49, "text": "M = \\bigoplus_{i=1}^m N_i" }, { "math_id": 50, "text": "m = n" }, { "math_id": 51, "text": "M_i \\simeq N_{\\sigma(i)}" }, { "math_id": 52, "text": "\\sigma" }, { "math_id": 53, "text": "\\{ 1, \\dots, n \\}" }, { "math_id": 54, "text": "N_1" }, { "math_id": 55, "text": "M = M_{i_1} \\bigoplus (\\bigoplus_{i=2}^n N_i)" }, { "math_id": 56, "text": "i_1" }, { "math_id": 57, "text": "N_2" }, { "math_id": 58, "text": "M = M_{i_1} \\bigoplus M_{i_2} \\bigoplus (\\bigoplus_{i=3}^n N_i)" }, { "math_id": 59, "text": "\\bigoplus_{i=l}^n N_i" }, { "math_id": 60, "text": "x \\in N" }, { "math_id": 61, "text": "H" }, { "math_id": 62, "text": "x \\in H" }, { "math_id": 63, "text": "H \\simeq \\bigoplus_{j \\in J} M_j" }, { "math_id": 64, "text": "F \\subset I" }, { "math_id": 65, "text": "x \\in \\bigoplus_{j \\in F} M_j" }, { "math_id": 66, "text": "M = N \\oplus L" }, { "math_id": 67, "text": "M = (\\oplus_{j \\in F} M_j) \\oplus N_1 \\oplus L_1" }, { "math_id": 68, "text": "N_1, L_1" }, { "math_id": 69, "text": "N, L" }, { "math_id": 70, "text": "N = H \\oplus N_1" }, { "math_id": 71, "text": "H = (\\oplus_{j \\in F} M_j \\oplus L_1) \\cap N" }, { "math_id": 72, "text": "L_1" }, { "math_id": 73, "text": "L" }, { "math_id": 74, "text": "L = L_1 \\oplus L_1'" }, { "math_id": 75, "text": "\\oplus_{j \\in F} M_j \\simeq H \\oplus L_1'" }, { "math_id": 76, "text": "H \\simeq \\oplus_{j \\in J} M_j" }, { "math_id": 77, "text": "\\bigoplus_{j \\in J} M_j" }, { "math_id": 78, "text": "R \\cong \\prod_{i=1}^r \\operatorname{M}_{m_i}(D_i)" }, { "math_id": 79, "text": "D_1, \\dots, D_r" }, { "math_id": 80, "text": "\\operatorname{M}_n(D_i)" }, { "math_id": 81, "text": "D_i" }, { "math_id": 82, "text": "r" }, { "math_id": 83, "text": "D_1, \\dots , D_r" }, { "math_id": 84, "text": "m_1, \\dots, m_r" }, { "math_id": 85, "text": "\\Rightarrow" }, { "math_id": 86, "text": "{}_R R \\cong \\bigoplus_{i=1}^r I_i^{\\oplus m_i}" }, { "math_id": 87, "text": "I_i" }, { "math_id": 88, "text": "R \\cong \\operatorname{End}({}_R R) \\cong \\bigoplus_{i=1}^r \\operatorname{End}(I_i^{\\oplus m_i})" }, { "math_id": 89, "text": "\\operatorname{End}(I_i^{\\oplus m_i})" }, { "math_id": 90, "text": "D_i = \\operatorname{End}(I_i)" }, { "math_id": 91, "text": "\\Leftrightarrow" } ]
https://en.wikipedia.org/wiki?curid=61406091
614080
Thompson sporadic group
Sporadic simple group In the area of modern algebra known as group theory, the Thompson group "Th" is a sporadic simple group of order 90,745,943,887,872,000 = 2^15 · 3^10 · 5^3 · 7^2 · 13 · 19 · 31 ≈ 9×10^16. History. "Th" is one of the 26 sporadic groups and was found by John G. Thompson (1976) and constructed by . They constructed it as the automorphism group of a certain lattice in the 248-dimensional Lie algebra of E8. It does not preserve the Lie bracket of this lattice, but does preserve the Lie bracket mod 3, so is a subgroup of the Chevalley group E8(3). The subgroup preserving the Lie bracket (over the integers) is a maximal subgroup of the Thompson group called the Dempwolff group (which unlike the Thompson group is a subgroup of the compact Lie group E8). Representations. The centralizer of an element of order 3 of type 3C in the Monster group is a product of the Thompson group and a group of order 3, as a result of which the Thompson group acts on a vertex operator algebra over the field with 3 elements. This vertex operator algebra contains the E8 Lie algebra over F3, giving the embedding of "Th" into E8(3). The full normalizer of a 3C element in the Monster group is S3 × Th, so Th centralizes 3 involutions alongside the 3-cycle. These involutions are centralized by the Baby monster group, which therefore contains Th as a subgroup. The Schur multiplier and the outer automorphism group of the Thompson group are both trivial. Generalized monstrous moonshine. Conway and Norton suggested in their 1979 paper that monstrous moonshine is not limited to the monster, but that similar phenomena may be found for other groups. Larissa Queen and others subsequently found that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of sporadic groups. For "Th", the relevant McKay-Thompson series is formula_0 (OEIS: ), formula_1 and "j"("τ") is the j-invariant. Maximal subgroups. found the 16 conjugacy classes of maximal subgroups of "Th" as follows:
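The prime factorization of the order can be checked directly; the following Python sketch is a trivial arithmetic verification and nothing more.

```python
order = 90_745_943_887_872_000
factorization = 2**15 * 3**10 * 5**3 * 7**2 * 13 * 19 * 31
print(order == factorization)  # True
```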
[ { "math_id": 0, "text": "T_{3C}(\\tau)" }, { "math_id": 1, "text": "T_{3C}(\\tau) = \\Big(j(3\\tau)\\Big)^{1/3} = \\frac{1}{q}\\,+\\,248q^2\\,+\\,4124q^5\\,+\\,34752q^8\\,+\\,213126q^{11}\\,+\\,1057504q^{14}+\\cdots\\," } ]
https://en.wikipedia.org/wiki?curid=614080
61408357
Intersection type discipline
Branch of type theory In mathematical logic, the intersection type discipline is a branch of type theory encompassing type systems that use the intersection type constructor formula_0 to assign multiple types to a single term. In particular, if a term formula_1 can be assigned "both" the type formula_2 and the type formula_3, then formula_1 can be assigned the intersection type formula_4 (and vice versa). Therefore, the intersection type constructor can be used to express finite heterogeneous ad hoc polymorphism (as opposed to parametric polymorphism). For example, the λ-term formula_5 can be assigned the type formula_6 in most intersection type systems, assuming for the term variable formula_7 both the function type formula_8 and the corresponding argument type formula_9. Prominent intersection type systems include the Coppo–Dezani type assignment system, the Barendregt-Coppo–Dezani type assignment system, and the essential intersection type assignment system. Most strikingly, intersection type systems are closely related to (and often exactly characterize) normalization properties of λ-terms under β-reduction. In programming languages, such as TypeScript and Scala, intersection types are used to express ad hoc polymorphism. History. The intersection type discipline was pioneered by Mario Coppo, Mariangiola Dezani-Ciancaglini, Patrick Sallé, and Garrel Pottinger. The underlying motivation was to study semantic properties (such as normalization) of the λ-calculus by means of type theory. While the initial work by Coppo and Dezani established a type theoretic characterization of strong normalization for the λI-calculus, Pottinger extended this characterization to the λK-calculus. In addition, Sallé contributed the notion of the universal type formula_10 that can be assigned to any λ-term, thereby corresponding to the empty intersection. Using the universal type formula_10 allowed for a fine-grained analysis of head normalization, normalization, and strong normalization. In collaboration with Henk Barendregt, a filter λ-model for an intersection type system was given, tying intersection types ever more closely to λ-calculus semantics. Due to the correspondence with normalization, typability in prominent intersection type systems (excluding the universal type) is undecidable. Complementarily, undecidability of the dual problem of type inhabitation in prominent intersection type systems was proven by Paweł Urzyczyn. Later, this result was refined showing exponential space completeness of rank 2 intersection type inhabitation and undecidability of rank 3 intersection type inhabitation. Remarkably, "principal" type inhabitation is decidable in polynomial time. Coppo–Dezani type assignment system. The Coppo–Dezani type assignment system formula_11 extends the simply typed λ-calculus by allowing multiple types to be assumed for a term variable. Term language. The term language of formula_12 is given by λ-terms (or, lambda expressions): formula_13 Type language. The type language of formula_12 is inductively defined by the following grammar: formula_14 The intersection type constructor (formula_15) is taken modulo associativity, commutativity and idempotence. Typing rules. The typing rules formula_16, formula_17, formula_18, and formula_19 of formula_12 are: formula_20 Properties. Typability and normalization are closely related in formula_11 by the following properties: If the type language is extended to contain the empty intersection, i.e. 
formula_26, then formula_11 is closed under β-equality and is sound and complete for inference semantics. Barendregt–Coppo–Dezani type assignment system. The Barendregt–Coppo–Dezani type assignment system formula_27 extends the Coppo–Dezani type assignment system in the following three aspects: Term language. The term language of formula_27 is given by λ-terms (or, lambda expressions): formula_13 Type language. The type language of formula_27 is inductively defined by the following grammar: formula_31 Intersection type subtyping. Intersection type subtyping formula_30 is defined as the smallest preorder (reflexive and transitive relation) over intersection types satisfying the following properties: formula_32 Intersection type subtyping is decidable in quadratic time. Typing rules. The typing rules formula_16, formula_17, formula_18, formula_30, formula_33, and formula_34 of formula_27 are: formula_35 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
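As an illustration of the claim in the introduction that formula_5 can be assigned the type formula_6, the following is a sketch of a derivation in the Coppo–Dezani system, using only the rules (∩E), (→E) and (→I) given above:

\begin{align}
& x : (\alpha\to\beta)\cap\alpha \vdash_\text{CD} x : \alpha\to\beta && \text{by } (\cap\text{E})\\
& x : (\alpha\to\beta)\cap\alpha \vdash_\text{CD} x : \alpha && \text{by } (\cap\text{E})\\
& x : (\alpha\to\beta)\cap\alpha \vdash_\text{CD} x\;x : \beta && \text{by } (\to\!\!\text{E})\\
& \vdash_\text{CD} \lambda x.\!(x\;x) : ((\alpha\to\beta)\cap\alpha)\to\beta && \text{by } (\to\!\!\text{I})
\end{align}

The intersection in the assumption for x is what makes the self-application typable; in the simply typed λ-calculus this term has no type.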
[ { "math_id": 0, "text": "(\\cap)" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "\\varphi_1" }, { "math_id": 3, "text": "\\varphi_2" }, { "math_id": 4, "text": "\\varphi_1 \\cap \\varphi_2" }, { "math_id": 5, "text": "\\lambda x.\\!(x\\;x)" }, { "math_id": 6, "text": "((\\alpha \\to \\beta) \\cap \\alpha) \\to \\beta" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "\\alpha \\to \\beta" }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "\\omega" }, { "math_id": 11, "text": "(\\vdash_{\\text{CD}})" }, { "math_id": 12, "text": "(\\vdash_\\text{CD})" }, { "math_id": 13, "text": "\n\\begin{align}\nM, N & ::= x \\mid (\\lambda x.\\!M) \\mid (M\\;N) && \\text{ where } x \\text{ ranges over term variables}\\\\\n\\end{align}\n" }, { "math_id": 14, "text": "\n\\begin{align}\n\\varphi & ::= \\alpha \\mid \\sigma \\to \\varphi && \\text{ where } \\alpha \\text{ ranges over type variables}\\\\\n\\sigma & ::= \\varphi_1 \\cap \\cdots \\cap \\varphi_n && \\text{ where } n \\geq 1\n\\end{align}\n" }, { "math_id": 15, "text": "\\cap" }, { "math_id": 16, "text": "(\\to\\!\\!\\text{I})" }, { "math_id": 17, "text": "(\\to\\!\\!\\text{E})" }, { "math_id": 18, "text": "(\\cap\\text{I})" }, { "math_id": 19, "text": "(\\cap\\text{E})" }, { "math_id": 20, "text": "\n\\begin{array}{cc}\n\\dfrac{\\Gamma, x : \\sigma \\vdash_\\text{CD} M : \\varphi}{\\Gamma \\vdash_\\text{CD} \\lambda x.\\!M : \\sigma \\to \\varphi}(\\to\\!\\!\\text{I})\n&\\dfrac{\\Gamma \\vdash_\\text{CD} M : \\sigma \\to \\varphi \\quad \\Gamma \\vdash_\\text{CD} N : \\sigma}{\\Gamma \\vdash_\\text{CD} M\\;N : \\varphi}(\\to\\!\\!\\text{E})\\\\\\\\\n\\dfrac{\\Gamma \\vdash_\\text{CD} M : \\varphi_1 \\quad \\ldots \\quad \\Gamma \\vdash_\\text{CD} M : \\varphi_n}{\\Gamma \\vdash_\\text{CD} M : \\varphi_1 \\cap \\cdots \\cap \\varphi_n}(\\cap\\text{I})\n&\\dfrac{(1 \\leq i \\leq n)}{\\Gamma, x : \\varphi_1 \\cap \\cdots \\cap \\varphi_n \\vdash_\\text{CD} x : \\varphi_i}(\\cap\\text{E})\n\\end{array}\n" }, { "math_id": 21, "text": "\\Gamma \\vdash_{\\text{CD}} M : \\sigma" }, { "math_id": 22, "text": "M \\to_\\beta N" }, { "math_id": 23, "text": "\\Gamma \\vdash_{\\text{CD}} N : \\sigma" }, { "math_id": 24, "text": "\\Gamma" }, { "math_id": 25, "text": "\\sigma" }, { "math_id": 26, "text": "\\sigma = \\varphi_1 \\cap \\cdots \\cap \\varphi_n \\text{ where } n = 0" }, { "math_id": 27, "text": "(\\vdash_{\\text{BCD}})" }, { "math_id": 28, "text": "(\\vdash_\\text{BCD})" }, { "math_id": 29, "text": "(\\to)" }, { "math_id": 30, "text": "(\\leq)" }, { "math_id": 31, "text": "\n\\begin{align}\n\\sigma, \\tau & ::= \\alpha \\mid \\omega \\mid \\sigma \\to \\tau \\mid \\sigma \\cap \\tau && \\text{ where } \\alpha \\text{ ranges over type variables}\n\\end{align}\n" }, { "math_id": 32, "text": "\n\\begin{align}\n&\\sigma \\leq \\omega, \\quad \\omega \\leq \\omega\\to\\omega, \\quad \\sigma \\cap \\tau \\leq \\sigma, \\quad \\sigma \\cap \\tau \\leq \\tau, \\\\\n& (\\sigma\\to\\tau_1) \\cap (\\sigma\\to\\tau_2) \\leq \\sigma \\to \\tau_1 \\cap \\tau_2,\\\\\n&\\text{if }\\sigma \\leq \\tau_1 \\text{ and } \\sigma\\leq \\tau_2 \\text{, then } \\sigma \\leq \\tau_1 \\cap \\tau_2, \\\\\n&\\text{if } \\sigma_2 \\leq \\sigma_1 \\text{ and } \\tau_1 \\leq \\tau_2 \\text{, then } \\sigma_1\\to\\tau_1 \\leq \\sigma_2\\to\\tau_2\n\\end{align}\n" }, { "math_id": 33, "text": "(\\text{A})" }, { "math_id": 34, "text": "(\\omega)" }, { "math_id": 35, "text": "\n\\begin{array}{cc}\n\\dfrac{\\Gamma, x : \\sigma \\vdash_{\\text{BCD}} M 
: \\tau}{\\Gamma \\vdash_{\\text{BCD}} \\lambda x.\\!M : \\sigma \\to \\tau}(\\to\\!\\!\\text{I})\n&\\dfrac{\\Gamma \\vdash_{\\text{BCD}} M : \\sigma \\to \\tau \\quad \\Gamma \\vdash_{\\text{BCD}} N : \\sigma}{\\Gamma \\vdash_{\\text{BCD}} M\\;N : \\tau}(\\to\\!\\!\\text{E})\\\\\\\\\n\\dfrac{\\Gamma \\vdash_{\\text{BCD}} M : \\sigma \\quad \\Gamma \\vdash_{\\text{BCD}} M : \\tau}{\\Gamma \\vdash_{\\text{BCD}} M : \\sigma \\cap \\tau}(\\cap\\text{I})\n&\\dfrac{\\Gamma \\vdash_{\\text{BCD}} M : \\sigma \\quad (\\sigma \\leq \\tau)}{\\Gamma \\vdash_{\\text{BCD}} M : \\tau}(\\leq)\\\\\\\\\n\\dfrac{}{\\Gamma, x : \\sigma \\vdash_{\\text{BCD}} x : \\sigma}(\\text{A})\n&\\dfrac{}{\\Gamma \\vdash_{\\text{BCD}} M : \\omega}(\\omega)\n\\end{array}\n" }, { "math_id": 36, "text": "\\Gamma \\vdash_{\\text{BCD}} M : \\sigma" }, { "math_id": 37, "text": "M \\to_{\\beta} N" }, { "math_id": 38, "text": "\\Gamma \\vdash_{\\text{BCD}} N : \\sigma" }, { "math_id": 39, "text": "(\\Gamma, \\sigma)" }, { "math_id": 40, "text": "\\Gamma' \\vdash_{\\text{BCD}} M : \\sigma'" }, { "math_id": 41, "text": "(\\Gamma', \\sigma')" } ]
https://en.wikipedia.org/wiki?curid=61408357
61409853
External memory graph traversal
External memory graph traversal is a type of graph traversal optimized for graphs stored in external memory. Background. Graph traversal is a subroutine in most graph algorithms. The goal of a graph traversal algorithm is to visit (and/or process) every node of a graph. Graph traversal algorithms, like breadth-first search and depth-first search, are analyzed using the von Neumann model, which assumes uniform memory access cost. This view neglects the fact that, for huge instances, part of the graph resides on disk rather than in internal memory. Since accessing the disk is orders of magnitude slower than accessing internal memory, efficient traversal algorithms for externally stored graphs are needed. External memory model. For external memory algorithms, the external memory model by Aggarwal and Vitter is used for analysis. A machine is specified by three parameters: "M", "B" and "D". "M" is the size of the internal memory, "B" is the block size of a disk and "D" is the number of parallel disks. The measure of performance for an external memory algorithm is the number of I/Os it performs. External memory breadth-first search. The breadth-first search algorithm starts at a root node and traverses every node at depth one. If there are no more unvisited nodes at the current depth, nodes at the next depth are traversed. Eventually, every node of the graph has been visited. Munagala and Ranade. For an undirected graph formula_0, Munagala and Ranade proposed the following external memory algorithm: Let formula_1 denote the nodes in breadth-first search level t and let formula_2 be the multi-set of neighbors of level t-1. For every t, formula_1 can be constructed from formula_3 by transforming it into a set and excluding previously visited nodes from it. This is done in three steps: first, formula_3 is created by accessing the adjacency list of every node in formula_4, which takes formula_5 I/Os; next, formula_6 is created from formula_3 by sorting and removing duplicates, which takes formula_7 I/Os; finally, formula_8 is computed by a parallel scan of formula_6, formula_4 and formula_9, which takes formula_10 I/Os. The overall number of I/Os of this algorithm follows from formula_11 and formula_12 and is formula_13. A visualization of the three described steps necessary to compute "L"("t") is depicted in the figure on the right. Mehlhorn and Meyer. Mehlhorn and Meyer proposed an algorithm that is based on the algorithm of Munagala and Ranade (MR) and improves their result. It consists of two phases. In the first phase the graph is preprocessed; the second phase performs a breadth-first search using the information gathered in phase one. During the preprocessing phase the graph is partitioned into disjoint subgraphs formula_14 with small diameter. It further partitions the adjacency lists accordingly, by constructing an external file formula_15, where formula_16 contains the adjacency list for all nodes in formula_17. The breadth-first search phase is similar to the MR algorithm. In addition, the algorithm maintains a sorted external file H. This file is initialized with formula_18. Further, the nodes of any created breadth-first search level carry identifiers for the files formula_16 of their respective subgraphs formula_17. Instead of using random accesses to construct formula_1, the file H is used. Edges might be scanned more often in H, but unstructured I/Os to fetch adjacency lists are reduced. The overall number of I/Os for this algorithm is formula_20. External memory depth-first search. The depth-first search algorithm explores a graph along each branch as deep as possible, before backtracking. For "directed" graphs Buchsbaum, Goldwasser, Venkatasubramanian and Westbrook proposed an algorithm with formula_21 I/Os. This algorithm is based on a data structure called "buffered repository tree" (BRT).
It stores a multi-set of items from an ordered universe. Items are identified by key. A BRT offers two operations: an insert operation, which adds an item and needs formula_22 amortized I/Os, and an extract operation, which removes and reports all items with a given key and needs formula_23 I/Os, where "S" in this bound is the number of reported items. The algorithm simulates an internal depth-first search algorithm. A stack "S" of nodes is maintained. During an iteration, for the node "v" on top of "S", push an unvisited neighbor onto "S" and iterate. If there are no unvisited neighbors, pop "v". The difficulty is to determine whether a node is unvisited without doing formula_24 I/Os per edge. To do this, for a node "v" the incoming edges ("x","v") are put into a BRT "D" when "v" is first discovered. Further, outgoing edges ("v","x") are put into a priority queue "P"("v"), keyed by the rank in the adjacency list. For vertex "u" on top of "S" all edges ("u","x") are extracted from "D". Such edges only exist if "x" has been discovered since the last time "u" was on top of "S" (or since the start of the algorithm if "u" is on top of "S" for the first time). For every edge ("u","x") a delete("x") operation is performed on "P"("u"). Finally, a "delete-min" operation on "P"("u") yields the next unvisited node. If "P"("u") is empty, "u" is popped from "S". Pseudocode for this algorithm is given below.
procedure BGVW-depth-first-search("G", "v"):
    let "S" be a stack, "P"[] a priority queue for each node and "D" a BRT
    "S".push("v")
    while "S" is not empty:
        "v" := "S".top()
        if "v" is not marked:
            mark("v")
        extract all edges ("v", "x") from "D", ∀"x": "P"["v"].delete("x")
        if ("u" := "P"["v"].delete-min()) is not null:
            "S".push("u")
        else:
            "S".pop()
procedure mark("v"):
    put all edges ("x", "v") into "D"
    ∀ ("v", "x"): put "x" into "P"["v"]
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
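Returning to the breadth-first case, the level construction of the Munagala–Ranade algorithm described above can be modelled with a short in-memory sketch. The code below only illustrates the idea (neighbour multi-set, sort and deduplicate, subtract the two previous levels); the function and variable names are chosen here for illustration and do not come from the cited work.

def mr_bfs_levels(adjacency, root):
    """In-memory model of the Munagala-Ranade level construction.

    adjacency maps every node of an undirected graph to a list of neighbours.
    Removing only the two previous levels is correct for undirected graphs,
    where a neighbour of a level-(t-1) node lies in level t-2, t-1 or t.
    """
    levels = [[root]]              # L(0)
    prev, prev2 = {root}, set()    # L(t-1) and L(t-2)
    while True:
        # A(t): multi-set of neighbours of the previous level L(t-1)
        a_t = [w for v in levels[-1] for w in adjacency[v]]
        # sort and remove duplicates to obtain A'(t)
        a_prime = sorted(set(a_t))
        # L(t) = A'(t) \ (L(t-1) u L(t-2)); in external memory this is a parallel scan
        l_t = [w for w in a_prime if w not in prev and w not in prev2]
        if not l_t:
            return levels
        levels.append(l_t)
        prev, prev2 = set(l_t), prev

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(mr_bfs_levels(graph, 0))     # [[0], [1, 2], [3], [4]]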
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "L(t)" }, { "math_id": 2, "text": "A(t):=N(L(t-1))" }, { "math_id": 3, "text": "A(t)" }, { "math_id": 4, "text": "L(t-1)" }, { "math_id": 5, "text": "O(|L(t-1)|+|A(t)|/(D\\cdot B))" }, { "math_id": 6, "text": "A'(t)" }, { "math_id": 7, "text": "O(\\operatorname{sort}(|A|))" }, { "math_id": 8, "text": "L(t):=A'(t)\\backslash\\{L(t-1)\\cup L(t-2)\\}" }, { "math_id": 9, "text": "L(t-2)" }, { "math_id": 10, "text": "O((|A(t)|+|L(t-1)|+|L(t-2)|)/(D\\cdot B))" }, { "math_id": 11, "text": "\\sum_t |A(t)|=O(m)" }, { "math_id": 12, "text": "\\sum_t |L(t)|=O(n)" }, { "math_id": 13, "text": "O(n+\\operatorname{sort}(n+m))" }, { "math_id": 14, "text": "S_i,\\,0\\leq i\\leq K" }, { "math_id": 15, "text": "F=F_0F_1\\dots F_{K-1}" }, { "math_id": 16, "text": "F_i" }, { "math_id": 17, "text": "S_i" }, { "math_id": 18, "text": "F_0" }, { "math_id": 19, "text": "v\\in L(t-1)" }, { "math_id": 20, "text": "O\\left(\\sqrt\\frac{n\\cdot(n+m)}{D\\cdot B}+\\operatorname{sort}(n+m)\\right)" }, { "math_id": 21, "text": "O((V+E/B)\\log_2 (V/B)+\\operatorname{sort}(E))" }, { "math_id": 22, "text": "O(1/B\\log_2 (N/B))" }, { "math_id": 23, "text": "O(\\log_2 (N/B)+S/B)" }, { "math_id": 24, "text": "\\Omega(1)" } ]
https://en.wikipedia.org/wiki?curid=61409853
614147
Knuth–Bendix completion algorithm
The Knuth–Bendix completion algorithm (named after Donald Knuth and Peter Bendix) is a semi-decision algorithm for transforming a set of equations (over terms) into a confluent term rewriting system. When the algorithm succeeds, it effectively solves the word problem for the specified algebra. Buchberger's algorithm for computing Gröbner bases is a very similar algorithm. Although developed independently, it may also be seen as the instantiation of Knuth–Bendix algorithm in the theory of polynomial rings. Introduction. For a set "E" of equations, its deductive closure (⁎⟷"E") is the set of all equations that can be derived by applying equations from "E" in any order. Formally, "E" is considered a binary relation, (⟶"E") is its rewrite closure, and (⁎⟷"E") is the equivalence closure of (⟶"E"). For a set "R" of rewrite rules, its deductive closure (⁎⟶"R" ∘ ⁎⟵"R") is the set of all equations that can be confirmed by applying rules from "R" left-to-right to both sides until they are literally equal. Formally, "R" is again viewed as a binary relation, (⟶"R") is its rewrite closure, (⟵"R") is its converse, and (⁎⟶"R" ∘ ⁎⟵"R") is the relation composition of their reflexive transitive closures (⁎⟶"R" and ⁎⟵"R"). For example, if "E" = {1⋅"x" = "x", "x"−1⋅"x" = 1, ("x"⋅"y")⋅"z" = "x"⋅("y"⋅"z")} are the group axioms, the derivation chain "a"−1⋅("a"⋅"b")   ⁎⟷"E"   ("a"−1⋅"a")⋅"b"   ⁎⟷"E"   1⋅"b"   ⁎⟷"E"   "b" demonstrates that "a"−1⋅("a"⋅"b") ⁎⟷"E" "b" is a member of "E"'s deductive closure. If "R" = {1⋅"x" → "x", "x"−1⋅"x" → 1, ("x"⋅"y")⋅"z" → "x"⋅("y"⋅"z")} is a "rewrite rule" version of "E", the derivation chains ("a"−1⋅"a")⋅"b"   ⁎⟶"R"   1⋅"b"   ⁎⟶"R"   "b"       and       "b"   ⁎⟵"R"   "b" demonstrate that ("a"−1⋅"a")⋅"b" ⁎⟶"R"∘⁎⟵"R" "b" is a member of "R"'s deductive closure. However, there is no way to derive "a"−1⋅("a"⋅"b") ⁎⟶"R"∘⁎⟵"R" "b" similar to above, since a right-to-left application of the rule ("x"⋅"y")⋅"z" → "x"⋅("y"⋅"z") is not allowed. The Knuth–Bendix algorithm takes a set "E" of equations between terms, and a reduction ordering (&gt;) on the set of all terms, and attempts to construct a confluent and terminating term rewriting system "R" that has the same deductive closure as "E". While proving consequences from "E" often requires human intuition, proving consequences from "R" does not. For more details, see Confluence (abstract rewriting)#Motivating examples, which gives an example proof from group theory, performed both using "E" and using "R". Rules. Given a set "E" of equations between terms, the following inference rules can be used to transform it into an equivalent convergent term rewrite system (if possible): They are based on a user-given reduction ordering (&gt;) on the set of all terms; it is lifted to a well-founded ordering (▻) on the set of rewrite rules by defining ("s" → "t") ▻ ("l" → "r") if Example. The following example run, obtained from the E theorem prover, computes a completion of the (additive) group axioms as in Knuth, Bendix (1970). It starts with the three initial equations for the group (neutral element 0, inverse elements, associativity), using codice_0 for "X"+"Y", and codice_1 for −"X". The 10 starred equations turn out to constitute the resulting convergent rewrite system. "pm" is short for "paramodulation", implementing "deduce". Critical pair computation is an instance of paramodulation for equational unit clauses. "rw" is rewriting, implementing "compose", "collapse", and "simplify". Orienting of equations is done implicitly and not recorded. 
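The prover's run itself is not reproduced here. For reference, the ten-rule convergent system it arrives at is, up to orientation of the equations and renaming of variables, the classical completion of the group axioms from Knuth and Bendix (1970), which in the additive notation used above reads:

\begin{align}
0 + X &\to X, & X + 0 &\to X,\\
(-X) + X &\to 0, & X + (-X) &\to 0,\\
-0 &\to 0, & -(-X) &\to X,\\
(-X) + (X + Y) &\to Y, & X + ((-X) + Y) &\to Y,\\
-(X + Y) &\to (-Y) + (-X), & (X + Y) + Z &\to X + (Y + Z).
\end{align}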
See also Word problem (mathematics) for another presentation of this example. String rewriting systems in group theory. An important case in computational group theory are string rewriting systems which can be used to give canonical labels to elements or cosets of a finitely presented group as products of the generators. This special case is the focus of this section. Motivation in group theory. The critical pair lemma states that a term rewriting system is locally confluent (or weakly confluent) if and only if all its critical pairs are convergent. Furthermore, we have Newman's lemma which states that if an (abstract) rewriting system is strongly normalizing and weakly confluent, then the rewriting system is confluent. So, if we can add rules to the term rewriting system in order to force all critical pairs to be convergent while maintaining the strong normalizing property, then this will force the resultant rewriting system to be confluent. Consider a finitely presented monoid formula_0 where X is a finite set of generators and R is a set of defining relations on X. Let X* be the set of all words in X (i.e. the free monoid generated by X). Since the relations R generate an equivalence relation on X*, one can consider elements of M to be the equivalence classes of X* under R. For each class "{w1, w2, ... }" it is desirable to choose a standard representative "wk". This representative is called the canonical or normal form for each word "wk" in the class. If there is a computable method to determine for each "wk" its normal form "wi" then the word problem is easily solved. A confluent rewriting system allows one to do precisely this. Although the choice of a canonical form can theoretically be made in an arbitrary fashion this approach is generally not computable. (Consider that an equivalence relation on a language can produce an infinite number of infinite classes.) If the language is well-ordered then the order &lt; gives a consistent method for defining minimal representatives, however computing these representatives may still not be possible. In particular, if a rewriting system is used to calculate minimal representatives then the order &lt; should also have the property: A &lt; B → XAY &lt; XBY for all words A,B,X,Y This property is called translation invariance. An order that is both translation-invariant and a well-order is called a reduction order. From the presentation of the monoid it is possible to define a rewriting system given by the relations R. If A x B is in R then either A &lt; B in which case B → A is a rule in the rewriting system, otherwise A &gt; B and A → B. Since &lt; is a reduction order a given word W can be reduced W &gt; W_1 &gt; ... &gt; W_n where W_n is irreducible under the rewriting system. However, depending on the rules that are applied at each Wi → Wi+1 it is possible to end up with two different irreducible reductions Wn ≠ W'm of W. However, if the rewriting system given by the relations is converted to a confluent rewriting system via the Knuth–Bendix algorithm, then all reductions are guaranteed to produce the same irreducible word, namely the normal form for that word. Description of the algorithm for finitely presented monoids. Suppose we are given a presentation formula_1, where formula_2 is a set of generators and formula_3 is a set of relations giving the rewriting system. Suppose further that we have a reduction ordering formula_4 among the words generated by formula_2(e.g., shortlex order). 
For each relation formula_5 in formula_3, suppose formula_6. Thus we begin with the set of reductions formula_7. First, if any relation formula_5 can be reduced, replace formula_8 and formula_9 with the reductions. Next, we add more reductions (that is, rewriting rules) to eliminate possible exceptions of confluence. Suppose that formula_8 and formula_10 overlap. Reduce the word formula_20 using formula_8 first, then using formula_10 first. Call the results formula_21, respectively. If formula_22, then we have an instance where confluence could fail. Hence, add the reduction formula_23 to formula_3. After adding a rule to formula_3, remove any rules in formula_3 that might have reducible left sides (after checking if such rules have critical pairs with other rules). Repeat the procedure until all overlapping left sides have been checked. Examples. A terminating example. Consider the monoid: formula_24. We use the shortlex order. This is an infinite monoid but nevertheless, the Knuth–Bendix algorithm is able to solve the word problem. Our beginning three reductions are therefore A suffix of formula_25 (namely formula_26) is a prefix of formula_27, so consider the word formula_28. Reducing using (1), we get formula_29. Reducing using (3), we get formula_30. Hence, we get formula_31, giving the reduction rule Similarly, using formula_32 and reducing using (2) and (3), we get formula_33. Hence the reduction Both of these rules obsolete (3), so we remove it. Next, consider formula_34 by overlapping (1) and (5). Reducing we get formula_35, so we add the rule Considering formula_36 by overlapping (1) and (5), we get formula_37, so we add the rule These obsolete rules (4) and (5), so we remove them. Now, we are left with the rewriting system Checking the overlaps of these rules, we find no potential failures of confluence. Therefore, we have a confluent rewriting system, and the algorithm terminates successfully. A non-terminating example. The order of the generators may crucially affect whether the Knuth–Bendix completion terminates. As an example, consider the free Abelian group by the monoid presentation: formula_38 The Knuth–Bendix completion with respect to lexicographic order formula_39 finishes with a convergent system, however considering the length-lexicographic order formula_40 it does not finish for there are no finite convergent systems compatible with this latter order. Generalizations. If Knuth–Bendix does not succeed, it will either run forever and produce successive approximations to an infinite complete system, or fail when it encounters an unorientable equation (i.e. an equation that it cannot turn into a rewrite rule). An enhanced version will not fail on unorientable equations and produces a ground confluent system, providing a semi-algorithm for the word problem. The notion of logged rewriting discussed in the paper by Heyworth and Wensley listed below allows some recording or logging of the rewriting process as it proceeds. This is useful for computing identities among relations for presentations of groups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
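The completion procedure for string rewriting described above can be sketched in a few lines of code. The sketch below is illustrative only: it orients rules by the shortlex order, it omits the deletion of obsolete rules and the overlaps in which one left-hand side is contained in another, and all names (and the round limit) are invented here rather than taken from the sources.

from itertools import product

def shortlex_less(u, v):
    # shortlex order: shorter words first, ties broken lexicographically
    return (len(u), u) < (len(v), v)

def normal_form(word, rules):
    # apply the first rule that matches (at its leftmost occurrence) until none applies
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
                break
    return word

def critical_pairs(rules):
    # overlaps in which a proper suffix of one left-hand side is a prefix of another
    pairs = []
    for (l1, r1), (l2, r2) in product(rules, repeat=2):
        for k in range(1, min(len(l1), len(l2))):
            if l1[-k:] == l2[:k]:
                pairs.append((r1 + l2[k:], l1[:-k] + r2))  # two reductions of l1 + l2[k:]
    return pairs

def complete(rules, max_rounds=10):
    # naive completion: add an oriented rule for every critical pair that does not converge
    for _ in range(max_rounds):
        new = []
        for a, b in critical_pairs(rules):
            na, nb = normal_form(a, rules), normal_form(b, rules)
            if na != nb:
                rule = (nb, na) if shortlex_less(na, nb) else (na, nb)
                if rule not in rules and rule not in new:
                    new.append(rule)
        if not new:
            return rules        # every critical pair converges
        rules = rules + new
    return rules                # completion need not terminate; give up after max_rounds

# the monoid <x, y | x^3 = y^3 = (xy)^3 = 1>, with the empty word as the identity
rules = [("xxx", ""), ("yyy", ""), ("xyxyxy", "")]
completed = complete(rules)
print(normal_form("xxyxyxyy", completed))   # an irreducible word equivalent to the input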
[ { "math_id": 0, "text": "M = \\langle X \\mid R \\rangle" }, { "math_id": 1, "text": " \\langle X \\mid R \\rangle " }, { "math_id": 2, "text": " X " }, { "math_id": 3, "text": " R " }, { "math_id": 4, "text": " < " }, { "math_id": 5, "text": " P_i = Q_i " }, { "math_id": 6, "text": " Q_i < P_i " }, { "math_id": 7, "text": " P_i \\rightarrow Q_i " }, { "math_id": 8, "text": " P_i " }, { "math_id": 9, "text": " Q_i " }, { "math_id": 10, "text": " P_j " }, { "math_id": 11, "text": " P_i" }, { "math_id": 12, "text": " P_i = BC " }, { "math_id": 13, "text": " P_j = AB " }, { "math_id": 14, "text": " P_i = AB " }, { "math_id": 15, "text": " P_j = BC " }, { "math_id": 16, "text": " P_i = B " }, { "math_id": 17, "text": " P_j = ABC " }, { "math_id": 18, "text": " P_i = ABC " }, { "math_id": 19, "text": " P_j = B " }, { "math_id": 20, "text": " ABC " }, { "math_id": 21, "text": " r_1, r_2 " }, { "math_id": 22, "text": " r_1 \\neq r_2 " }, { "math_id": 23, "text": " \\max r_1, r_2 \\rightarrow \\min r_1, r_2 " }, { "math_id": 24, "text": " \\langle x, y \\mid x^3 = y^3 = (xy)^3 = 1 \\rangle " }, { "math_id": 25, "text": "x^3" }, { "math_id": 26, "text": "x" }, { "math_id": 27, "text": "(xy)^3=xyxyxy" }, { "math_id": 28, "text": " x^3yxyxy " }, { "math_id": 29, "text": " yxyxy " }, { "math_id": 30, "text": " x^2 " }, { "math_id": 31, "text": " yxyxy = x^2 " }, { "math_id": 32, "text": " xyxyxy^3 " }, { "math_id": 33, "text": " xyxyx = y^2 " }, { "math_id": 34, "text": " x^3yxyx " }, { "math_id": 35, "text": " yxyx = x^2y^2 " }, { "math_id": 36, "text": " xyxyx^3 " }, { "math_id": 37, "text": " xyxy = y^2x^2 " }, { "math_id": 38, "text": "\\langle x, y, x^{-1}, y^{-1}\\, |\\, xy=yx, xx^{-1} = x^{-1}x = yy^{-1} = y^{-1}y = 1 \\rangle ." }, { "math_id": 39, "text": "x < x^{-1} < y < y^ {-1}" }, { "math_id": 40, "text": "x < y < x^{-1} < y^{-1}" } ]
https://en.wikipedia.org/wiki?curid=614147
61414800
Choi–Jamiołkowski isomorphism
Correspondence between quantum channels and quantum states In quantum information theory and operator theory, the Choi–Jamiołkowski isomorphism refers to the correspondence between quantum channels (described by completely positive maps) and quantum states (described by density matrices), this is introduced by Man-Duen Choi and Andrzej Jamiołkowski. It is also called channel-state duality by some authors in the quantum information area, but mathematically, this is a more general correspondence between positive operators and the complete positive superoperators. Definition. To study a quantum channel formula_0 from system formula_1 to formula_2, which is a trace-preserving completely positive map from operator spaces formula_3 to formula_4, we introduce an auxiliary system formula_5 with the same dimension as system formula_1. Consider the maximally entangled state: formula_6 in the space of formula_7. Since formula_0 is completely positive, formula_8 is a nonnegative operator. Conversely, for any nonnegative operator on formula_9, we can associate a completely positive map from formula_3to formula_4. This kind of correspondence is called Choi-Jamiołkowski isomorphism. The composition of Choi states. The Choi-Jamiołkowski isomorphism is a mathematical concept that connects quantum gates or operations to quantum states called Choi states. It allows us to represent a gate's properties and behavior as a Choi state. In the generalised gate teleportation scheme, we can teleport a quantum gate from one location to another using entangled states and local operations. Here's how it works: By combining the powers of entanglement, measurements, and local operations, the gate's effect is effectively transferred to the receiver's location. This process enables the teleportation of gate information or the application of the gate itself, making it a fascinating method to manipulate quantum gates in a distributed manner. Simulating composition of gates using generalized gate teleportation. Pure Choi state. Let's consider the unitary case first, where the Choi state is pure. Suppose we have two Choi states represented as formula_10, and formula_11 and the corresponding systems are labeled as A, B, C, and D. To simulate the composition of gates formula_12 or formula_13, we aim to obtain the state formula_14 or formula_15. The standard Bell scheme. The standard approach is to use the Bell scheme, where the gate formula_16 is teleported from site A to site C using a Bell measurement on sites B and C, resulting in the state formula_15 on sites A and D. To obtain formula_14, we would apply the Bell scheme on sites A and D. However, this can introduce Pauli byproduct operators, such as formula_17, between the two unitary gates, which are generally non-correctable, and can affect the desired gate composition. Indirect Bell measurement. To address this issue, an indirect Bell measurement is used instead of the standard Bell scheme. This measurement involves an extra qubit ancilla. The indirect Bell measurement is performed by applying a gate formula_18, which is the Toffoli gate with one-control qubit replaced by a zero-control qubit and the ancilla as the target. This measurement is expressed as formula_19, where formula_20 represents the reverse operation of preparing Bell states. Outcomes and resulting states. The outcome of the indirect Bell measurement corresponds to either the singlet or the triplet state. 
If the outcome is the singlet on sites B and C, the gate U on site C is teleported to site A, resulting in the state formula_15. On the other hand, if the outcome is the triplet, which has the full symmetry of the relevant unitary group, the gate V is modified by applying a rotation T on the triplet state, equivalent to the action of formula_21 on site C. This leads to the state formula_22, where t represents the adjoint operation. Achieving desired states. By applying the generalised gate teleportation scheme, the states formula_15 or formula_22 can be realised in a heralded manner, depending on the outcome from the qubit ancilla measurement. By combining this scheme with the POVM (Positive Operator-Valued Measure) scheme on site D, the gates formula_13 or formula_23 can be simulated, with the output on site A for final readout. Avoiding transposition issue. Although the generalised gate teleportation scheme enables the composition of Choi states and the simulation of desired gates, there is an apparent issue of transposition. However, this issue can be avoided by expressing any unitary operator as a product of two symmetric unitary operators. Therefore, for any unitary U, only two Choi program states, formula_24 and formula_25, are needed to deterministically teleport U. Mixed Choi state. In the case of channels whose Choi states are mixed states, the symmetry condition does not directly generalise as it does for unitary operators. However, a scheme based on direct-sum dilation can be employed to overcome this obstacle. For a channel E with a set of Kraus operators formula_26, each Kraus operator can be dilated to a unitary operator formula_27. The dilation is given by formula_28, where formula_27 acts on a space of dimension 2d. Dilation-based scheme. In this scheme, each Kraus operator is expanded to a larger unitary operator, allowing the use of symmetry-based techniques. By considering the larger unitary operators, the issue of dealing with mixed Choi states is circumvented, and the computations can proceed using unitary transformations. Unitary dilation. The channel formula_29 can be simulated by using a random-unitary channel, where the controlled-unitary gate U̘ acts on the joint system of the input state ρ and an ancilla qubit. The ancilla qubit, initially prepared in the state |e⟩, is later traced out. The state σ is a combination of ρ and 0, where 0 represents the state of the ancilla on the dilated subspace. The action E(ρ) is obtained by restricting the evolution to the system subspace. Simulation of the channel. In this scheme, the simulation of channel E involves applying the controlled-unitary gate U̘ to the input state ρ and the ancilla qubit prepared in the state |e⟩. The gate U̘ combines the Kraus operators formula_27 with the ancilla qubit. After tracing out the ancilla qubit, the resulting state σ is a combination of ρ and 0, with 0 representing the state of the ancilla on the dilated subspace. Finally, the action of the channel E on the input state ρ is obtained by considering the evolution restricted to the system subspace. Teleportation of controlled-unitary gates. In comparison to the unitary case, the task here is to teleport controlled-unitary gates instead of unitary gates. This can be achieved by extending the scheme used in the unitary case. For each formula_27 in U̘, there exists a gate formula_30 that can teleport it. The formula_30 gates are controlled by the same ancilla used for formula_27. When a singlet is obtained, the channel E is teleported. 
To avoid the issue of transposition, each formula_27 is decomposed as the product of two symmetric unitary matrices, formula_27 = formula_31. By using the same control wire for formula_32 and formula_33 and employing two program states, the gate U̘ can be teleported, thereby teleporting the channel E. POVM and channel design. To execute the action of the channel on a state, a POVM (Positive Operator-Valued Measure) and a channel based on the state formula_34 need to be designed. The channel formula_35, an extension of the channel R, contains three Kraus operators: formula_36 and formula_37. This channel requires a qutrit ancilla, and when the outcome is 2, indicating the occurrence of formula_38, which is equal to 1 due to the trace-preserving condition, the simulation has to be restarted. Special cases. For special types of channels, the scheme can be significantly simplified. Random unitary channels, which are a broad class of channels, can be realised using the controlled-unitary scheme mentioned earlier, without the need for direct-sum dilation. Unital channels, which preserve the identity, are random unitary channels for qubits and can be easily simulated. Another type of channel is the entanglement-breaking channel, characterised by bipartite separable Choi states. These channels and program states are trivial since there is no entanglement, and they can be simulated using a measurement-preparation scheme. Preparation of program states. Now we study the preparation of program states if they are not given for free. Choi states and program state preparation. A Choi state C is not easy to prepare on the first hand, namely, this may require the operation of E on the Bell state formula_39, and realising E itself (e.g., by a dilated unitary) is a nontrivial task. From Stinespring's dilation, we know that it requires the form of Kraus operators, which are not easy to find in general given a Choi state. Convexity and extreme channels. The set of qudit channels forms a convex body. This means that a convex sum of channels still leads to a channel, and there are extreme channels that are not convex sums of others. From Choi, a channel is extreme if there exists a Kraus representation formula_40 such that the set formula_41 is linearly independent. For a qudit, this means the rank of an extreme channel is at most formula_42. Channels of rank formula_43 are termed as generalised-extreme channels, here termed as 'gen-extreme channels.' Convex-sum decomposition and random bits. It is clear to see that a gen-extreme but not extreme channel is a convex sum of extreme channels of lower ranks. It has been conjectured and numerically supported that an arbitrary channel can be written as a convex sum of at most formula_42 gen-extreme channels formula_44. This requires a random dit. For the worst case, the upper bound for such a convex sum is formula_45 from Carathéodory's theorem on convex sets, which merely costs more random dits. Quantum circuit complexity reduction. Simulating composition of gen-extreme channels. To simulate the composition formula_46, with each formula_47 of rank greater than formula_42, hence permitting a convex-sum decomposition, one needs to sample the composition of gen-extreme channels. We find that there exists a concise form of Choi states for gen-extreme channels, which can be used to find the circuit and also Kraus operators directly. Quantum circuit realization of channels and Choi states for gen-extreme channels. 
The Choi state formula_48 for a gen-extreme channel formula_29 is of rank formula_43 and formula_49"." It turns out formula_50 for formula_51 for formula_52, and formula_53 . Observe that formula_54, for an isometry formula_55. Here formula_56 is an ancilla state. Now we show that formula_16 can be used to find a quantum circuit to realise formula_29. Given formula_16, we can find a unitary dilation formula_57 such that formula_58, and it relates to the channel by formula_59, while the final projection formula_60 is on the system. Define formula_61 for the swap gate between the system and ancilla, which are of the same dimension. Then we find formula_62, which means formula_63 is the circuit to realise the channel formula_29 as in the standard dilation scheme. The Kraus operators can be obtained from it as formula_64. Circuit complexity reduction. Compared with the standard (tensor-product) dilation method to simulate a general channel, which requires two qudit ancillas, the method above requires lower circuit cost since it only needs a single qudit ancilla instead of two. While the convex-sum decomposition, which is a sort of generalised eigenvalue decomposition since a gen-extreme Choi state can be mapped to a pure state, is difficult to solve for large-dimensional channels, it shall be comparable with the eigen-decomposition of the Choi state to find the set of Kraus operators. Both of the decompositions are solvable for smaller systems. It's important to note that this discussion focuses on the primary components of the model and does not address fault tolerance, as it is beyond the scope of this model. We assume fault-tolerant qubits, gates, and measurements, which can be achieved with quantum error-correcting codes. Additionally, we highlight two intriguing issues that establish connections with standard frameworks and results. Connections with standard frameworks and results. Teleportation of universal gate set. A computation is universal if the group formula_65 can be realised for any integer formula_66. The common approach to achieving universality is by gate-compiling based on universal gate sets. Our method can be used to teleport unitary universal gate sets. Consider the popular Hadamard gate formula_67, phase gate formula_1, the so-called formula_68 gate formula_69, CNOT, CZ, and Toffoli gate. One immediately notices that these gates are all symmetric matrices. We see above that symmetric unitary operators, for which formula_70, can be teleported deterministically, and the byproduct are Pauli operators. Note that a product of symmetric matrices is not symmetric in general. It is easy to check that the affine forms of formula_67, formula_1, CNOT, and CZ are (generalised) permutations since they are Clifford gates, which preserve the Pauli group. A generalised permutation is a permutation that also allows entry of modulus 1 besides 1 itself. The formula_69 gate and Toffoli gate are not Clifford gates, and their affine forms are not permutations. Instead, the affine forms of them contain a Hadamard-like gate as a sub-matrix, which means, in the Heisenberg picture, they are able to generate superpositions of Pauli operators. This fact also generalises to the qudit case, with Hadamard replaced by Fourier transform operators. This serves as an intriguing fact regarding the origin of the computational power of quantum computing. Stored-program quantum computing. 
In this approach, a modification is introduced to enable the simulation of the operation formula_71 using a generalised gate teleportation scheme. This proposed method allows for the unitary simulation of formula_71 by utilising a processor formula_72 that depends on the input program state. For symmetric matrices formula_57, the program state formula_73 is sufficient to achieve the desired results. However, in general cases whereformula_74, both program states formula_75 and formula_76 are required for deterministic teleportation and composition. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
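To make the channel–state correspondence of the Definition section concrete, the sketch below builds the Choi state of a single-qubit channel from a set of Kraus operators and recovers the channel's action from it. It is a minimal illustration under assumptions made here: the amplitude-damping channel, its parameter, and all names are chosen for the example and are not taken from the references; the normalisation of the maximally entangled state follows the definition above.

import numpy as np

def choi_state(kraus, d):
    # C = (I (x) E)(|Phi+><Phi+|) with |Phi+> = (1/sqrt(d)) sum_i |i>|i>; E acts on the second factor
    phi = np.zeros((d * d, 1), dtype=complex)
    for i in range(d):
        e = np.zeros((d, 1)); e[i] = 1.0
        phi += np.kron(e, e)
    phi /= np.sqrt(d)
    rho_phi = phi @ phi.conj().T
    C = np.zeros_like(rho_phi)
    for K in kraus:
        M = np.kron(np.eye(d), K)
        C += M @ rho_phi @ M.conj().T
    return C

def apply_from_choi(C, rho, d):
    # E(|i><j|) = d * <i|_A C |j>_A, i.e. d times the (i, j) block of C; extend by linearity
    out = np.zeros((d, d), dtype=complex)
    for i in range(d):
        for j in range(d):
            out += rho[i, j] * d * C[i * d:(i + 1) * d, j * d:(j + 1) * d]
    return out

# amplitude-damping channel on one qubit with damping probability p (illustrative choice)
p = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
C = choi_state([K0, K1], 2)
rho = np.array([[0.25, 0.2], [0.2, 0.75]], dtype=complex)
direct = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
print(np.allclose(apply_from_choi(C, rho, 2), direct))   # True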
[ { "math_id": 0, "text": "\\mathcal{E}" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "S'" }, { "math_id": 3, "text": "\\mathcal{L}(\\mathcal{H}_S)" }, { "math_id": 4, "text": "\\mathcal{L}(\\mathcal{H}_{S'})" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "|\\Phi^+\\rangle=\\frac{1}{\\sqrt{d}}\\sum_{i=0}^{d-1}|i\\rangle\\otimes|i\\rangle=\\frac{1}{\\sqrt{d}}(|0\\rangle\\otimes|0\\rangle+\\cdots+|d-1\\rangle\\otimes|d-1\\rangle)" }, { "math_id": 7, "text": "\\mathcal{H}_A\\otimes\\mathcal{H}_S" }, { "math_id": 8, "text": "(I_A\\otimes \\mathcal{E})(|\\Phi^+\\rangle\\langle\\Phi^+|)" }, { "math_id": 9, "text": "\\mathcal{H}_A\\otimes\\mathcal{H}_{S'}" }, { "math_id": 10, "text": "\\vert \\psi_U\\rangle_i " }, { "math_id": 11, "text": " \\vert \\psi_V\\rangle_i" }, { "math_id": 12, "text": "UV" }, { "math_id": 13, "text": "VU" }, { "math_id": 14, "text": "\\vert \\psi_{UV}\\rangle_i" }, { "math_id": 15, "text": "\\vert \\psi_{VU}\\rangle_i" }, { "math_id": 16, "text": "V" }, { "math_id": 17, "text": "UPV" }, { "math_id": 18, "text": "U_T" }, { "math_id": 19, "text": "G(\\sigma) = \\text{tr}[U_T \\circ U^\\dagger_B(\\sigma \\otimes \\vert 1\\rangle\\langle 1\\vert)]" }, { "math_id": 20, "text": "U^\\dagger_B" }, { "math_id": 21, "text": "V^\\dagger" }, { "math_id": 22, "text": "\\vert \\psi_{VtU}\\rangle_i" }, { "math_id": 23, "text": "VtU" }, { "math_id": 24, "text": "\\vert \\psi_{UL}\\rangle_i" }, { "math_id": 25, "text": "\\vert \\psi_{UR}\\rangle_i" }, { "math_id": 26, "text": "{K_i}" }, { "math_id": 27, "text": "U_Ki" }, { "math_id": 28, "text": "U_Ki = [[K_i, q], [1 - K_i^\\dagger K_i, -K_i]]" }, { "math_id": 29, "text": "E" }, { "math_id": 30, "text": "T_i" }, { "math_id": 31, "text": "U_LKi U_RKi" }, { "math_id": 32, "text": "U_LKi" }, { "math_id": 33, "text": "U_RKi" }, { "math_id": 34, "text": "\\rho \\oplus 0" }, { "math_id": 35, "text": "R_0" }, { "math_id": 36, "text": "K_0 = [p\\rho t, 0], K_1 = [p1-\\rho t, 0]" }, { "math_id": 37, "text": "K_2 = [0, 1]" }, { "math_id": 38, "text": "E^\\dagger(1)" }, { "math_id": 39, "text": "\\vert \\omega_i \\rangle" }, { "math_id": 40, "text": "{ K_i }" }, { "math_id": 41, "text": "{ K_i^\\dagger K_j }" }, { "math_id": 42, "text": "d" }, { "math_id": 43, "text": "r \\leq d" }, { "math_id": 44, "text": "E = \\sum_{i=1}^d p_i E_{\\text{g}_i}" }, { "math_id": 45, "text": "d \\frac{4-d}{2}" }, { "math_id": 46, "text": "Q_i E_i" }, { "math_id": 47, "text": "E_i" }, { "math_id": 48, "text": "C" }, { "math_id": 49, "text": "\\text{tr}A C = 1, \\text{tr}B C = E(1)" }, { "math_id": 50, "text": "C = \\sum{ij} \\vert ii \\rangle \\langle jj \\vert \\otimes C{ij}" }, { "math_id": 51, "text": "C_{ij} := E^\\dagger (\\vert ii \\rangle \\langle jj \\vert) = \\sum_i C_i U_i^\\dagger U_j C_j" }, { "math_id": 52, "text": "C_i \\equiv C_{ii}" }, { "math_id": 53, "text": "U_i, U_j \\in SU(d)" }, { "math_id": 54, "text": "E^\\dagger (\\rho) = \\sum_{ij} \\rho_{ij} C_{ij} = V^\\dagger (\\rho \\otimes 1)V" }, { "math_id": 55, "text": "V = \\sum_i \\vert ii \\rangle U_i \\sqrt{C_i}" }, { "math_id": 56, "text": "1" }, { "math_id": 57, "text": "U" }, { "math_id": 58, "text": "U \\vert 0 \\rangle = V" }, { "math_id": 59, "text": "E^\\dagger (\\rho) = \\langle 0 \\vert U^\\dagger (\\rho \\otimes 1)U \\vert 0 \\rangle" }, { "math_id": 60, "text": "\\vert 0 \\rangle \\langle 0 \\vert" }, { "math_id": 61, "text": "W = \\text{swap} \\cdot U^\\dagger" }, { "math_id": 62, "text": "E(\\rho) = \\text{tr}_W (\\rho \\otimes \\vert 0 \\rangle \\langle 0 
\\vert)" }, { "math_id": 63, "text": "W" }, { "math_id": 64, "text": "K_i = \\langle i \\vert W \\vert 0 \\rangle" }, { "math_id": 65, "text": "SU(2^n)" }, { "math_id": 66, "text": "n" }, { "math_id": 67, "text": "H" }, { "math_id": 68, "text": "Z^{1/4}" }, { "math_id": 69, "text": "T" }, { "math_id": 70, "text": "U = U^\\dagger" }, { "math_id": 71, "text": "U\\vert d_i\\rangle" }, { "math_id": 72, "text": "G" }, { "math_id": 73, "text": "\\vert\\psi_U\\rangle_i" }, { "math_id": 74, "text": "U = U_LU_R" }, { "math_id": 75, "text": "\\vert\\psi_{UL}\\rangle_i" }, { "math_id": 76, "text": "\\vert\\psi_{UR}\\rangle_i" } ]
https://en.wikipedia.org/wiki?curid=61414800
614192
Paschen's law
Physical law about electrical discharge in gases Paschen's law is an equation that gives the breakdown voltage, that is, the voltage necessary to start a discharge or electric arc, between two electrodes in a gas as a function of pressure and gap length. It is named after Friedrich Paschen who discovered it empirically in 1889. Paschen studied the breakdown voltage of various gases between parallel metal plates as the gas pressure and gap distance were varied: For a given gas, the voltage is a function only of the product of the pressure and gap length. The curve he found of voltage versus the pressure-gap length product "(right)" is called Paschen's curve. He found an equation that fit these curves, which is now called Paschen's law. At higher pressures and gap lengths, the breakdown voltage is approximately "proportional" to the product of pressure and gap length, and the term Paschen's law is sometimes used to refer to this simpler relation. However, this is only roughly true, over a limited range of the curve. Paschen curve. Early vacuum experimenters found a rather surprising behavior. An arc would sometimes take place in a long irregular path rather than at the minimal distance between the electrodes. For example, in air, at a pressure of one atmosphere, the distance for minimal breakdown voltage is about 7.5 μm. The voltage required to arc this distance is 327 V, which is insufficient to ignite the arcs for gaps that are either wider or narrower. For a 3.5 μm gap, the required voltage is 533 V, nearly twice as much. If 500 V were applied, it would not be sufficient to arc at the 2.85 μm distance, but would arc at a 7.5 μm distance. Paschen found that breakdown voltage was described by the equation formula_0 where formula_1 is the breakdown voltage in volts, formula_2 is the pressure in pascals, formula_3 is the gap distance in meters, formula_4 is the secondary-electron-emission coefficient (the number of secondary electrons produced per incident positive ion), formula_5 is the saturation ionization in the gas at a particular formula_6 (electric field/pressure), and formula_7 is related to the excitation and ionization energies. The constants formula_5 and formula_7 interpolate the first Townsend coefficient formula_8. They are determined experimentally and found to be roughly constant over a restricted range of formula_6 for any given gas. For example, air with an formula_6 in the range of 450 to 7500 V/(kPa·cm), formula_5 = 112.50 (kPa·cm)−1 and formula_7 = 2737.50 V/(kPa·cm). The graph of this equation is the Paschen curve. By differentiating it with respect to formula_9 and setting the derivative to zero, the minimal voltage can be found. This yields formula_10 and predicts the occurrence of a minimal breakdown voltage for formula_9 = 7.5×10−6 m·atm. This is 327 V in air at standard atmospheric pressure at a distance of 7.5 μm. The composition of the gas determines both the minimal arc voltage and the distance at which it occurs. For argon, the minimal arc voltage is 137 V at a larger 12 μm. For sulfur dioxide, the minimal arc voltage is 457 V at only 4.4 μm. Long gaps. For air at standard conditions for temperature and pressure (STP), the voltage needed to arc a 1-metre gap is about 3.4 MV. The intensity of the electric field for this gap is therefore 3.4 MV/m. The electric field needed to arc across the minimal-voltage gap is much greater than what is necessary to arc a gap of one metre. At large gaps (or large pd) Paschen's Law is known to fail. 
The Meek Criteria for breakdown is usually used for large gaps. It takes into account non-uniformity in the electric field and formation of streamers due to the build up of charge within the gap that can occur over long distances. For a 7.5 μm gap the arc voltage is 327 V, which is 43 MV/m. This is about 14 times greater than the field strength for the 1.5-metre gap. The phenomenon is well verified experimentally and is referred to as the Paschen minimum. The equation loses accuracy for gaps under about 10 μm in air at one atmosphere and incorrectly predicts an infinite arc voltage at a gap of about 2.7 micrometres. Breakdown voltage can also differ from the Paschen curve prediction for very small electrode gaps, when field emission from the cathode surface becomes important. Physical mechanism. The mean free path of a molecule in a gas is the average distance between its collision with other molecules. This is inversely proportional to the pressure of the gas, given constant temperature. In air at STP the mean free path of molecules is about 96 nm. Since electrons are much smaller, their average distance between colliding with molecules is about 5.6 times longer, or about 0.5 μm. This is a substantial fraction of the 7.5 μm spacing between the electrodes for minimal arc voltage. If the electron is in an electric field of 43 MV/m, it will be accelerated and acquire 21.5 eV of energy in 0.5 μm of travel in the direction of the field. The first ionization energy needed to dislodge an electron from nitrogen molecule is about 15.6 eV. The accelerated electron will acquire more than enough energy to ionize a nitrogen molecule. This liberated electron will in turn be accelerated, which will lead to another collision. A chain reaction then leads to avalanche breakdown, and an arc takes place from the cascade of released electrons. More collisions will take place in the electron path between the electrodes in a higher-pressure gas. When the pressure–gap product formula_9 is high, an electron will collide with many different gas molecules as it travels from the cathode to the anode. Each of the collisions randomizes the electron direction, so the electron is not always being accelerated by the electric field—sometimes it travels back towards the cathode and is decelerated by the field. Collisions reduce the electron's energy and make it more difficult for it to ionize a molecule. Energy losses from a greater number of collisions require larger voltages for the electrons to accumulate sufficient energy to ionize many gas molecules, which is required to produce an avalanche breakdown. On the left side of the Paschen minimum, the formula_9 product is small. The electron mean free path can become long compared to the gap between the electrodes. In this case, the electrons might gain large amounts of energy, but have fewer ionizing collisions. A greater voltage is therefore required to assure ionization of enough gas molecules to start an avalanche. Derivation. Basics. To calculate the breakthrough voltage, a homogeneous electrical field is assumed. This is the case in a parallel-plate capacitor setup. The electrodes may have the distance formula_3. The cathode is located at the point formula_11. To get impact ionization, the electron energy formula_12 must become greater than the ionization energy formula_13 of the gas atoms between the plates. Per length of path formula_14 a number of formula_15 ionizations will occur. 
formula_15 is known as the first Townsend coefficient as it was introduced by Townsend. The increase of the electron current formula_16, can be described for the assumed setup as The number of created electrons is Neglecting possible multiple ionizations of the same atom, the number of created ions is the same as the number of created electrons: formula_17 is the ion current. To keep the discharge going on, free electrons must be created at the cathode surface. This is possible because the ions hitting the cathode release secondary electrons at the impact. (For very large applied voltages also field electron emission can occur.) Without field emission, we can write where formula_18 is the mean number of generated secondary electrons per ion. This is also known as the second Townsend coefficient. Assuming that formula_19, one gets the relation between the Townsend coefficients by putting (4) into (3) and transforming: Impact ionization. What is the amount of formula_15? The number of ionization depends upon the probability that an electron hits a gas molecule. This probability formula_20 is the relation of the cross-sectional area of a collision between electron and ion formula_21 in relation to the overall area formula_5 that is available for the electron to fly through: As expressed by the second part of the equation, it is also possible to express the probability as relation of the path traveled by the electron formula_14 to the mean free path formula_22 (distance at which another collision occurs). formula_23 is the number of molecules which electrons can hit. It can be calculated using the equation of state of the ideal gas The adjoining sketch illustrates that formula_24. As the radius of an electron can be neglected compared to the radius of an ion formula_25 it simplifies to formula_26. Using this relation, putting (7) into (6) and transforming to formula_22 one gets where the factor formula_27 was only introduced for a better overview. The alteration of the current of not yet collided electrons at every point in the path formula_14 can be expressed as This differential equation can easily be solved: The probability that formula_28 (that there was not yet a collision at the point formula_14) is According to its definition formula_15 is the number of ionizations per length of path and thus the relation of the probability that there was no collision in the mean free path of the ions, and the mean free path of the electrons: It was hereby considered that the energy formula_29 that a charged particle can get between a collision depends on the electric field strength formula_30 and the charge formula_31: Breakdown voltage. For the parallel-plate capacitor we have formula_32, where formula_33 is the applied voltage. As a single ionization was assumed formula_31 is the elementary charge formula_34. We can now put (13) and (8) into (12) and get Putting this into (5) and transforming to formula_33 we get the Paschen law for the breakdown voltage formula_35 that was first investigated by Paschen in and whose formula was first derived by Townsend in formula_36 Plasma ignition. Plasma ignition in the definition of Townsend (Townsend discharge) is a self-sustaining discharge, independent of an external source of free electrons. This means that electrons from the cathode can reach the anode in the distance formula_3 and ionize at least one atom on their way. 
So according to the definition of formula_15 this relation must be fulfilled: If formula_37 is used instead of (5) one gets for the breakdown voltage Conclusions, validity. Paschen's law requires that: Effects with different gases. Different gases will have different mean free paths for molecules and electrons. This is because different molecules have ionization cross sections, that is, different effective diameters. Noble gases like helium and argon are monatomic, which makes them harder to ionize and tend to have smaller effective diameters. This gives them greater mean free paths. Ionization potentials differ between molecules, as well as the speed that they recapture electrons after they have been knocked out of orbit. All three effects change the number of collisions needed to cause an exponential growth in free electrons. These free electrons are necessary to cause an arc. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
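Using the air coefficients quoted above, formula_5 = 112.50 (kPa·cm)−1 and formula_7 = 2737.50 V/(kPa·cm), the breakdown curve and its minimum can be evaluated numerically. The sketch below is an illustration only; in particular the value of the secondary-electron-emission coefficient formula_4 is an assumption made here, since it depends on the cathode material and is not fixed above.

import math

def breakdown_voltage(pd, gamma_se, A=112.50, B=2737.50):
    # Paschen's law; pd is the pressure-gap product in kPa*cm, the result is in volts
    return (B * pd) / (math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma_se)))

def paschen_minimum(gamma_se, A=112.50, B=2737.50):
    # pd at the minimum, obtained by setting the derivative with respect to pd to zero
    pd_min = math.e * math.log(1.0 + 1.0 / gamma_se) / A
    return pd_min, breakdown_voltage(pd_min, gamma_se, A, B)

pd_min, v_min = paschen_minimum(gamma_se=0.01)   # gamma_se = 0.01 is an assumed value
print(pd_min, v_min)   # pressure-gap product (kPa*cm) and breakdown voltage (V) at the minimum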
[ { "math_id": 0, "text": "V_\\text{B} = \\frac{Bpd}{\\ln(Apd) - \\ln\\left[\\ln\\left(1 + \\frac{1}{\\gamma_\\text{se}}\\right)\\right]}" }, { "math_id": 1, "text": "V_\\text{B}" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "\\gamma_\\text{se}" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "E/p" }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "\\alpha=Ap e^{-Bp/E}" }, { "math_id": 9, "text": "pd" }, { "math_id": 10, "text": "pd = \\frac{e \\cdot \\ln\\left(1+\\frac{1}{{\\mathit{\\gamma}}_{se}}\\right)}{A}" }, { "math_id": 11, "text": "x = 0" }, { "math_id": 12, "text": "E_e" }, { "math_id": 13, "text": "E_\\text{I}" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "\\alpha" }, { "math_id": 16, "text": "\\Gamma_e" }, { "math_id": 17, "text": "\\Gamma_i" }, { "math_id": 18, "text": "\\gamma" }, { "math_id": 19, "text": "\\Gamma_i(d) = 0" }, { "math_id": 20, "text": "P" }, { "math_id": 21, "text": "\\sigma" }, { "math_id": 22, "text": "\\lambda" }, { "math_id": 23, "text": "N" }, { "math_id": 24, "text": "\\sigma = \\pi (r_a + r_b)^2" }, { "math_id": 25, "text": "r_I" }, { "math_id": 26, "text": "\\sigma = \\pi r_I^2" }, { "math_id": 27, "text": "L" }, { "math_id": 28, "text": "\\lambda > x" }, { "math_id": 29, "text": "E" }, { "math_id": 30, "text": "\\mathcal{E}" }, { "math_id": 31, "text": "Q" }, { "math_id": 32, "text": "\\mathcal{E} = \\frac{U}{d}" }, { "math_id": 33, "text": "U" }, { "math_id": 34, "text": "e" }, { "math_id": 35, "text": "U_{\\mathrm{breakdown}}" }, { "math_id": 36, "text": "L = \\frac{\\pi r_{I}^{2}}{k_{B}T}" }, { "math_id": 37, "text": "\\alpha d = 1" }, { "math_id": 38, "text": "\\Gamma_e(x=0) \\ne 0" } ]
https://en.wikipedia.org/wiki?curid=614192
61422206
Schrödinger–HJW theorem
Concept in quantum information theory In quantum information theory and quantum optics, the Schrödinger–HJW theorem is a result about the realization of a mixed state of a quantum system as an ensemble of pure quantum states and the relation between the corresponding purifications of the density operators. The theorem is named after physicists and mathematicians Erwin Schrödinger, Lane P. Hughston, Richard Jozsa and William Wootters. The result was also found independently (albeit partially) by Nicolas Gisin, and by Nicolas Hadjisavvas building upon work by Ed Jaynes, while a significant part of it was likewise independently discovered by N. David Mermin. Thanks to its complicated history, it is also known by various other names such as the GHJW theorem, the HJW theorem, and the purification theorem. Purification of a mixed quantum state. Let formula_0 be a finite-dimensional complex Hilbert space, and consider a generic (possibly mixed) quantum state formula_1 defined on formula_0 and admitting a decomposition of the form formula_2 for a collection of (not necessarily mutually orthogonal) states formula_3 and coefficients formula_4 such that formula_5. Note that any quantum state can be written in such a way for some formula_6 and formula_7. Any such formula_1 can be "purified", that is, represented as the partial trace of a pure state defined in a larger Hilbert space. More precisely, it is always possible to find a (finite-dimensional) Hilbert space formula_8 and a pure state formula_9 such that formula_10. Furthermore, the states formula_11 satisfying this are all and only those of the form formula_12 for some orthonormal basis formula_13. The state formula_11 is then referred to as the "purification of formula_1". Since the auxiliary space and the basis can be chosen arbitrarily, the purification of a mixed state is not unique; in fact, there are infinitely many purifications of a given mixed state. Because all of them admit a decomposition in the form given above, given any pair of purifications formula_14, there is always some unitary operation formula_15 such that formula_16 Theorem. Consider a mixed quantum state formula_1 with two different realizations as ensemble of pure states as formula_17 and formula_18. Here both formula_19and formula_20 are not assumed to be mutually orthogonal. There will be two corresponding purifications of the mixed state formula_1 reading as follows: Purification 1: formula_21; Purification 2: formula_22. The sets formula_23and formula_24 are two collections of orthonormal bases of the respective auxiliary spaces. These two purifications only differ by a unitary transformation acting on the auxiliary space, namely, there exists a unitary matrix formula_25 such that formula_26. Therefore, formula_27, which means that we can realize the different ensembles of a mixed state just by making different measurements on the purifying system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
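The purification construction above is easy to check numerically. The following sketch is an illustration only: it uses the eigen-decomposition of the density matrix as the ensemble, and all names are chosen here.

import numpy as np

def purify(rho):
    # |Psi> = sum_i sqrt(p_i) |phi_i> (x) |a_i>, with (p_i, |phi_i>) the eigen-decomposition
    # of rho and {|a_i>} the computational basis of the auxiliary space
    p, phi = np.linalg.eigh(rho)
    d = rho.shape[0]
    psi = np.zeros(d * d, dtype=complex)
    for i in range(d):
        a = np.zeros(d); a[i] = 1.0
        psi += np.sqrt(max(p[i], 0.0)) * np.kron(phi[:, i], a)
    return psi

def trace_out_ancilla(psi, d):
    # Tr_A |Psi><Psi| for |Psi> ordered as system (x) ancilla
    M = psi.reshape(d, d)          # M[s, a] = (<s| (x) <a|) |Psi>
    return M @ M.conj().T

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
psi = purify(rho)
print(np.allclose(trace_out_ancilla(psi, 2), rho))   # True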
[ { "math_id": 0, "text": "\\mathcal H_S" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "\\rho = \\sum_i p_i|\\phi_i\\rangle\\langle\\phi_i|" }, { "math_id": 3, "text": "|\\phi_i\\rangle \\in \\mathcal H_S" }, { "math_id": 4, "text": "p_i \\ge 0" }, { "math_id": 5, "text": "\\sum_i p_i = 1" }, { "math_id": 6, "text": "\\{|\\phi_i\\rangle\\}_i" }, { "math_id": 7, "text": "\\{p_i\\}_i" }, { "math_id": 8, "text": "\\mathcal H_A" }, { "math_id": 9, "text": "|\\Psi_{SA}\\rangle \\in \\mathcal H_S \\otimes \\mathcal H_A" }, { "math_id": 10, "text": "\\rho = \\operatorname{Tr}_A\\big(|\\Psi_{SA}\\rangle\\langle\\Psi_{SA}|\\big)" }, { "math_id": 11, "text": "|\\Psi_{SA}\\rangle" }, { "math_id": 12, "text": "|\\Psi_{SA}\\rangle = \\sum_i \\sqrt{p_i} |\\phi_i\\rangle \\otimes |a_i\\rangle" }, { "math_id": 13, "text": "\\{|a_i\\rangle\\}_i \\subset \\mathcal H_A" }, { "math_id": 14, "text": "|\\Psi\\rangle, |\\Psi'\\rangle \\in \\mathcal H_S \\otimes \\mathcal H_A" }, { "math_id": 15, "text": "U : \\mathcal H_A \\to \\mathcal H_A" }, { "math_id": 16, "text": "|\\Psi'\\rangle = (I\\otimes U) |\\Psi\\rangle." }, { "math_id": 17, "text": "\\rho = \\sum_i p_i |\\phi_i\\rangle\\langle\\phi_i|" }, { "math_id": 18, "text": "\\rho = \\sum_j q_j |\\varphi_j\\rangle\\langle\\varphi_j|" }, { "math_id": 19, "text": "|\\phi_i\\rangle" }, { "math_id": 20, "text": "|\\varphi_j\\rangle" }, { "math_id": 21, "text": "|\\Psi_{SA}^1\\rangle=\\sum_i\\sqrt{p_i}|\\phi_i\\rangle \\otimes |a_i\\rangle" }, { "math_id": 22, "text": "|\\Psi_{SA}^2\\rangle=\\sum_j\\sqrt{q_j}|\\varphi_j\\rangle \\otimes |b_j\\rangle" }, { "math_id": 23, "text": "\\{|a_i\\rangle\\}" }, { "math_id": 24, "text": "\\{|b_j\\rangle\\}" }, { "math_id": 25, "text": "U_A" }, { "math_id": 26, "text": "|\\Psi^1_{SA}\\rangle = (I\\otimes U_A)|\\Psi^2_{SA}\\rangle" }, { "math_id": 27, "text": "|\\Psi_{SA}^1\\rangle = \\sum_j \\sqrt{q_j}|\\varphi_j\\rangle\\otimes U_A|b_j\\rangle" } ]
https://en.wikipedia.org/wiki?curid=61422206
614230
Event-related potential
Brain response that is the direct result of a specific sensory, cognitive, or motor event An event-related potential (ERP) is the measured brain response that is the direct result of a specific sensory, cognitive, or motor event. More formally, it is any stereotyped electrophysiological response to a stimulus. The study of the brain in this way provides a noninvasive means of evaluating brain functioning. ERPs are measured by means of electroencephalography (EEG). The magnetoencephalography (MEG) equivalent of ERP is the ERF, or event-related field. Evoked potentials and induced potentials are subtypes of ERPs. History. With the discovery of the electroencephalogram (EEG) in 1924, Hans Berger revealed that one could measure the electrical activity of the human brain by placing electrodes on the scalp and amplifying the signal. Changes in voltage can then be plotted over a period of time. He observed that the voltages could be influenced by external events that stimulated the senses. The EEG proved to be a useful means of recording brain activity over the ensuing decades. However, it tended to be very difficult to assess the highly specific neural processes that are the focus of cognitive neuroscience, because using pure EEG data made it difficult to isolate individual neurocognitive processes. Event-related potentials (ERPs) offered a more sophisticated method of extracting responses to specific sensory, cognitive, and motor events by using simple averaging techniques. In 1935–1936, Pauline and Hallowell Davis recorded the first known ERPs on awake humans, and their findings were published a few years later, in 1939. Due to World War II, not much research was conducted in the 1940s, but research focusing on sensory issues picked back up again in the 1950s. In 1964, research by Grey Walter and colleagues began the modern era of ERP component discoveries when they reported the first cognitive ERP component, called the contingent negative variation (CNV). Sutton, Braren, and Zubin (1965) made another advancement with the discovery of the P3 component. Over the next fifteen years, ERP component research became increasingly popular. The 1980s, with the introduction of inexpensive computers, opened up a new door for cognitive neuroscience research. Currently, ERP is one of the most widely used methods in cognitive neuroscience research to study the physiological correlates of sensory, perceptual and cognitive activity associated with processing information. Calculation. ERPs can be reliably measured using electroencephalography (EEG), a procedure that measures electrical activity of the brain over time using electrodes placed on the scalp. The EEG reflects thousands of simultaneously ongoing brain processes. This means that the brain response to a single stimulus or event of interest is not usually visible in the EEG recording of a single trial. To see the brain's response to a stimulus, the experimenter must conduct many trials and average the results together, so that random brain activity is averaged out and the relevant waveform, the ERP, remains. The random (background) brain activity together with other bio-signals (e.g., EOG, EMG, EKG) and electromagnetic interference (e.g., line noise, fluorescent lamps) constitute the noise contribution to the recorded ERP. This noise obscures the signal of interest, which is the sequence of underlying ERPs under study. From an engineering point of view it is possible to define the signal-to-noise ratio (SNR) of the recorded ERPs.
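Before turning to the formal justification below, the averaging procedure can be illustrated with a short NumPy sketch (purely synthetic data; the waveform shape, noise level, and trial count are illustrative assumptions rather than values from any real recording):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500              # assumed number of trials and samples
t = np.linspace(0.0, 0.5, n_samples)        # 0-500 ms after stimulus onset

# Assumed "true" ERP: a small positive deflection peaking near 300 ms (in volts).
signal = 2e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.03 ** 2))

# Each trial is the same signal plus independent zero-mean noise (sigma >> signal).
sigma = 20e-6
trials = signal + rng.normal(0.0, sigma, size=(n_trials, n_samples))

erp = trials.mean(axis=0)                   # average over trials

print("single-trial noise std:", trials[0].std())          # roughly sigma
print("residual noise std    :", (erp - signal).std())     # roughly sigma / sqrt(N)
print("predicted residual std:", sigma / np.sqrt(n_trials))
```

With these assumed values the residual noise after averaging is more than an order of magnitude smaller than in a single trial, in line with the 1/√N scaling derived below.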
Averaging increases the SNR of the recorded ERPs, making them discernible and allowing for their interpretation. This has a simple mathematical explanation, provided that some simplifying assumptions are made. These assumptions are that the signal of interest is identical in every trial and that the noise is a zero-mean random process with variance formula_0 that is uncorrelated from trial to trial. Having defined formula_1, the trial number, and formula_2, the time elapsed after the formula_1th event, each recorded trial can be written as formula_3 where formula_4 is the signal and formula_5 is the noise (under the assumptions above, the signal does not depend on the specific trial while the noise does). The average of formula_6 trials is formula_7. The expected value of formula_8 is (as hoped) the signal itself, formula_9. Its variance is formula_10. For this reason the noise amplitude of the average of formula_6 trials is expected to deviate from the mean (which is formula_4) by no more than formula_11 in 68% of the cases. In particular, the deviation within which 68% of the noise amplitudes lie is formula_12 times that of a single trial. A larger deviation of formula_13 can already be expected to encompass 95% of all noise amplitudes. Wide-amplitude noise sources (such as eye blinks or movement artifacts) are often several orders of magnitude larger than the underlying ERPs. Therefore, trials containing such artifacts should be removed before averaging. Artifact rejection can be performed manually by visual inspection or using an automated procedure based on predefined fixed thresholds (limiting the maximum EEG amplitude or slope) or on time-varying thresholds derived from the statistics of the set of trials. Nomenclature. ERP waveforms consist of a series of positive and negative voltage deflections, which are related to a set of underlying components. Though some ERP components are referred to with acronyms (e.g., contingent negative variation – CNV, error-related negativity – ERN), most components are referred to by a letter (N/P) indicating polarity (negative/positive), followed by a number indicating either the latency in milliseconds or the component's ordinal position in the waveform. For instance, a negative-going peak that is the first substantial peak in the waveform and often occurs about 100 milliseconds after a stimulus is presented is often called the N100 (indicating its latency is 100 ms after the stimulus and that it is negative) or N1 (indicating that it is the first peak and is negative); it is often followed by a positive peak, usually called the P200 or P2. The stated latencies for ERP components are often quite variable, particularly so for the later components that are related to the cognitive processing of the stimulus. For example, the P300 component may exhibit a peak anywhere between 250 ms and 700 ms. Advantages and disadvantages. Relative to behavioral measures. Compared with behavioral procedures, ERPs provide a continuous measure of processing between a stimulus and a response, making it possible to determine which stage(s) are being affected by a specific experimental manipulation. Another advantage over behavioral measures is that they can provide a measure of processing of stimuli even when there is no behavioral change. However, because ERPs are very small, a large number of trials is usually needed to measure them accurately. Relative to other neurophysiological measures. Invasiveness. Unlike microelectrodes, which require an electrode to be inserted into the brain, and PET scans that expose humans to radiation, ERPs use EEG, a non-invasive procedure. Spatial and temporal resolution.
ERPs provide excellent temporal resolution, as the speed of ERP recording is only constrained by the sampling rate that the recording equipment can feasibly support, whereas hemodynamic measures (such as fMRI, PET, and fNIRS) are inherently limited by the slow speed of the BOLD response. The spatial resolution of an ERP, however, is much poorer than that of hemodynamic methods; in fact, the location of ERP sources is an inverse problem that cannot be exactly solved, only estimated. Thus, ERPs are well suited to research questions about the speed of neural activity, and are less well suited to research questions about the location of such activity. Cost. ERP research is much cheaper to do than other imaging techniques such as fMRI, PET, and MEG. This is because purchasing and maintaining an EEG system is less expensive than the other systems. Clinical. Physicians and neurologists will sometimes use a flashing visual checkerboard stimulus to test for any damage or trauma in the visual system. In a healthy person, this stimulus will elicit a strong response over the primary visual cortex located in the occipital lobe, in the back of the brain. ERP component abnormalities have been shown in clinical research on a range of neurological conditions. Research. ERPs are used extensively in neuroscience, cognitive psychology, cognitive science, and psycho-physiological research. Experimental psychologists and neuroscientists have discovered many different stimuli that elicit reliable ERPs from participants. The timing of these responses is thought to provide a measure of the timing of the brain's communication or timing of information processing. For example, in the checkerboard paradigm described above, healthy participants' first response in the visual cortex occurs at around 50–70 ms. This would seem to indicate that this is the amount of time it takes for the transduced visual stimulus to reach the cortex after light first enters the eye. Alternatively, the P300 response occurs at around 300 ms in the oddball paradigm, for example, regardless of the type of stimulus presented: visual, tactile, auditory, olfactory, gustatory, etc. Because of this general invariance with regard to stimulus type, the P300 component is understood to reflect a higher cognitive response to unexpected and/or cognitively salient stimuli. The P300 response has also been studied in the context of information and memory detection. In addition, there are studies on abnormalities of P300 in depression. Depressed patients tend to have a reduced P200 and P300 amplitude and a prolonged P300 latency. Due to the consistency of the P300 response to novel stimuli, a brain–computer interface can be constructed which relies on it. By arranging many signals in a grid, randomly flashing the rows of the grid as in the previous paradigm, and observing the P300 responses of a subject staring at the grid, the subject may communicate which stimulus they are looking at, and thus slowly "type" words. Another area of research in the field of ERP lies in the efference copy. This predictive mechanism plays a central role in, for example, human verbalization. Efference copies, however, occur not only with spoken words but also with inner language, i.e. the quiet production of words, which has also been demonstrated using event-related potentials. Other ERPs used frequently in research, especially neurolinguistics research, include the ELAN, the N400, and the P600/SPS. The analysis of ERP data is also increasingly supported by machine learning algorithms.
Number of trials. A common issue in ERP studies is whether the observed data have a sufficient number of trials to support statistical analysis. The background noise in any ERP for any individual can vary. Therefore, simply specifying a fixed number of ERP trials needed for a robust component response is inadequate. ERP researchers can instead use metrics such as the standardized measurement error (SME) to justify the examination of between-condition or between-group differences, or estimates of internal consistency to justify the examination of individual differences. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sigma^2" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "x(t,k)=s(t)+n(t,k)" }, { "math_id": 4, "text": "s(t)" }, { "math_id": 5, "text": "n(t,k)" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "\\bar x(t) = \\frac{1}{N} \\sum_{k=1}^N x(t,k) = s(t) + \\frac{1}{N} \\sum_{k=1}^N n(t,k)" }, { "math_id": 8, "text": "\\bar x(t)" }, { "math_id": 9, "text": "\\operatorname{E}[\\bar x(t)] = s(t)" }, { "math_id": 10, "text": "\\operatorname{Var}[\\bar x(t)] = \\operatorname{E}\\left[\\left(\\bar x(t) - \\operatorname{E}[\\bar x(t)]\\right)^2\\right] = \\frac{1}{N^2} \\operatorname{E}\\left[\\left(\\sum_{k=1}^N n(t,k)\\right)^2\\right] = \\frac{1}{N^2} \\sum_{k=1}^N \\operatorname{E}\\left[n(t,k)^2\\right] = \\frac{\\sigma^2}{N}" }, { "math_id": 11, "text": "\\sigma/{\\sqrt{N}}" }, { "math_id": 12, "text": "1/{\\sqrt{N}}" }, { "math_id": 13, "text": "2 \\sigma/{\\sqrt{N}}" } ]
https://en.wikipedia.org/wiki?curid=614230
6142533
Limited-memory BFGS
Optimization algorithm Limited-memory BFGS (L-BFGS or LM-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize formula_0 over unconstrained values of the real vector formula_1 where formula_2 is a differentiable scalar function. Like the original BFGS, L-BFGS uses an estimate of the inverse Hessian matrix to steer its search through variable space, but where BFGS stores a dense formula_3 approximation to the inverse Hessian ("n" being the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse Hessian "H"k, L-BFGS maintains a history of the past "m" updates of the position x and gradient ∇"f"(x), where generally the history size "m" can be small (often formula_4). These updates are used to implicitly do operations requiring the "H"k-vector product. Algorithm. The algorithm starts with an initial estimate of the optimal value, formula_5, and proceeds iteratively to refine that estimate with a sequence of better estimates formula_6. The derivatives of the function formula_7 are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) of formula_0. L-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix-vector multiplication formula_8 is carried out, where formula_9 is the approximate Newton's direction, formula_10 is the current gradient, and formula_11 is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two loop recursion." We take as given formula_12, the position at the k-th iteration, and formula_13 where formula_2 is the function being minimized, and all vectors are column vectors. We also assume that we have stored the last "m" updates of the form formula_14 formula_15. We define formula_16, and formula_17 will be the 'initial' approximation of the inverse Hessian that our estimate at iteration k begins with. The algorithm is based on the BFGS recursion for the inverse Hessian as formula_18 For a fixed k we define a sequence of vectors formula_19 as formula_20 and formula_21. Then a recursive algorithm for calculating formula_22 from formula_23 is to define formula_24 and formula_25. We also define another sequence of vectors formula_26 as formula_27. There is another recursive algorithm for calculating these vectors, which is to define formula_28 and then recursively define formula_29 and formula_30. The value of formula_31 is then our ascent direction. Thus we can compute the descent direction as follows: formula_32 This formulation gives the search direction for the minimization problem, i.e., formula_33. For maximization problems, one should thus take -z instead. Note that the initial approximate inverse Hessian formula_17 is chosen as a diagonal matrix or even a multiple of the identity matrix since this is numerically efficient.
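The two-loop recursion just described can be transcribed directly into a short NumPy function (an illustrative sketch only; the line search, convergence test, and history bookkeeping that a full L-BFGS solver needs are omitted, and the variable names are assumptions made here):

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion: returns z = -H_k g, the L-BFGS descent direction.

    g       -- current gradient (1-D array)
    s_hist  -- list of the last m position differences  s_i = x_{i+1} - x_i
    y_hist  -- list of the last m gradient differences  y_i = g_{i+1} - g_i
    Both histories are ordered oldest to newest.
    """
    q = g.copy()
    rho = [1.0 / np.dot(y, s) for s, y in zip(s_hist, y_hist)]
    alpha = [0.0] * len(s_hist)

    # First loop: newest to oldest update.
    for i in reversed(range(len(s_hist))):
        alpha[i] = rho[i] * np.dot(s_hist[i], q)
        q -= alpha[i] * y_hist[i]

    # Initial inverse-Hessian approximation H_k^0 = gamma_k * I.
    if s_hist:
        gamma = np.dot(s_hist[-1], y_hist[-1]) / np.dot(y_hist[-1], y_hist[-1])
    else:
        gamma = 1.0
    z = gamma * q

    # Second loop: oldest to newest update.
    for i in range(len(s_hist)):
        beta = rho[i] * np.dot(y_hist[i], z)
        z += (alpha[i] - beta) * s_hist[i]

    return -z   # descent direction for a minimization problem
```

In a complete solver the two history lists would be capped at "m" pairs, with the oldest pair discarded after every iteration, and the returned direction would be combined with a line search such as the Wolfe search discussed next.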
The scaling of the initial matrix formula_34 ensures that the search direction is well scaled and therefore the unit step length is accepted in most iterations. A Wolfe line search is used to ensure that the curvature condition is satisfied and the BFGS updating is stable. Note that some software implementations use an Armijo backtracking line search, but cannot guarantee that the curvature condition formula_35 will be satisfied by the chosen step, since a step length greater than formula_36 may be needed to satisfy this condition. Some implementations address this by skipping the BFGS update when formula_37 is negative or too close to zero, but this approach is not generally recommended since the updates may be skipped too often to allow the Hessian approximation formula_11 to capture important curvature information. Some solvers employ a so-called damped (L-)BFGS update, which modifies the quantities formula_38 and formula_39 in order to satisfy the curvature condition. The two-loop recursion formula is widely used by unconstrained optimizers due to its efficiency in multiplying by the inverse Hessian. However, it does not allow for the explicit formation of either the direct or inverse Hessian and is incompatible with non-box constraints. An alternative approach involves using a low-rank representation for the direct and/or inverse Hessian. This represents the Hessian as a sum of a diagonal matrix and a low-rank update. Such a representation enables the use of L-BFGS in constrained settings, for example, as part of the SQP method. Applications. L-BFGS has been called "the algorithm of choice" for fitting log-linear (MaxEnt) models and conditional random fields with formula_40-regularization. Variants. Since BFGS (and hence L-BFGS) is designed to minimize smooth functions without constraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiable components or constraints. A popular class of modifications consists of the active-set methods, based on the concept of the active set. The idea is that when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified. L-BFGS-B. The L-BFGS-B algorithm extends L-BFGS to handle simple box constraints (aka bound constraints) on variables; that is, constraints of the form "li" ≤ "xi" ≤ "ui" where li and ui are per-variable constant lower and upper bounds, respectively (for each xi, either or both bounds may be omitted). The method works by identifying fixed and free variables at every step (using a simple gradient method), then using the L-BFGS method on the free variables only to obtain higher accuracy, and then repeating the process. OWL-QN. Orthant-wise limited-memory quasi-Newton (OWL-QN) is an L-BFGS variant for fitting formula_41-regularized models, exploiting the inherent sparsity of such models. It minimizes functions of the form formula_42 where formula_43 is a differentiable convex loss function. The method is an active-set type method: at each iterate, it estimates the sign of each component of the variable, and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable formula_44 term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process. O-LBFGS. Schraudolph "et al." present an online approximation to both BFGS and L-BFGS.
Similar to stochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration. It has been shown that O-LBFGS converges globally almost surely, while the online approximation of BFGS (O-BFGS) is not necessarily convergent. Implementation of variants. A number of notable open-source and non-open-source (proprietary) implementations of these variants are available. Works cited. &lt;templatestyles src="Reflist/styles.css" /&gt;
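To illustrate how such implementations are typically invoked, the following sketch uses SciPy's bound-constrained L-BFGS-B routine on a toy problem (the objective function, bounds, starting point, and history size are arbitrary illustrative choices made here, not anything prescribed by the algorithm):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy objective: a shifted quadratic whose unconstrained minimum
# lies outside the box, so the bound constraints become active.
target = np.array([2.0, -3.0])

def f(x):
    return np.sum((x - target) ** 2)

def grad(x):
    return 2.0 * (x - target)

x0 = np.zeros(2)
res = minimize(f, x0, jac=grad, method="L-BFGS-B",
               bounds=[(0.0, 1.0), (0.0, 1.0)],   # box constraints l_i <= x_i <= u_i
               options={"maxcor": 10})            # history size m

print(res.x)   # expected to land on the box boundary, approximately [1., 0.]
```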
[ { "math_id": 0, "text": "f(\\mathbf{x})" }, { "math_id": 1, "text": "\\mathbf{x}" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "n\\times n" }, { "math_id": 4, "text": "m<10" }, { "math_id": 5, "text": "\\mathbf{x}_0" }, { "math_id": 6, "text": "\\mathbf{x}_1,\\mathbf{x}_2,\\ldots" }, { "math_id": 7, "text": "g_k:=\\nabla f(\\mathbf{x}_k)" }, { "math_id": 8, "text": "d_k=-H_k g_k" }, { "math_id": 9, "text": "d_k" }, { "math_id": 10, "text": "g_k" }, { "math_id": 11, "text": "H_k" }, { "math_id": 12, "text": "x_k" }, { "math_id": 13, "text": "g_k\\equiv\\nabla f(x_k)" }, { "math_id": 14, "text": "s_k = x_{k+1} - x_k" }, { "math_id": 15, "text": "y_k = g_{k+1} - g_k" }, { "math_id": 16, "text": "\\rho_k = \\frac{1}{y^{\\top}_k s_k} " }, { "math_id": 17, "text": "H^0_k" }, { "math_id": 18, "text": "H_{k+1} = (I-\\rho_k s_k y_k^\\top)H_k(I-\\rho_k y_k s_k^\\top) + \\rho_k s_k s_k^\\top. " }, { "math_id": 19, "text": "q_{k-m},\\ldots,q_k" }, { "math_id": 20, "text": "q_k:=g_k" }, { "math_id": 21, "text": "q_i:=(I-\\rho_i y_i s_i^\\top)q_{i+1}" }, { "math_id": 22, "text": "q_i" }, { "math_id": 23, "text": "q_{i+1}" }, { "math_id": 24, "text": "\\alpha_i := \\rho_i s_i^\\top q_{i+1}" }, { "math_id": 25, "text": "q_i=q_{i+1}-\\alpha_i y_i" }, { "math_id": 26, "text": "z_{k-m},\\ldots,z_k" }, { "math_id": 27, "text": "z_i:=H_iq_i" }, { "math_id": 28, "text": "z_{k-m}=H_k^0 q_{k-m}" }, { "math_id": 29, "text": "\\beta_i:=\\rho_i y_i^\\top z_i" }, { "math_id": 30, "text": "z_{i+1}=z_i + (\\alpha_i - \\beta_i)s_i" }, { "math_id": 31, "text": "z_k" }, { "math_id": 32, "text": "\\begin{array}{l}\nq = g_k\\\\ \n\\mathtt{For}\\ i=k-1, k-2, \\ldots, k-m\\\\ \n\\qquad \\alpha_i = \\rho_i s^\\top_i q\\\\ \n\\qquad q = q - \\alpha_i y_i\\\\ \n\\gamma_k = \\frac{s_{k - 1}^{\\top} y_{k - 1}}{y_{k - 1}^{\\top} y_{k - 1}} \\\\ \nH^0_k= \\gamma_k I\\\\ \nz = H^0_k q\\\\ \n\\mathtt{For}\\ i=k-m, k-m+1, \\ldots, k-1\\\\ \n\\qquad \\beta_i = \\rho_i y^\\top_i z\\\\ \n\\qquad z = z + s_i (\\alpha_i - \\beta_i)\\\\\nz = -z\n\\end{array}" }, { "math_id": 33, "text": "z = - H_k g_k" }, { "math_id": 34, "text": "\\gamma_k" }, { "math_id": 35, "text": "y_k^{\\top} s_k > 0" }, { "math_id": 36, "text": "1" }, { "math_id": 37, "text": "y_k^{\\top} s_k" }, { "math_id": 38, "text": "s_k" }, { "math_id": 39, "text": "y_k" }, { "math_id": 40, "text": "\\ell_2" }, { "math_id": 41, "text": "\\ell_1" }, { "math_id": 42, "text": "f(\\vec x) = g(\\vec x) + C \\|\\vec x\\|_1" }, { "math_id": 43, "text": "g" }, { "math_id": 44, "text": " \\|\\vec x\\|_1" } ]
https://en.wikipedia.org/wiki?curid=6142533
61427802
Dorothee Haroske
German mathematician Dorothee D. Haroske (born 1968) is a German mathematician who holds the chair for function spaces in the Institute of Mathematics of the University of Jena. Education and career. Haroske completed her doctorate (Dr. rer. nat.) at the University of Jena in 1995, and her habilitation at Jena in 2002. Her doctoral dissertation, "Entropy Numbers and Application Numbers in Weighted Function Space of Type formula_0 and formula_1, Eigenvalue Distributions of Some Degenerate Pseudodifferential Operators", was supervised by Hans Triebel. In 2018, she was given a chair for function spaces at the University of Rostock before returning to her present position in Jena. Books. Haroske is the author of the book "Envelopes and Sharp Embeddings of Function Spaces" (Chapman &amp; Hall, 2007). With Hans Triebel she also wrote "Distributions, Sobolev Spaces, Elliptic Equations" (EMS Textbooks in Mathematics, European Mathematical Society, 2008). She is one of the editors of "Function Spaces, Differential Operators and Nonlinear Analysis: The Hans Triebel Anniversary Volume" (Springer Basel AG, 2003). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B^s_{pq}" }, { "math_id": 1, "text": "F^s_{pq}" } ]
https://en.wikipedia.org/wiki?curid=61427802
61428301
HIP 65426 b
Super-Jupiter exoplanet orbiting HIP 65426 HIP 65426 b, formally named Najsakopajk, is a super-Jupiter exoplanet orbiting the star HIP 65426. It was discovered on 6 July 2017 by the SPHERE consortium using the Spectro-Polarimetric High-Contrast Exoplanet Research (SPHERE) instrument belonging to the European Southern Observatory (ESO). It is 385 light-years from Earth. It is the first planet discovered by ESO's SPHERE instrument. Nomenclature. In August 2022, this planet and its host star were included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Mexico, were announced in June 2023. HIP 65426 b is named Najsakopajk and its host star is named Matza, after Zoque words for "Mother Earth" and "star". Overview. The exoplanet HIP 65426 b orbits its host star HIP 65426, an A2V star with apparent magnitude 7.01 and a mass of . This planetary system is located in the constellation Centaurus. The planet is around 14 million years old. However, it is not associated with a debris disk, despite its young age, which does not fit current models of planetary formation. It is around 92 AU from its parent star, with a possible dusty atmosphere. It was discovered as part of the SHINE program, which aimed to find planetary systems around 600 new stars. In September 2022, HIP 65426 b became the first exoplanet directly observed by the James Webb Space Telescope. Planetary atmosphere. A spectrum taken in 2020 indicated that HIP 65426 b is carbon-poor and oxygen-rich compared to Solar System gas giants. Spectral analysis of data from the James Webb Space Telescope revealed strong evidence of silicate clouds containing enstatite, with no evidence of a dusty atmosphere. James Webb Space Telescope observations. In August 2022, a pre-print of the James Webb Space Telescope (JWST) observations was published. The JWST direct imaging observations of HIP 65426 b between 2 and 16 μm tightly constrained its bolometric luminosity to formula_0, which provides a robust mass constraint of 7.1±1.1 MJ. The atmospheric fitting of both temperature and radius is in disagreement with evolutionary models. The team also constrained the semi-major axis and the inclination of the planet, but the new JWST astrometry did not significantly improve the orbital solution; in particular, the eccentricity remains unconstrained. HIP 65426 b is the first exoplanet to be imaged by JWST and the first exoplanet to be detected beyond 5 μm. The observations demonstrate that the James Webb Space Telescope will exceed its nominal predicted performance by a factor of 10 and that it will be able to image 0.3 MJ planets at 100 au for main-sequence stars, Neptune and Uranus-mass objects at 100–200 au for M-dwarfs, and Saturn-mass objects at 10 au for M-dwarfs. For α Cen A, JWST might be able to push the limit to a 5 R🜨 planet at 0.5 to 2.5 au. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\log(L_{bol}/L_{sun})=-4.23\\plusmn0.02" } ]
https://en.wikipedia.org/wiki?curid=61428301
61437348
Complex inverse Wishart distribution
The complex inverse Wishart distribution is a matrix probability distribution defined on complex-valued positive-definite matrices and is the complex analog of the real inverse Wishart distribution. The complex Wishart distribution was extensively investigated by Goodman, while the derivation of the inverse is given by Shaman and others. It has greatest application in least squares optimization theory applied to complex-valued data samples in digital radio communications systems, often related to Fourier-domain complex filtering. Let formula_1 be the sample covariance of independent complex "p"-vectors formula_2 whose Hermitian covariance has complex Wishart distribution formula_3 with mean value formula_4 degrees of freedom; then the pdf of formula_5 follows the complex inverse Wishart distribution. Density. If formula_6 is a sample from the complex Wishart distribution formula_7 such that, in the simplest case, formula_8 then formula_9 is sampled from the inverse complex Wishart distribution formula_10. The density function of formula_11 is formula_12 where formula_13 is the complex multivariate Gamma function formula_0 Moments. The variances and covariances of the elements of the inverse complex Wishart distribution are shown in Shaman's paper above, while Maiwald and Kraus determine the first through fourth moments. Shaman finds the first moment to be formula_14 and, in the simplest case formula_15, given formula_16, then formula_17 The vectorised covariance is formula_18 where formula_19 is a formula_20 matrix whose only nonzero entries are ones in the diagonal positions formula_21, and formula_22 are real constants such that, for formula_23: formula_24, the marginal diagonal variances; formula_25, the off-diagonal variances; and formula_26, the intra-diagonal covariances. For formula_27, we get the sparse matrix: formula_28 Eigenvalue distributions. The joint distribution of the real eigenvalues of the inverse complex (and real) Wishart is given in Edelman's paper, which refers back to an earlier paper by James. In the non-singular case, the eigenvalues of the inverse Wishart are simply the reciprocals of those of the Wishart. Edelman also characterises the marginal distributions of the smallest and largest eigenvalues of complex and real Wishart matrices. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
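The defining construction above (a sample covariance built from independent complex Gaussian vectors, followed by matrix inversion) can be checked with a small Monte Carlo sketch in NumPy; the dimension, degrees of freedom, replication count, and the identity scale matrix below are illustrative assumptions, and the empirical mean is compared with Shaman's first-moment formula specialized to an identity scale matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
p, nu, n_rep = 3, 8, 20000     # dimension, degrees of freedom, replications (assumed)

mean_inv = np.zeros((p, p), dtype=complex)
for _ in range(n_rep):
    # nu independent CN(0, I_p) vectors: circular complex Gaussians with E[g g^H] = I.
    G = (rng.standard_normal((p, nu)) + 1j * rng.standard_normal((p, nu))) / np.sqrt(2)
    S = G @ G.conj().T                 # complex Wishart sample, S ~ CW(I, nu, p)
    mean_inv += np.linalg.inv(S)       # inverse complex Wishart sample
mean_inv /= n_rep

# With Sigma = I (so Psi = I as well), the first moment is I / (nu - p).
print(np.round(mean_inv.real, 3))
print("theory:", 1.0 / (nu - p))       # = 0.2 for these assumed values
```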
[ { "math_id": 0, "text": "\\mathcal{C}\\Gamma_p(\\nu) = \\pi^{\\tfrac{1}{2}p(p-1)}\\prod_{j=1}^p \\Gamma(\\nu-j+1)" }, { "math_id": 1, "text": " \\mathbf{S}_{p \\times p} = \\sum_{j=1}^\\nu G_j G_j^H " }, { "math_id": 2, "text": " G_j " }, { "math_id": 3, "text": " \\mathbf{S} \\sim \\mathcal{CW}(\\mathbf\\Sigma,\\nu,p)" }, { "math_id": 4, "text": "\\mathbf{\\Sigma} \\text{ and } \\nu " }, { "math_id": 5, "text": "\\mathbf{X} = \\mathbf{S^{-1}} " }, { "math_id": 6, "text": " \\mathbf{S}_{p \\times p} " }, { "math_id": 7, "text": " \\mathcal{CW}({\\mathbf\\Sigma},\\nu,p)" }, { "math_id": 8, "text": " \\nu \\ge p \\text { and } \\left| \\mathbf{S} \\right | > 0 " }, { "math_id": 9, "text": " \\mathbf{X} = \\mathbf{S}^{-1}" }, { "math_id": 10, "text": " \\mathcal{CW}^{-1}({\\mathbf\\Psi},\\nu,p) \\text{ where } \\mathbf\\Psi = \\mathbf{\\Sigma}^{-1}" }, { "math_id": 11, "text": " \\mathbf{X} " }, { "math_id": 12, "text": " f_{\\mathbf{x}}(\\mathbf{x}) = \\frac{\\left|\\mathbf\\Psi\\right|^{\\nu}}{\\mathcal{C}\\Gamma_p(\\nu)} \\left|\\mathbf{x}\\right|^{-(\\nu+p)}e^{-\\operatorname{tr}(\\mathbf\\Psi\\mathbf{x}^{-1})} " }, { "math_id": 13, "text": " \\mathcal{C}\\Gamma_p(\\nu) " }, { "math_id": 14, "text": " \\mathbf E [\\mathcal C \\mathbf { W^{-1} } ] = \\frac{1}{n-p} \\mathbf{ \\Psi ^ {-1} }, \\; n > p " }, { "math_id": 15, "text": "\\mathbf\\Psi = \\mathbf I_{p \\times p}" }, { "math_id": 16, "text": " d = \\frac{1}{n - p} " }, { "math_id": 17, "text": " \\mathbf {\\mathbf E \\left [vec( \\mathcal C W _3^{-1} ) \\right ]} = \\begin{bmatrix}\n d & 0 & 0 & 0 & d & 0 & 0 & 0 & d \\\\\n \\end{bmatrix}\n " }, { "math_id": 18, "text": " \\mathbf {Cov \\left [vec( \\mathcal C W _p^{-1} ) \\right ]} \n= b \\left( \\mathbf I_p \\otimes I_p \\right ) + c \\, \\mathbf {vecI_p} \\left ( \\mathbf {vecI_p} \\right ) ^T\n+ (a-b-c) \\mathbf J " }, { "math_id": 19, "text": " \\mathbf J " }, { "math_id": 20, "text": " p^2 \\times p^2 " }, { "math_id": 21, "text": " 1 + (p + 1)j, \\; j = 0,1,\\dots p-1 " }, { "math_id": 22, "text": " a, b, c " }, { "math_id": 23, "text": " n > p + 1 " }, { "math_id": 24, "text": " a = \\frac{1}{(n - p)^2 (n-p-1)} " }, { "math_id": 25, "text": " b = \\frac{1}{(n - p + 1)(n - p) (n - p - 1)} " }, { "math_id": 26, "text": " c = \\frac{1}{(n - p + 1)(n - p)^2 (n - p - 1)} " }, { "math_id": 27, "text": "\\mathbf \\Psi = \\mathbf I _ 3" }, { "math_id": 28, "text": " \\mathbf {Cov \\left [vec( \\mathcal C W _3^{-1} ) \\right ]} = \\begin{bmatrix}\n a & \\cdot & \\cdot & \\cdot & c & \\cdot & \\cdot & \\cdot & c \\\\\n \\cdot & b & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & b & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & b & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n c & \\cdot & \\cdot & \\cdot & a & \\cdot & \\cdot & \\cdot & c \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & b & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & b & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & b & \\cdot \\\\\n c & \\cdot & \\cdot & \\cdot & c & \\cdot & \\cdot & \\cdot & a \\\\\n \\end{bmatrix}\n " } ]
https://en.wikipedia.org/wiki?curid=61437348
6144390
Benaloh cryptosystem
The Benaloh cryptosystem is an extension of the Goldwasser-Micali cryptosystem (GM) created in 1985 by Josh (Cohen) Benaloh. The main improvement of the Benaloh cryptosystem over GM is that longer blocks of data can be encrypted at once, whereas in GM each bit is encrypted individually. Scheme Definition. Like many public key cryptosystems, this scheme works in the group formula_0 where "n" is a product of two large primes. This scheme is homomorphic and hence malleable. Key Generation. Given block size "r", a public/private key pair is generated as follows: choose large primes "p" and "q" such that formula_1 and formula_2; set formula_3; choose formula_4 such that formula_5; and set formula_10. Note: If "r" is composite, it was pointed out by Fousse et al. in 2011 that the above conditions (i.e., those stated in the original paper) are insufficient to guarantee correct decryption, i.e., to guarantee that formula_6 in all cases (as should be the case). To address this, the authors propose the following check: let formula_7 be the prime factorization of "r". Choose formula_4 such that for each factor formula_8, it is the case that formula_9. The public key is then formula_11, and the private key is formula_12. Message Encryption. To encrypt message formula_13: choose a random formula_14 and compute the ciphertext formula_15. Message Decryption. To decrypt a ciphertext formula_16: compute formula_17 and recover the message as formula_18, i.e. the value of formula_13 such that formula_19. To understand decryption, first notice that for any formula_13 and formula_20 we have: formula_21 To recover "m" from "a", we take the discrete log of "a" base "x". If "r" is small, we can recover "m" by an exhaustive search, i.e. checking whether formula_22 for each value of "i" in formula_23. For larger values of "r", the Baby-step giant-step algorithm can be used to recover "m" in formula_24 time and space. Security. The security of this scheme rests on the higher residuosity problem; specifically, given "z", "r" and "n" where the factorization of "n" is unknown, it is computationally infeasible to determine whether "z" is an "r"th residue mod "n", i.e. whether there exists an "x" such that formula_25. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
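A toy implementation of the scheme (with deliberately tiny, insecure parameters chosen only for illustration and a prime block size "r", so the simple key-generation condition above suffices; a real deployment would use large primes and a cryptographically secure random source) might look like this:

```python
import math
import random

def keygen(p, q, r):
    """Benaloh key generation for prime block size r (toy parameters only)."""
    assert (p - 1) % r == 0 and math.gcd(r, (p - 1) // r) == 1
    assert math.gcd(r, q - 1) == 1
    n, phi = p * q, (p - 1) * (q - 1)
    while True:
        y = random.randrange(2, n)
        if math.gcd(y, n) != 1:
            continue
        x = pow(y, phi // r, n)
        if x != 1:                    # y^(phi/r) != 1 mod n
            return (y, n), (phi, x)   # public key, private key

def encrypt(m, pub, r):
    y, n = pub
    while True:                       # random u in Z_n^*
        u = random.randrange(2, n)
        if math.gcd(u, n) == 1:
            break
    return (pow(y, m, n) * pow(u, r, n)) % n

def decrypt(c, pub, priv, r):
    y, n = pub
    phi, x = priv
    a = pow(c, phi // r, n)           # a = x^m mod n
    for m in range(r):                # exhaustive search, fine for small r
        if pow(x, m, n) == a:
            return m

# Example with r = 5, p = 11 (5 | p-1, gcd(5, 2) = 1), q = 7 (gcd(5, 6) = 1).
pub, priv = keygen(11, 7, 5)
assert all(decrypt(encrypt(m, pub, 5), pub, priv, 5) == m for m in range(5))
```

The homomorphic property mentioned above can also be checked on this toy example: the product of two ciphertexts modulo "n" decrypts to the sum of the two messages modulo "r".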
[ { "math_id": 0, "text": "(\\mathbb{Z}/n\\mathbb{Z})^*" }, { "math_id": 1, "text": "r \\vert (p-1), \\operatorname{gcd}(r, (p-1)/r)=1," }, { "math_id": 2, "text": "\\operatorname{gcd}(r, (q-1))=1" }, { "math_id": 3, "text": "n=pq, \\phi=(p-1)(q-1)" }, { "math_id": 4, "text": "y \\in \\mathbb{Z}^*_n" }, { "math_id": 5, "text": "y^{\\phi/r} \\not \\equiv 1 \\mod n" }, { "math_id": 6, "text": "D(E(m)) = m" }, { "math_id": 7, "text": "r=p_1p_2\\dots p_k" }, { "math_id": 8, "text": "p_i" }, { "math_id": 9, "text": "y^{\\phi/p_i}\\ne 1\\mod n" }, { "math_id": 10, "text": "x=y^{\\phi/r}\\mod n" }, { "math_id": 11, "text": "y,n" }, { "math_id": 12, "text": "\\phi,x" }, { "math_id": 13, "text": "m\\in\\mathbb{Z}_r" }, { "math_id": 14, "text": "u \\in \\mathbb{Z}^*_n" }, { "math_id": 15, "text": "E_r(m) = y^m u^r \\mod n" }, { "math_id": 16, "text": "c\\in\\mathbb{Z}^*_n" }, { "math_id": 17, "text": "a=c^{\\phi/r}\\mod n" }, { "math_id": 18, "text": "m=\\log_x(a)" }, { "math_id": 19, "text": "x^m\\equiv a \\mod n" }, { "math_id": 20, "text": "u\\in\\mathbb{Z}^*_n" }, { "math_id": 21, "text": "a = (c)^{\\phi/r} \\equiv (y^m u^r)^{\\phi/r} \\equiv (y^{m})^{\\phi/r}(u^r)^{\\phi/r} \\equiv (y^{\\phi/r})^m(u)^{\\phi} \\equiv (x)^m (u)^0 \\equiv x^m \\mod n" }, { "math_id": 22, "text": "x^i\\equiv a \\mod n" }, { "math_id": 23, "text": "0\\dots (r-1)" }, { "math_id": 24, "text": "O(\\sqrt{r})" }, { "math_id": 25, "text": "z \\equiv x^r \\mod n" } ]
https://en.wikipedia.org/wiki?curid=6144390