id
stringlengths
2
8
title
stringlengths
1
130
text
stringlengths
0
252k
formulas
listlengths
1
823
url
stringlengths
38
44
64536363
Electrochemical quartz crystal microbalance
Electrochemical quartz crystal microbalance (EQCM) is the combination of electrochemistry and quartz crystal microbalance, which was generated in the eighties. Typically, an EQCM device contains an electrochemical cells part and a QCM part. Two electrodes on both sides of the quartz crystal serve two purposes. Firstly, an alternating electric field is generated between the two electrodes for making up the oscillator. Secondly, the electrode contacting electrolyte is used as a working electrode (WE), together with a counter electrode (CE) and a reference electrode (RE), in the potentiostatic circuit constituting the electrochemistry cell. Thus, the working electrode of electrochemistry cell is the sensor of QCM. As a high mass sensitive in-situ measurement, EQCM is suitable to monitor the dynamic response of reactions at the electrode–solution interface at the applied potential. When the potential of a QCM metal electrode changes, a negative or positive mass change is monitored depending on the ratio of anions adoption on the electrode surface and the dissolution of metal ions into solution. EQCM calibration. The EQCM sensitivity factor "K" can be calculated by combing the electrochemical cell measured charge density and QCM measured frequency shift. The sensitivity factor is only valid when the mass change on the electrode is homogenous. Otherwise, "K" is taken as the average sensitivity factor of the EQCM. formula_0 where formula_1 is the measured frequency shift (Hz), "S" is the quartz crystal active area (cm2), "ρ" is the density of quartz crystal, formula_2 is the quartz crystal shear modulus and formula_3 is the fundamental quartz crystal frequency. "K" is the intrinsic sensitivity factor of the EQCM. In a certain electrolyte solution, a metal film will deposited on the working electrode, which is the QCM sensor surface of QCM. formula_4 The charge density (formula_5) is involved in the electro-reduction of metal ions at a constant current formula_6, in a period of time formula_7 (formula_8). The active areal mass density is calculated by formula_9 where formula_10 is the atomic weight of deposited metal, z is the electrovalency, and F is the Faraday constant. The experimental sensitivity of the EQCM is calculated by combing formula_11 and formula_1. formula_12 EQCM application. Application of EQCM in electrosynthesis. EQCM can be used to monitor the chemical reaction occurring on the electrode, which offers the optimized reaction condition by comparing the influence factors during the synthesis process. Some previous work has already investigated the polymerization process and charge transport properties, polymer film growth on gold electrode surface, and polymerization process of polypyrrole and its derivatives. EQCM was used to study electro-polymerization process and doping/de-doping properties of polyaniline film on gold electrode surface as well. To investigate the electrosynthesis process, sometimes it is necessary to combine other characterization technologies, such as using FTIR and EQCM to study the effect of different conditions on the formation of poly(3,4-ethylenedioxythiophene) film structure, and using EQCM, together with AFM, FTIR, EIS, to investigate the film formation process in the alkyl carbonate/lithium salt electrolyte solution on precious metal electrodes surfaces. Application of EQCM in electrodeposition and dissolution. 
EQCM is broadly used to study the deposition/dissolution process on electrode surface, such as the oscillation of electrode potential during Cu/CuO2 layered nanostructure electrodeposition, deposition growth process of cobalt and nickel hexacyanoferrate in calcium nitrate and barium nitrate electrolyte solution, and the Mg electrode electrochemical behaviour in various polar aprotic electrolyte solutions. EQCM can be used as a powerful tool for corrosion and corrosion protection study, which is usually combined with other characterization technologies. A previous work used EQCM and XPS studied Fe-17Cr-33Mo/ Fe-25Cr alloy electrodes mass changes during the potential sweep and potential step experiments in the passive potential region in an acidic and a basic electrolyte. Another previous work used EQCM and SEM to study the influence of purine (PU) on Cu electrode corrosion and spontaneous dissolution in NaCl electrolyte solution. Application of EQCM in adsorption and desorption. EQCM has been used to study the self-assembled monolayers of long chain alkyl mercaptan and alkanethiol and mercaptoalkanoic on gold electrode surface. Application of EQCM in polymer modified electrode. EQCM can be used to ideally modify polymer membranes together with other electrochemical measurements or surface characterization methods. A team has used CV, UV-Vis, IR and EQCM studied irreversible changes of some polythiophenes in the electrochemical reduction process in acetonitrile. Later on they used AFM and EQCM investigated growth of polypyrrole film in anionic surfactant micellar solution. Then combing with CV, UV-Vis, FTIR, ESR, they used EQCM to study conductivity and magnetic properties of 3,4-dimethoxy and 3,4-ethylenedioxy-terminated polypyrrole and polythiophene. Application of EQCM in energy conversion and storage. EQCM can be used to study the process of adsorption and oxidation of fuel molecules on the electrode surface, and the effect of electrode catalyst or other additives on the electrode, such as assessment of polypyrrole internal Pt load in the polypyrrole/platinum composites fuel cell, methanol fuel cell anodizing process, and electrodeposition of cerium oxide suspended nanoparticles doped with gadolinium oxide under the ultrasound for Co/CeO2 and Ni/CeO2 composite fuel cells. EQCM can also be used to study the energy storage performance and influencing factors of supercapacitors and electrochemical capacitors. For example, EQCM is used to study the ion movement gauge of conductive polymer of capacitor on cathode. Some work studied the EQCM application in solar energy, which is mostly additive and thin film material related, for instance, using EQCM to study the electrochemical deposition process and stability of Co-Pi oxygen evolution catalyst for solar storage. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\Delta f=-\\left(\\frac{2 f_o^2}{S\\sqrt{\\mu\\rho}}\\right)\\Delta m=-K\\Delta m" }, { "math_id": 1, "text": "\\Delta f" }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "f_o" }, { "math_id": 4, "text": "M^{z+}+ze^- \\longrightarrow M" }, { "math_id": 5, "text": "\\frac{\\Delta Q}{C}\\cdot \\mathrm{cm}^{-2}" }, { "math_id": 6, "text": "I" }, { "math_id": 7, "text": "\\Delta T" }, { "math_id": 8, "text": "\\Delta Q= I \\Delta t" }, { "math_id": 9, "text": "\\Delta m=\\frac{A_m}{zF}\\Delta Q" }, { "math_id": 10, "text": "A_m" }, { "math_id": 11, "text": "\\Delta m" }, { "math_id": 12, "text": "K= -\\frac{zF}{A_m} \\frac{\\Delta f}{\\Delta Q}" } ]
https://en.wikipedia.org/wiki?curid=64536363
64540037
Mirror symmetry conjecture
In mathematics, mirror symmetry is a conjectural relationship between certain Calabi–Yau manifolds and a constructed "mirror manifold". The conjecture allows one to relate the number of rational curves on a Calabi-Yau manifold (encoded as Gromov–Witten invariants) to integrals from a family of varieties (encoded as period integrals on a variation of Hodge structures). In short, this means there is a relation between the number of genus formula_0 algebraic curves of degree formula_1 on a Calabi-Yau variety formula_2 and integrals on a dual variety formula_3. These relations were original discovered by Candelas, de la Ossa, Green, and Parkes in a paper studying a generic quintic threefold in formula_4 as the variety formula_2 and a construction from the quintic Dwork family formula_5 giving formula_6. Shortly after, Sheldon Katz wrote a summary paper outlining part of their construction and conjectures what the rigorous mathematical interpretation could be. Constructing the mirror of a quintic threefold. Originally, the construction of mirror manifolds was discovered through an ad-hoc procedure. Essentially, to a generic quintic threefold formula_7 there should be associated a one-parameter family of Calabi-Yau manifolds formula_5 which has multiple singularities. After blowing up these singularities, they are resolved and a new Calabi-Yau manifold formula_8 was constructed. which had a flipped Hodge diamond. In particular, there are isomorphisms formula_9 but most importantly, there is an isomorphism formula_10 where the string theory (the "A-model" of formula_2) for states in formula_11 is interchanged with the string theory (the "B-model" of formula_8) having states in formula_12. The string theory in the A-model only depended upon the Kahler or symplectic structure on formula_2 while the B-model only depends upon the complex structure on formula_8. Here we outline the original construction of mirror manifolds, and consider the string-theoretic background and conjecture with the mirror manifolds in a later section of this article. Complex moduli. Recall that a generic quintic threefold formula_2 in formula_4 is defined by a homogeneous polynomial of degree formula_13. This polynomial is equivalently described as a global section of the line bundle formula_14. Notice the vector space of global sections has dimensionformula_15 but there are two equivalences of these polynomials. First, polynomials under scaling by the algebraic torus formula_16 (non-zero scalers of the base field) given equivalent spaces. Second, projective equivalence is given by the automorphism group of formula_4, formula_17 which is formula_18 dimensional. This gives a formula_19 dimensional parameter spaceformula_20 since formula_21, which can be constructed using Geometric invariant theory. The set formula_22 corresponds to the equivalence classes of polynomials which define smooth Calabi-Yau quintic threefolds in formula_4, giving a moduli space of Calabi-Yau quintics. Now, using Serre duality and the fact each Calabi-Yau manifold has trivial canonical bundle formula_23, the space of deformations has an isomorphismformula_24 with the formula_25 part of the Hodge structure on formula_26. Using the Lefschetz hyperplane theorem the only non-trivial cohomology group is formula_26 since the others are isomorphic to formula_27. Using the Euler characteristic and the Euler class, which is the top Chern class, the dimension of this group is formula_28. 
This is because formula_29 Using the Hodge structure we can find the dimensions of each of the components. First, because formula_2 is Calabi-Yau, formula_30 soformula_31 giving the Hodge numbers formula_32, hence formula_33 giving the dimension of the moduli space of Calabi-Yau manifolds. Because of the Bogomolev-Tian-Todorov theorem, all such deformations are unobstructed, so the smooth space formula_34 is in fact the moduli space of quintic threefolds. The whole point of this construction is to show how the complex parameters in this moduli space are converted into Kähler parameters of the mirror manifold. Mirror manifold. There is a distinguished family of Calabi-Yau manifolds formula_5 called the Dwork family. It is the projective family formula_35 over the complex plane formula_36. Now, notice there is only a single dimension of complex deformations of this family, coming from formula_37 having varying values. This is important because the Hodge diamond of the mirror manifold formula_3 has formula_38Anyway, the family formula_5 has symmetry group formula_39 acting by formula_40 Notice the projectivity of formula_5 is the reason for the condition formula_41 The associated quotient variety formula_42 has a crepant resolution given by blowing up the formula_43 singularities formula_44 giving a new Calabi-Yau manifold formula_3 with formula_19 parameters in formula_45. This is the mirror manifold and has formula_46 where each Hodge number is formula_47. Ideas from string theory. In string theory there is a class of models called non-linear sigma models which study families of maps formula_48 where formula_49 is a genus formula_0 algebraic curve and formula_2 is Calabi-Yau. These curves formula_49 are called world-sheets and represent the birth and death of a particle as a closed string. Since a string could split over time into two strings, or more, and eventually these strings will come together and collapse at the end of the lifetime of the particle, an algebraic curve mathematically represents this string lifetime. For simplicity, only genus 0 curves were considered originally, and many of the results popularized in mathematics focused only on this case. Also, in physics terminology, these theories are formula_50 heterotic string theories because they have formula_51 supersymmetry that comes in a pair, so really there are four supersymmetries. This is important because it implies there is a pair of operators formula_52 acting on the Hilbert space of states, but only defined up to a sign. This ambiguity is what originally suggested to physicists there should exist a pair of Calabi-Yau manifolds which have dual string theories, one's that exchange this ambiguity between one another. The space formula_2 has a complex structure, which is an integrable almost-complex structure formula_53, and because it is a Kähler manifold it necessarily has a symplectic structure formula_54 called the Kähler form which can be complexified to a complexified Kähler form formula_55 which is a closed formula_56-form, hence its cohomology class is in formula_57 The main idea behind the Mirror Symmetry conjectures is to study the deformations, or moduli, of the complex structure formula_58 and the complexified symplectic structure formula_59 in a way that makes these two "dual" to each other. In particular, from a physics perspective, the super conformal field theory of a Calabi-Yau manifold formula_2 should be equivalent to the dual super conformal field theory of the mirror manifold formula_8. 
Here conformal means conformal equivalence which is the same as and equivalence class of complex structures on the curve formula_49. There are two variants of the non-linear sigma models called the A-model and the B-model which consider the pairs formula_60 and formula_61 and their moduli.ch 38 pg 729 A-model. Correlation functions from String theory. Given a Calabi-Yau manifold formula_2 with complexified Kähler class formula_62 the nonlinear sigma model of the string theory should contain the three generations of particles, plus the electromagnetic, weak, and strong forces.27 In order to understand how these forces interact, a three-point function called the Yukawa coupling is introduced which acts as the correlation function for states in formula_63. Note this space is the eigenspace of an operator formula_64 on the Hilbert space of states for the string theory. This three point function is "computed" as formula_65 using Feynman path-integral techniques where the formula_66 are the naive number of rational curves with homology class formula_67, and formula_68. Defining these instanton numbers formula_66 is the subject matter of Gromov–Witten theory. Note that in the definition of this correlation function, it only depends on the Kahler class. This inspired some mathematicians to study hypothetical moduli spaces of Kahler structures on a manifold. Mathematical interpretation of A-model correlation functions. In the A-model the corresponding moduli space are the moduli of pseudoholomorphic curves153 formula_69 or the Kontsevich moduli spaces formula_70 These moduli spaces can be equipped with a virtual fundamental class formula_71 or formula_72 which is represented as the vanishing locus of a section formula_73 of a sheaf called the Obstruction sheaf formula_74 over the moduli space. This section comes from the differential equationformula_75 which can be viewed as a perturbation of the map formula_76. It can also be viewed as the Poincaré dual of the Euler class of formula_74 if it is a Vector bundle. With the original construction, the A-model considered was on a generic quintic threefold in formula_4. B-model. Correlation functions from String theory. For the same Calabi-Yau manifold formula_2 in the A-model subsection, there is a dual superconformal field theory which has states in the eigenspace formula_77 of the operator formula_78. Its three-point correlation function is defined as formula_79 where formula_80 is a holomorphic 3-form on formula_2 and for an infinitesimal deformation formula_81 (since formula_77 is the tangent space of the moduli space of Calabi-Yau manifolds containing formula_2, by the Kodaira–Spencer map and the Bogomolev-Tian-Todorov theorem) there is the Gauss-Manin connection formula_82 taking a formula_83 class to a formula_84 class, hence formula_85 can be integrated on formula_2. Note that this correlation function only depends on the complex structure of formula_2. Another formulation of Gauss-Manin connection. The action of the cohomology classes formula_86 on the formula_80 can also be understood as a cohomological variant of the interior product. Locally, the class formula_81 corresponds to a Cech cocycle formula_87 for some nice enough cover formula_88 giving a section formula_89. Then, the insertion product gives an element formula_90 which can be glued back into an element formula_91 of formula_92. This is because on the overlaps formula_93 formula_94 giving formula_95 hence it defines a 1-cocycle. 
Repeating this process gives a 3-cocycle formula_96 which is equal to formula_97. This is because locally the Gauss-Manin connection acts as the interior product. Mathematical interpretation of B-model correlation functions. Mathematically, the B-model is a variation of hodge structures which was originally given by the construction from the Dwork family. Mirror conjecture. Relating these two models of string theory by resolving the ambiguity of sign for the operators formula_98 led physicists to the following conjecture:22 for a Calabi-Yau manifold formula_2 there should exist a mirror Calabi-Yau manifold formula_8 such that there exists a mirror isomorphism formula_99 giving the compatibility of the associated A-model and B-model. This means given formula_100 and formula_101 such that formula_102 under the mirror map, there is the equality of correlation functionsformula_103 This is significant because it relates the number of degree formula_1 genus formula_104 curves on a quintic threefold formula_2 in formula_4 (so formula_105) to integrals in a variation of Hodge structures. Moreover, these integrals are actually computable!
[ { "math_id": 0, "text": "g" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\check{X}" }, { "math_id": 4, "text": "\\mathbb{P}^4" }, { "math_id": 5, "text": "X_\\psi" }, { "math_id": 6, "text": "\\check{X} = \\tilde{X}_\\psi" }, { "math_id": 7, "text": "X \\subset \\mathbb{CP}^4" }, { "math_id": 8, "text": "X^\\vee" }, { "math_id": 9, "text": "H^q(X,\\Omega_X^p) \\cong H^q(X^\\vee, \\Omega_{X^\\vee}^{3-p})" }, { "math_id": 10, "text": "H^1(X,\\Omega_X^1) \\cong H^1(X^\\vee, \\Omega_{X^\\vee}^{2})" }, { "math_id": 11, "text": "H^1(X,\\Omega_X^1)" }, { "math_id": 12, "text": "H^1(X^\\vee, \\Omega_{X^\\vee}^{2})" }, { "math_id": 13, "text": "5" }, { "math_id": 14, "text": "f \\in \\Gamma(\\mathbb{P}^4,\\mathcal{O}_{\\mathbb{P}^4}(5))" }, { "math_id": 15, "text": "\\dim { \\Gamma (\\mathbb {P} ^{4},{\\mathcal {O}}_{\\mathbb {P} ^{4}}(5))} = 126" }, { "math_id": 16, "text": "\\mathbb{G}_m" }, { "math_id": 17, "text": "\\text{PGL}(5)" }, { "math_id": 18, "text": "24" }, { "math_id": 19, "text": "101" }, { "math_id": 20, "text": "U_\\text{smooth} \\subset \\mathbb{P}(\\Gamma(\\mathbb{P}^4,\\mathcal{O}_{\\mathbb{P}^4}(5)))/PGL(5)" }, { "math_id": 21, "text": "126 - 24 - 1 = 101" }, { "math_id": 22, "text": "U_{\\text{smooth}}" }, { "math_id": 23, "text": "\\omega_X" }, { "math_id": 24, "text": "H^1(X,T_X) \\cong H^2(X,\\Omega_X)" }, { "math_id": 25, "text": "(2,1)" }, { "math_id": 26, "text": "H^3(X)" }, { "math_id": 27, "text": "H^i(\\mathbb{P}^4)" }, { "math_id": 28, "text": "204" }, { "math_id": 29, "text": "\\begin{align}\n\\chi(X) &= -200 \\\\\n&= h^0 + h^2 - h^3 +h^4 + h^6 \\\\\n&= 1 + 1 - \\dim H^3(X) + 1 + 1\n\\end{align}" }, { "math_id": 30, "text": "\\omega_X \\cong \\mathcal{O}_X" }, { "math_id": 31, "text": "H^0(X,\\Omega_X^3) \\cong H^0(X,\\mathcal{O}_X) " }, { "math_id": 32, "text": "h^{0,3} = h^{3,0} = 1" }, { "math_id": 33, "text": "\\dim H^2(X,\\Omega_X) = h^{1,2} = 101" }, { "math_id": 34, "text": "U_\\text{smooth}" }, { "math_id": 35, "text": "X_\\psi = \\text{Proj} \\left(\n\\frac{\\mathbb{C}[\\psi][x_0,\\ldots, x_4]}{(x_0^5 + \\cdots + x_4^5 - 5\\psi x_0x_1x_2x_3x_4)}\n\\right)" }, { "math_id": 36, "text": "\\text{Spec}(\\mathbb{C}[\\psi])" }, { "math_id": 37, "text": "\\psi" }, { "math_id": 38, "text": "\\dim H^{2,1}(\\check{X}) = 1." }, { "math_id": 39, "text": "G = \\left\\{ (a_0,\\ldots, a_4) \\in (\\mathbb{Z}/5)^5 : \\sum a_i = 0 \\right\\}" }, { "math_id": 40, "text": "(a_0,\\ldots,a_4)\\cdot [x_0:\\cdots:x_4] = [e^{ a_0\\cdot 2\\pi i/5}x_0:\\cdots : e^{ a_4 \\cdot 2\\pi i/5}x_4]" }, { "math_id": 41, "text": "\\sum_i a_i = 0." 
}, { "math_id": 42, "text": "X_\\psi / G" }, { "math_id": 43, "text": "100" }, { "math_id": 44, "text": "\\check{X} \\to X_\\psi / G" }, { "math_id": 45, "text": "H^{1,1}(\\check{X})" }, { "math_id": 46, "text": "H^3(\\check{X}) = 4" }, { "math_id": 47, "text": "1" }, { "math_id": 48, "text": "\\phi: \\Sigma \\to X" }, { "math_id": 49, "text": "\\Sigma" }, { "math_id": 50, "text": "(2,2)" }, { "math_id": 51, "text": "N=2" }, { "math_id": 52, "text": "(Q,\\overline{Q})" }, { "math_id": 53, "text": "J \\in \\text{End}(TX)" }, { "math_id": 54, "text": "\\omega" }, { "math_id": 55, "text": "\\omega^\\mathbb{C} = B + i\\omega" }, { "math_id": 56, "text": "(1,1)" }, { "math_id": 57, "text": "[\\omega^\\mathbb{C}] \\in H^1(X,\\Omega_X^1) " }, { "math_id": 58, "text": "J" }, { "math_id": 59, "text": "\\omega^\\mathbb{C}" }, { "math_id": 60, "text": "(X,\\omega^\\mathbb{C})" }, { "math_id": 61, "text": "(X,J)" }, { "math_id": 62, "text": "[\\omega^\\mathbb{C}] \\in H^1(X,\\Omega_X^1) " }, { "math_id": 63, "text": "H^1(X,\\Omega^1_X)" }, { "math_id": 64, "text": "Q" }, { "math_id": 65, "text": "\\begin{align}\n\\langle \\omega_1,\\omega_2,\\omega_3 \\rangle =& \\int_X \\omega_1\\wedge\\omega_2\\wedge\\omega_3\n+ \\sum_{\\beta\\neq 0 }n_\\beta\\int_\\beta\\omega_1\\int_\\beta\\omega_2\\int_\\beta\\omega_2\n\\frac{e^{2\\pi i \\int_\\beta \\omega^{\\mathbb{C}}}}{1 - e^{2\\pi i \\int_\\beta \\omega^{\\mathbb{C}}}}\n\\end{align}" }, { "math_id": 66, "text": "n_\\beta" }, { "math_id": 67, "text": "\\beta \\in H_2(X;\\mathbb{Z})" }, { "math_id": 68, "text": "\\omega_i \\in H^1(X,\\Omega_X)" }, { "math_id": 69, "text": "\\overline{\\mathcal{M}}_{g,k}(X,J,\\beta) = \\{\n(u:\\Sigma \\to X, j, z_1,\\ldots, z_k) : u_*[\\Sigma] = \\beta, \\overline{\\partial}_Ju = 0 \n\\}" }, { "math_id": 70, "text": "\\overline{\\mathcal{M}}_{g,n}(X,\\beta) = \\{u:\\Sigma \\to X : u \\text{ is stable and } u_*([\\Sigma]) = \\beta \\}" }, { "math_id": 71, "text": "[\\overline{\\mathcal{M}}_{g,k}(X,J,\\beta)]^{virt}" }, { "math_id": 72, "text": "[\\overline{\\mathcal{M}}_{g,n}(X,\\beta)]^{virt}" }, { "math_id": 73, "text": "\\pi_{Coker}(v)" }, { "math_id": 74, "text": "\\underline{\\text{Obs}}" }, { "math_id": 75, "text": "\\overline{\\partial}_J(u) = v" }, { "math_id": 76, "text": "u" }, { "math_id": 77, "text": "H^1(X,T_X)" }, { "math_id": 78, "text": "\\overline{Q}" }, { "math_id": 79, "text": "\\langle \\theta_1,\\theta_2,\\theta_3 \\rangle =\n\\int_X\\Omega \\wedge (\\nabla_{\\theta_1}\\nabla_{\\theta_2}\\nabla_{\\theta_3}\\Omega)" }, { "math_id": 80, "text": "\\Omega \\in H^0(X,\\Omega_X^3)" }, { "math_id": 81, "text": "\\theta" }, { "math_id": 82, "text": "\\nabla_\\theta" }, { "math_id": 83, "text": "(p,q)" }, { "math_id": 84, "text": "(p+1,q-1)" }, { "math_id": 85, "text": "\\Omega \\wedge (\\nabla_{\\theta_1}\\nabla_{\\theta_2}\\nabla_{\\theta_3}\\Omega) \\in H^3(X,\\Omega_X^3)" }, { "math_id": 86, "text": "\\theta \\in H^1(X,T_X)" }, { "math_id": 87, "text": "[\\theta_{i}]_{i \\in I}" }, { "math_id": 88, "text": "\\{U_i \\}_{i \\in I}" }, { "math_id": 89, "text": "\\theta_i \\in T_X(U_i)" }, { "math_id": 90, "text": "\\iota_{\\theta_i}(\\Omega|_{U_i}) \\in H^0(U_i,\\Omega_X^2|_{U_i})" }, { "math_id": 91, "text": "\\iota_\\theta(\\Omega)" }, { "math_id": 92, "text": "H^1(X,\\Omega_X^2)" }, { "math_id": 93, "text": "U_i\\cap U_j = U_{ij}," }, { "math_id": 94, "text": "\\theta_{i}|_{ij} = \\theta_{j}|_{ij}" }, { "math_id": 95, "text": "\\begin{align}\n(\\iota_{\\theta_i}\\Omega|_{U_{i}})|_{U_{ij}} &= \\iota_{\n 
\\theta_i|_{U_{ij}}\n} (\\Omega|_{U_{ij}}) \\\\\n&= \\iota_{\n \\theta_j|_{U_{ij}}\n} (\\Omega|_{U_{ij}}) \\\\\n&= (\\iota_{\\theta_j}\\Omega|_{U_j})|_{U_{ij}}\n\\end{align}" }, { "math_id": 96, "text": "\\iota_{\\theta_1}\\iota_{\\theta_2}\\iota_{\\theta_3}\\Omega \\in H^3(X,\\mathcal{O}_X)" }, { "math_id": 97, "text": "\\nabla_{\\theta_1}\\nabla_{\\theta_2}\\nabla_{\\theta_3}\\Omega" }, { "math_id": 98, "text": "(Q,\\overline{Q})" }, { "math_id": 99, "text": "H^1(X,\\Omega_X) \\cong H^1(X^\\vee, T_{X^\\vee})" }, { "math_id": 100, "text": "H \\in H^1(X,\\Omega_X)" }, { "math_id": 101, "text": "\\theta \\in H^1(X^\\vee,T_{X^\\vee})" }, { "math_id": 102, "text": "H \\mapsto \\theta" }, { "math_id": 103, "text": "\\langle H,H,H\\rangle = \\langle \\theta,\\theta,\\theta\\rangle" }, { "math_id": 104, "text": "0" }, { "math_id": 105, "text": "H^{1,1}\\cong \\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=64540037
64549902
Birth process
Type of continuous process in probability theory In probability theory, a birth process or a pure birth process is a special case of a continuous-time Markov process and a generalisation of a Poisson process. It defines a continuous process which takes values in the natural numbers and can only increase by one (a "birth") or remain unchanged. This is a type of birth–death process with no deaths. The rate at which births occur is given by an exponential random variable whose parameter depends only on the current value of the process Definition. Birth rates definition. A birth process with birth rates formula_0 and initial value formula_1 is a minimal right-continuous process formula_2 such that formula_3 and the interarrival times formula_4 are independent exponential random variables with parameter formula_5. Infinitesimal definition. A birth process with rates formula_0 and initial value formula_1 is a process formula_2 such that: These conditions ensure that the process starts at formula_11, is non-decreasing and has independent single births continuously at rate formula_12, when the process has value formula_13. Continuous-time Markov chain definition. A birth process can be defined as a continuous-time Markov process (CTMC) formula_2 with the non-zero Q-matrix entries formula_14 and initial distribution formula_11 (the random variable which takes value formula_11 with probability 1). formula_15 Variations. Some authors require that a birth process start from 0 i.e. that formula_16, while others allow the initial value to be given by a probability distribution on the natural numbers. The state space can include infinity, in the case of an explosive birth process. The birth rates are also called intensities. Properties. As for CTMCs, a birth process has the Markov property. The CTMC definitions for communicating classes, irreducibility and so on apply to birth processes. By the conditions for recurrence and transience of a birth–death process, any birth process is transient. The transition matrices formula_17 of a birth process satisfy the Kolmogorov forward and backward equations. The backwards equations are: formula_18 (for formula_19) The forward equations are: formula_20 (for formula_21) formula_22 (for formula_23) From the forward equations it follows that: formula_24 (for formula_21) formula_25 (for formula_23) Unlike a Poisson process, a birth process may have infinitely many births in a finite amount of time. We define formula_26 and say that a birth process explodes if formula_27 is finite. If formula_28 then the process is explosive with probability 1; otherwise, it is non-explosive with probability 1 ("honest"). Examples. A Poisson process is a birth process where the birth rates are constant i.e. formula_29 for some formula_30. Simple birth process. A simple birth process is a birth process with rates formula_31. It models a population in which each individual gives birth repeatedly and independently at rate formula_32. Udny Yule studied the processes, so they may be known as Yule processes. The number of births in time formula_33 from a simple birth process of population formula_13 is given by: formula_34 In exact form, the number of births is the negative binomial distribution with parameters formula_13 and formula_35. For the special case formula_36, this is the geometric distribution with success rate formula_35. The expectation of the process grows exponentially; specifically, if formula_37 then formula_38. 
A simple birth process with immigration is a modification of this process with rates formula_39. This models a population with births by each population member in addition to a constant rate of immigration into the system. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "(\\lambda_n, n\\in \\mathbb{N})" }, { "math_id": 1, "text": "k\\in \\mathbb{N}" }, { "math_id": 2, "text": "(X_t, t\\ge 0)" }, { "math_id": 3, "text": "X_0=k" }, { "math_id": 4, "text": "T_i = \\inf\\{t\\ge 0: X_t=i+1\\} - \\inf\\{t\\ge 0: X_t=i\\}" }, { "math_id": 5, "text": "\\lambda_i" }, { "math_id": 6, "text": "\\forall s,t\\ge 0: s<t\\implies X_s \\le X_t" }, { "math_id": 7, "text": "\\mathbb{P}(X_{t+h}=X_t+1)=\\lambda_{X_t}h+o(h)" }, { "math_id": 8, "text": "\\mathbb{P}(X_{t+h}=X_t)=o(h)" }, { "math_id": 9, "text": "\\forall s,t\\ge 0: s<t\\implies X_t-X_s" }, { "math_id": 10, "text": "(X_u, u < s)" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "\\lambda_n" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "q_{n,n+1}=\\lambda_n=-q_{n,n}" }, { "math_id": 15, "text": "Q=\\begin{pmatrix}\n-\\lambda_0 & \\lambda_0 & 0 & 0 & \\cdots \\\\\n0 & -\\lambda_1 & \\lambda_1 & 0 & \\cdots \\\\\n0 & 0 & -\\lambda_2 & \\lambda_2 & \\cdots\\\\\n\\vdots & \\vdots & \\vdots & & \\vdots \\ddots\n\\end{pmatrix}" }, { "math_id": 16, "text": "X_0=0" }, { "math_id": 17, "text": "((p_{i,j}(t))_{i,j\\in\\mathbb{N}}), t\\ge 0)" }, { "math_id": 18, "text": "p'_{i,j}(t)=\\lambda_i (p_{i+1,j}(t)-p_{i,j}(t))" }, { "math_id": 19, "text": "i,j\\in\\mathbb{N}" }, { "math_id": 20, "text": "p'_{i,i}(t)=-\\lambda_i p_{i,i}(t)" }, { "math_id": 21, "text": "i\\in\\mathbb{N}" }, { "math_id": 22, "text": "p'_{i,j}(t)=\\lambda_{j-1}p_{i,j-1}(t)-\\lambda_j p_{i,j}(t)" }, { "math_id": 23, "text": "j\\ge i+1" }, { "math_id": 24, "text": "p_{i,i}(t)=e^{-\\lambda_i t}" }, { "math_id": 25, "text": "p_{i,j}(t)=\\lambda_{j-1}e^{-\\lambda_j t}\\int_0^t e^{\\lambda_j s}p_{i,j-1}(s)\\, \\text{d} s" }, { "math_id": 26, "text": "T_\\infty=\\sup \\{T_n:n\\in\\mathbb{N}\\}" }, { "math_id": 27, "text": "T_\\infty" }, { "math_id": 28, "text": "\\sum_{n=0}^\\infty \\frac{1}{\\lambda_n}<\\infty" }, { "math_id": 29, "text": "\\lambda_n=\\lambda" }, { "math_id": 30, "text": "\\lambda>0" }, { "math_id": 31, "text": "\\lambda_n=n\\lambda" }, { "math_id": 32, "text": "\\lambda" }, { "math_id": 33, "text": "t" }, { "math_id": 34, "text": "p_{n,n+m}(t)=\\binom{n}{m}(\\lambda t)^m(1-\\lambda t)^{n-m}+o(h)" }, { "math_id": 35, "text": "e^{-\\lambda t}" }, { "math_id": 36, "text": "n=1" }, { "math_id": 37, "text": "X_0=1" }, { "math_id": 38, "text": "\\mathbb{E}(X_t)=e^{\\lambda t}" }, { "math_id": 39, "text": "\\lambda_n=n\\lambda+\\nu" } ]
https://en.wikipedia.org/wiki?curid=64549902
64550014
Robert Riley (mathematician)
American mathematician Robert F. Riley (December 22, 1935–March 4, 2000) was an American mathematician. He is known for his work in low-dimensional topology using computational tools and hyperbolic geometry, being one of the inspirations for William Thurston's later breakthroughs in 3-dimensional topology. Career. Riley earned a bachelor's degree in mathematics from MIT in 1957; shortly thereafter he dropped out of the graduate program and went on to work in industry, eventually moving to Amsterdam in 1966. In 1968 he took a temporary position at the University of Southampton. He defended his Ph.D. at this institution in 1980, under the nominal direction of David Singerman. For the next two years he occupied a postdoctoral position in Boulder where William Thurston was employed at the time, before moving on to Binghamton University as a professor. Mathematical work. Riley's research was in geometric topology, especially in knot theory, where he mostly studied representations of knot groups. Early on, following work of Ralph Fox, he was interested in morphisms to finite groups. Later on in Southampton, considering formula_0-representations sending peripheral elements to parabolics led him to discover the hyperbolic structure on the complement of the figure-eight knot and some others. This was one of the few examples of hyperbolic 3-manifolds that were available at the time, and as such it was one of the motivations which led to William Thurston's geometrisation conjecture, which includes as a particular case a criterion for a knot complement to support a hyperbolic structure. One notable feature of Riley's work is that it relied much on the assistance of a computer. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{SL}_2(\\mathbb C)" } ]
https://en.wikipedia.org/wiki?curid=64550014
64550992
Ryser's conjecture
In graph theory, Ryser's conjecture is a conjecture relating the maximum matching size and the minimum transversal size in hypergraphs. This conjecture first appeared in 1971 in the Ph.D. thesis of J. R. Henderson, whose advisor was Herbert John Ryser. Preliminaries. A matching in a hypergraph is a set of hyperedges such that each vertex appears in at most one of them. The largest size of a matching in a hypergraph "H" is denoted by formula_0. A transversal (or vertex cover) in a hypergraph is a set of vertices such that each hyperedge contains at least one of them. The smallest size of a transversal in a hypergraph "H" is denoted by formula_1. For every "H", formula_2, since every cover must contain at least one point from each edge in any matching. If H is "r"-uniform (each hyperedge has exactly "r" vertices), then formula_3, since the union of the edges from any maximal matching is a set of at most "rv" vertices that meets every edge. The conjecture. Ryser's conjecture is that, if H is not only "r"-uniform but also "r-partite" (i.e., its vertices can be partitioned into "r" sets so that every edge contains exactly one element of each set), then:formula_4I.e., the multiplicative factor in the above inequality can be decreased by 1. Extremal hypergraphs. An extremal hypergraph to Ryser's conjecture is a hypergraph in which the conjecture holds with equality, i.e., formula_5. The existence of such hypergraphs show that the factor "r"-1 is the smallest possible. An example of an extremal hypergraph is the truncated projective plane - the projective plane of order "r"-1 in which one vertex and all lines containing it is removed. It is known to exist whenever "r"-1 is the power of a prime integer. There are other families of such extremal hypergraphs. Special cases. In the case "r"=2, the hypergraph becomes a bipartite graph, and the conjecture becomes formula_6. This is known to be true by Kőnig's theorem. In the case "r"=3, the conjecture has been proved by Ron Aharoni. The proof uses the Aharoni-Haxell theorem for matching in hypergraphs. In the cases "r"=4 and "r"=5, the following weaker version has been proved by Penny Haxell and Scott: there exists some ε &gt; 0 such thatformula_7.Moreover, in the cases "r"=4 and "r"=5, Ryser's conjecture has been proved by Tuza (1978) in the special case formula_8, i.e.:formula_9. Fractional variants. A fractional matching in a hypergraph is an assignment of a weight to each hyperedge such that the sum of weights near each vertex is at most one. The largest size of a fractional matching in a hypergraph "H" is denoted by formula_10. A fractional transversal in a hypergraph is an assignment of a weight to each vertex such that the sum of weights in each hyperedge is at least one. The smallest size of a fractional transversal in a hypergraph "H" is denoted by formula_11. Linear programming duality implies that formula_12. Furedi has proved the following fractional version of Ryser's conjecture: If "H" is "r"-partite and "r"-regular (each vertex appears in exactly "r" hyperedges), thenformula_13.Lovasz has shown thatformula_14. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu(H)" }, { "math_id": 1, "text": "\\tau(H)" }, { "math_id": 2, "text": "\\nu(H)\\leq \\tau(H)" }, { "math_id": 3, "text": "\\tau(H) \\leq r\\cdot \\nu(H)" }, { "math_id": 4, "text": "\\tau(H) \\leq (r-1)\\cdot \\nu(H)" }, { "math_id": 5, "text": "\\tau(H) = (r-1)\\cdot \\nu(H)" }, { "math_id": 6, "text": "\\tau(H) \\leq \\nu(H)" }, { "math_id": 7, "text": "\\tau(H) \\leq (r-\\varepsilon)\\cdot \\nu(H)" }, { "math_id": 8, "text": "\\nu(H)=1" }, { "math_id": 9, "text": "\\nu(H) = 1 \\implies \\tau(H)\\leq r-1" }, { "math_id": 10, "text": "\\nu^*(H)" }, { "math_id": 11, "text": "\\tau^*(H)" }, { "math_id": 12, "text": "\\nu^*(H) = \\tau^*(H)" }, { "math_id": 13, "text": "\\tau^*(H) \\leq (r-1)\\cdot \\nu(H)" }, { "math_id": 14, "text": "\\tau(H) \\leq \\frac{r}{2}\\cdot \\nu^*(H)" } ]
https://en.wikipedia.org/wiki?curid=64550992
64551468
Truncated projective plane
In geometry, a truncated projective plane (TPP), also known as a dual affine plane, is a special kind of a hypergraph or geometric configuration that is constructed in the following way. These objects have been studied in many different settings, often independent of one another, and so, many terminologies have been developed. Also, different areas tend to ask different types of questions about these objects and are interested in different aspects of the same objects. Example: the Pasch hypergraph. Consider the Fano plane, which is the projective plane of order 2. It has 7 vertices {1,2,3,4,5,6,7} and 7 edges {123, 145, 167, 246, 257, 347, 356}. It can be truncated e.g. by removing the vertex 7 and the edges containing it. The remaining hypergraph is the TPP of order 2. It has 6 vertices {1,2,3,4,5,6} and 4 edges {123, 154, 624, 653}. It is a tripartite hypergraph with sides {1,6},{2,5},{3,4} (which are exactly the neighbors of the removed vertex 7). It is also called the "Pasch hypergraph", due to its connection with Pasch's axiom. It is a 2-regular hypergraph (each vertex is in exactly two edges), and its maximum matching is of size 1 (every two of its edges intersect). Combinatorics of dual affine planes. A finite projective plane of order n has n + 1 points on every line ("n" + 1 = "r" in the hypergraph description). There are "n"2 + "n" + 1 total points and an equal number of lines. Each point is on n + 1 lines. Every two distinct points lie on a unique line and every two distinct lines meet at a unique point. By removing a point and all the lines that pass through that point, the configuration that is left has "n"2 + "n" points, n2 lines, each point is on n lines and each line contains n + 1 points. Each pair of distinct lines still meet at a unique point, but two distinct points are on at most one line. This dual affine plane is thus a configuration of type (("n"2 + "n")"n" ("n"2)"n" + 1). The points can be partitioned into n + 1 sets of n points apiece, where no two points in the same partition set are joined by a line. These sets are the analogs of classes of parallel lines in an affine plane, and some authors refer to the points in a partition piece as "parallel points" in keeping with the dual nature of the structure. Projective planes constructed from finite fields (Desarguesian planes) have automorphism groups that act transitively on the points of the plane, so for these planes the point removed to form the dual affine plane is immaterial, the results of choosing different points are isomorphic. However, there do exist non-Desarguesian planes and the choice of point to remove in them may result in non-isomorphic dual affine planes having the same parameters. An affine plane is obtained by removing a line and all the points on that line from a projective plane. Since a projective plane is a self-dual configuration, the dual configuration of an affine plane is obtained from a projective plane by removing a point and all the lines through that point. Hence the name of this configuration. Hypergraph properties. It is known that the projective plane of order "r"-1 exists whenever "r"-1 is a prime power; hence the same is true for the TPP. The finite projective plane of order "r"-1 contains "r"2-"r"+1 vertices and "r"2-"r"+1 edges; hence the TPP of order "r"-1 contains "r"2-"r" vertices and "r"2-"2r"+1 edges. The TPP of order "r"-1 is an "r"-partite hypergraph: its vertices can be partitioned into "r" parts such that each hyperedge contains exactly one vertex of each part. 
For example, in the TPP of order 2, the 3 parts are {1,6}, {2,5} and {3,4}. In general, each of the "r" parts contains "r"-1 vertices. Each edge in a TPP intersects every other edge. Therefore, its maximum matching size is 1:formula_0.On the other hand, covering all edges of the TPP requires all "r"-1 vertices of one of the parts. Therefore, its minimum vertex-cover size is "r"-1:formula_1.Therefore, the TPP is an extremal hypergraph for Ryser's conjecture. The minimum fractional vertex-cover size of the TPP is "r"-1 too: assigning a weight of 1/"r" to each vertex (which is a vertex-cover since each hyperedge contains "r" vertices) yields a fractional cover of size ("r"2-"r")/"r"="r"-1. Its maximum fractional matching size of the is "r"-1 too: assigning a weight of 1/("r-1") to each hyperedge (which is a matching since each vertex is contained in "r"-1 edges) yields a fractional matching of size ("r"2-"2r"+1)/("r"-1)="r"-1. Therefore:formula_2.Note that the above fractional matching is perfect, since its size equals the number of vertices in each part of the "r"-partite hypergraph. However, there is no perfect matching, and moreover, the maximum matching size is only 1. This is in contrast to the situation in bipartite graphs, in which a perfect fractional matching implies the existence of a perfect matching. Design-theoretic aspects. Dual affine planes can be viewed as a point residue of a projective plane, a 1-design, and, more classically, as a tactical configuration. Since they are not pairwise balanced designs (PBDs), they have not been studied extensively from the design-theoretic viewpoint. However, tactical configurations are central topics in geometry, especially finite geometry. History. According to , the term "tactical configuration" appears to be due to E. H. Moore in 1896. For the history of dual configurations, see Duality (projective geometry)#History. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu(H)=1" }, { "math_id": 1, "text": "\\tau(H)=r-1" }, { "math_id": 2, "text": "\\tau^*(H)=\\nu^*(H)=r-1" } ]
https://en.wikipedia.org/wiki?curid=64551468
64553550
Industrial 2 of 5
Industrial 2 of 5. (also known as Standard 2 of 5) is a variable length, discrete, two width symbology. Industrial 2 of 5 is a subset of two-out-of-five codes. Industrial 2 of 5 is one of the first 1D and oldest barcodes and can encode only digits (0-9). It was invented in 1971 by Identicon Corp. and Computer Identics Corp. At this time, it has only historical value because of low encoding density and restricted charset. Previously it was used for cardboard printing, photo developing envelopes, warehouse sorting systems and for management of physical distribution. Industrial 2 of 5 has low encoding density because an information can be encoded only in black bars and white spaces are just ignored. Industrial 2 of 5 barcode may include an optional check digit. Most of barcode readers support this symbology. Encoding. Industrial 2 of 5 can encode digits from 0 to 9. The digit can be encoded in 5 black bars on digit and white spaces are ignored. Any black bar can have two width: wide or narrow. Any white space can have any width by not more than narrow black bar. Industrial 2 of 5 start/stop patterns and data patterns are split by white space. Industrial 2 of 5 could include optional checksum character which is added to the end of the barcode. Industrial 2 of 5 features: Four bars in encoding scheme, except zero, have own weights which encode value of the symbol. Also, last black bar is used as parity bit to avoid single error. Symbol consists of five bars: two wide bars and three narrow bars. Value of the symbol is a sum of nonzero weights of first four bars. As an example, we can see digit 3 is encoded. Weight 1 and 2 is not zero and parity bits is 0 means the count of bits is divisible on 2. The result: 1*1 + 1*2 + 0*4 + 0*7 = 3. The same with digit 4: weight 4 is not zero and parity bit is 1, which means that count of bits is not divisible on 2. 0*1 + 0*2 + 1*4 + 0*7 = 4. N - narrow black bar. &lt;br&gt;W - wide black bar. &lt;br&gt;S - white space between bars, in most cases must be same size as narrow black bar. The barcode has the following physical structure: &lt;br&gt;1. Quiet zone 10X wide &lt;br&gt;2. Start character &lt;br&gt;3. Variable length digit characters, properly encoded &lt;br&gt;4. Optional check digit &lt;br&gt;5. Stop character &lt;br&gt;6. Quiet zone 10X wide Checksum. Industrial 2 of 5 may include an optional check digit, which is calculated as other UPC checksums. This is not required as part of the specification, but check digit is added as last digit in the code to improve the accuracy of the symbology. &lt;br&gt;formula_0, &lt;br&gt;where formula_1 is the most right data digit. Example for the first 6 digits 423456: Result: 4234562 barcode IATA 2 of 5. IATA 2 of 5 (also known as Computer Identics 2 of 5, Airline 2 of 5) is a variable length, discrete, two width symbology, which is fully similar to Industrial 2 of 5 symbology except start/stop symbols. In this way it has all advantages and issues of Industrial 2 of 5 symbology. N - narrow black bar. &lt;br&gt;W - wide black bar. &lt;br&gt;S - white space between bars, in most cases must be same size as narrow black bar. IATA 2 of 5 was invented in 1974 by Computer Identics Corp. The barcode was used by International Air Transport Association (IATA) for managing air cargo. IATA 2 of 5 version used by International Air Transport Association had fixed 17 digits length with 16 valuable package identification digit and 17-th check digit. Some readers currently still support this symbology References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x_{check} = 10 - ((3x_1 + x_2 + 3x_3 + x_4 +\\cdots+ x_{2n} + 3x_{2n+1})\\pmod{10})" }, { "math_id": 1, "text": "x_1" } ]
https://en.wikipedia.org/wiki?curid=64553550
645554
Synthetic-aperture radar
Form of radar used to create images of landscapes Synthetic-aperture radar (SAR) is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target region to provide finer spatial resolution than conventional stationary beam-scanning radars. SAR is typically mounted on a moving platform, such as an aircraft or spacecraft, and has its origins in an advanced form of side looking airborne radar (SLAR). The distance the SAR device travels over a target during the period when the target scene is illuminated creates the large "synthetic" antenna aperture (the "size" of the antenna). Typically, the larger the aperture, the higher the image resolution will be, regardless of whether the aperture is physical (a large antenna) or synthetic (a moving antenna) – this allows SAR to create high-resolution images with comparatively small physical antennas. For a fixed antenna size and orientation, objects which are further away remain illuminated longer – therefore SAR has the property of creating larger synthetic apertures for more distant objects, which results in a consistent spatial resolution over a range of viewing distances. To create a SAR image, successive pulses of radio waves are transmitted to "illuminate" a target scene, and the echo of each pulse is received and recorded. The pulses are transmitted and the echoes received using a single beam-forming antenna, with wavelengths of a meter down to several millimeters. As the SAR device on board the aircraft or spacecraft moves, the antenna location relative to the target changes with time. Signal processing of the successive recorded radar echoes allows the combining of the recordings from these multiple antenna positions. This process forms the "synthetic antenna aperture" and allows the creation of higher-resolution images than would otherwise be possible with a given physical antenna. Motivation and applications. SAR is capable of high-resolution remote sensing, independent of flight altitude, and independent of weather, as SAR can select frequencies to avoid weather-caused signal attenuation. SAR has day and night imaging capability as illumination is provided by the SAR. SAR images have wide applications in remote sensing and mapping of surfaces of the Earth and other planets. Applications of SAR are numerous. Examples include topography, oceanography, glaciology, geology (for example, terrain discrimination and subsurface imaging). SAR can also be used in forestry to determine forest height, biomass, and deforestation. Volcano and earthquake monitoring use differential interferometry. SAR can also be applied for monitoring civil infrastructure stability such as bridges. SAR is useful in environment monitoring such as oil spills, flooding, urban growth, military surveillance: including strategic policy and tactical assessment. SAR can be implemented as inverse SAR by observing a moving target over a substantial time with a stationary antenna. Basic principle. A "synthetic-aperture radar" is an imaging radar mounted on a moving platform. Electromagnetic waves are transmitted sequentially, the echoes are collected and the system electronics digitizes and stores the data for subsequent processing. As transmission and reception occur at different times, they map to different small positions. The well ordered combination of the received signals builds a virtual aperture that is much longer than the physical antenna width. 
That is the source of the term "synthetic aperture," giving it the property of an imaging radar. The range direction is perpendicular to the flight track and perpendicular to the azimuth direction, which is also known as the "along-track" direction because it is in line with the position of the object within the antenna's field of view. The 3D processing is done in two stages. The azimuth and range direction are focused for the generation of 2D (azimuth-range) high-resolution images, after which a digital elevation model (DEM) is used to measure the phase differences between complex images, which is determined from different look angles to recover the height information. This height information, along with the azimuth-range coordinates provided by 2-D SAR focusing, gives the third dimension, which is the elevation. The first step requires only standard processing algorithms, for the second step, additional pre-processing such as image co-registration and phase calibration is used. In addition, multiple baselines can be used to extend 3D imaging to the "time dimension". 4D and multi-D SAR imaging allows imaging of complex scenarios, such as urban areas, and has improved performance with respect to classical interferometric techniques such as persistent scatterer interferometry (PSI). Algorithm. SAR algorithms model the scene as a set of point targets that do not interact with each other (the Born approximation). While the details of various SAR algorithms differ, SAR processing in each case is the application of a matched filter to the raw data, for each pixel in the output image, where the matched filter coefficients are the response from a single isolated point target. In the early days of SAR processing, the raw data was recorded on film and the postprocessing by matched filter was implemented optically using lenses of conical, cylindrical and spherical shape. The Range-Doppler algorithm is an example of a more recent approach. Existing spectral estimation approaches. Synthetic-aperture radar determines the 3D reflectivity from measured SAR data. It is basically a spectrum estimation, because for a specific cell of an image, the complex-value SAR measurements of the SAR image stack are a sampled version of the Fourier transform of reflectivity in elevation direction, but the Fourier transform is irregular. Thus the spectral estimation techniques are used to improve the resolution and reduce speckle compared to the results of conventional Fourier transform SAR imaging techniques. Non-parametric methods. FFT. FFT (Fast Fourier Transform i.e., periodogram or matched filter) is one such method, which is used in majority of the spectral estimation algorithms, and there are many fast algorithms for computing the multidimensional discrete Fourier transform. Computational "Kronecker-core array algebra" is a popular algorithm used as new variant of FFT algorithms for the processing in multidimensional synthetic-aperture radar (SAR) systems. This algorithm uses a study of theoretical properties of input/output data indexing sets and groups of permutations. A branch of finite multi-dimensional linear algebra is used to identify similarities and differences among various FFT algorithm variants and to create new variants. Each multidimensional DFT computation is expressed in matrix form. The multidimensional DFT matrix, in turn, is disintegrated into a set of factors, called functional primitives, which are individually identified with an underlying software/hardware computational design. 
The FFT implementation is essentially a realization of the mapping of the mathematical framework through generation of the variants and executing matrix operations. The performance of this implementation may vary from machine to machine, and the objective is to identify on which machine it performs best. Capon method. The Capon spectral method, also called the minimum-variance method, is a multidimensional array-processing technique. It is a nonparametric covariance-based method, which uses an adaptive matched-filterbank approach and follows two main steps: The adaptive Capon bandpass filter is designed to minimize the power of the filter output, as well as pass the frequencies (formula_0) without any attenuation, i.e., to satisfy, for each (formula_0), formula_2 subject to formula_3 where "R" is the covariance matrix, formula_4 is the complex conjugate transpose of the impulse response of the FIR filter, formula_5 is the 2D Fourier vector, defined as formula_6, formula_7 denotes Kronecker product. Therefore, it passes a 2D sinusoid at a given frequency without distortion while minimizing the variance of the noise of the resulting image. The purpose is to compute the spectral estimate efficiently. "Spectral estimate" is given as formula_8 where "R" is the covariance matrix, and formula_9 is the 2D complex-conjugate transpose of the Fourier vector. The computation of this equation over all frequencies is time-consuming. It is seen that the forward–backward Capon estimator yields better estimation than the forward-only classical capon approach. The main reason behind this is that while the forward–backward Capon uses both the forward and backward data vectors to obtain the estimate of the covariance matrix, the forward-only Capon uses only the forward data vectors to estimate the covariance matrix. APES method. The APES (amplitude and phase estimation) method is also a matched-filter-bank method, which assumes that the phase history data is a sum of 2D sinusoids in noise. APES spectral estimator has 2-step filtering interpretation: Empirically, the APES method results in wider spectral peaks than the Capon method, but more accurate spectral estimates for amplitude in SAR. In the Capon method, although the spectral peaks are narrower than the APES, the sidelobes are higher than that for the APES. As a result, the estimate for the amplitude is expected to be less accurate for the Capon method than for the APES method. The APES method requires about 1.5 times more computation than the Capon method. SAMV method. SAMV method is a parameter-free sparse signal reconstruction based algorithm. It achieves super-resolution and is robust to highly correlated signals. The name emphasizes its basis on the asymptotically minimum variance (AMV) criterion. It is a powerful tool for the recovery of both the amplitude and frequency characteristics of multiple highly correlated sources in challenging environment (e.g., limited number of snapshots, low signal-to-noise ratio. Applications include synthetic-aperture radar imaging and various source localization. Advantages. SAMV method is capable of achieving resolution higher than some established parametric methods, e.g., MUSIC, especially with highly correlated signals. Disadvantages. Computational complexity of the SAMV method is higher due to its iterative procedure. Parametric subspace decomposition methods. Eigenvector method. 
This subspace decomposition method separates the eigenvectors of the autocovariance matrix into those corresponding to signals and to clutter. The amplitude of the image at a point (formula_13) is given by: formula_14 where formula_15 is the amplitude of the image at a point formula_16, formula_17 is the i-th eigenvector of the clutter subspace of the coherency matrix, formula_18 is its Hermitian (conjugate transpose), formula_19 is the inverse of the corresponding clutter-subspace eigenvalue, and formula_20 are vectors defined as formula_21 where ⊗ denotes the Kronecker product of the two vectors. MUSIC method. MUSIC detects frequencies in a signal by performing an eigendecomposition on the covariance matrix of a data vector formed from the samples of the received signal. When all of the eigenvectors are included in the clutter subspace (model order = 0) the EV method becomes identical to the Capon method. Thus the determination of model order is critical to operation of the EV method. The eigenvalue of the R matrix decides whether its corresponding eigenvector corresponds to the clutter or to the signal subspace. The MUSIC method is considered to be a poor performer in SAR applications. This method uses a constant instead of the clutter subspace. In this method, the denominator is equated to zero when a sinusoidal signal corresponding to a point in the SAR image is in alignment with one of the signal-subspace eigenvectors, which produces a peak in the image estimate. Thus this method does not accurately represent the scattering intensity at each point, but only shows particular points of the image. Backprojection algorithm. The backprojection algorithm has two variants: "time-domain backprojection" and "frequency-domain backprojection". The time-domain variant has more advantages over the frequency-domain one and is therefore generally preferred. Time-domain backprojection forms images or spectra by matching the data acquired from the radar against what it expects to receive. It can be considered as an ideal matched filter for synthetic-aperture radar. There is no need for a separate motion compensation step, owing to its ability to handle non-ideal motion/sampling. It can also be used for various imaging geometries. Application: geosynchronous orbit synthetic-aperture radar (GEO-SAR). In GEO-SAR, to focus especially on the relative moving track, the backprojection algorithm works very well. It uses the concept of azimuth processing in the time domain. For the satellite–ground geometry, GEO-SAR plays a significant role. The procedure of this concept is elaborated as follows. Comparison between the algorithms. Capon and APES can yield more accurate spectral estimates with much lower sidelobes and narrower spectral peaks than the fast Fourier transform (FFT) method, which is also a special case of the FIR filtering approaches. It is seen that although the APES algorithm gives slightly wider spectral peaks than the Capon method, the former yields more accurate overall spectral estimates than the latter and the FFT method. The FFT method is fast and simple but has larger sidelobes. Capon has high resolution but high computational complexity. EV also has high resolution and high computational complexity. APES has higher resolution and is faster than Capon and EV, but still has high computational complexity. The MUSIC method is not generally suitable for SAR imaging, as whitening the clutter eigenvalues destroys the spatial inhomogeneities associated with terrain clutter or other diffuse scattering in SAR imagery.
But it offers higher frequency resolution in the resulting power spectral density (PSD) than the fast Fourier transform (FFT)-based methods. The backprojection algorithm is computationally expensive. It is specifically attractive for sensors that are wideband, wide-angle, and/or have long coherent apertures with substantial off-track motion. Multistatic operation. SAR requires that echo captures be taken at multiple antenna positions. The more captures taken (at different antenna locations) the more reliable the target characterization. Multiple captures can be obtained by moving a single antenna to different locations, by placing multiple stationary antennas at different locations, or combinations thereof. The advantage of a single moving antenna is that it can be easily placed in any number of positions to provide any number of monostatic waveforms. For example, an antenna mounted on an airplane takes many captures per second as the plane travels. The principal advantages of multiple static antennas are that a moving target can be characterized (assuming the capture electronics are fast enough), that no vehicle or motion machinery is necessary, and that antenna positions need not be derived from other, sometimes unreliable, information. (One problem with SAR aboard an airplane is knowing precise antenna positions as the plane travels). For multiple static antennas, all combinations of monostatic and multistatic radar waveform captures are possible. Note, however, that it is not advantageous to capture a waveform for each of both transmission directions for a given pair of antennas, because those waveforms will be identical. When multiple static antennas are used, the total number of unique echo waveforms that can be captured is formula_23 where "N" is the number of unique antenna positions. Scanning modes. Stripmap mode airborne SAR. The antenna stays in a fixed position. It may be orthogonal to the flight path, or it may be squinted slightly forward or backward. When the antenna aperture travels along the flight path, a signal is transmitted at a rate equal to the pulse repetition frequency (PRF). The lower boundary of the PRF is determined by the Doppler bandwidth of the radar. The backscatter of each of these signals is commutatively added on a pixel-by-pixel basis to attain the fine azimuth resolution desired in radar imagery. Spotlight mode SAR. The spotlight synthetic aperture is given by formula_24 where formula_25 is the angle formed between the beginning and end of the imaging, as shown in the diagram of spotlight imaging and formula_26 is the range distance. The spotlight mode gives better resolution albeit for a smaller ground patch. In this mode, the illuminating radar beam is steered continually as the aircraft moves, so that it illuminates the same patch over a longer period of time. This mode is not a traditional continuous-strip imaging mode; however, it has high azimuth resolution. A technical explanation of spotlight SAR from first principles is offered in. Scan mode SAR. While operating as a scan mode SAR, the antenna beam sweeps periodically and thus cover much larger area than the spotlight and stripmap modes. However, the azimuth resolution become much lower than the stripmap mode due to the decreased azimuth bandwidth. Clearly there is a balance achieved between the azimuth resolution and the scan area of SAR. Here, the synthetic aperture is shared between the sub swaths, and it is not in direct contact within one subswath. 
Mosaic operation is required in azimuth and range directions to join the azimuth bursts and the range sub-swaths. Special techniques. Polarimetry. Radar waves have a polarization. Different materials reflect radar waves with different intensities, but anisotropic materials such as grass often reflect different polarizations with different intensities. Some materials will also convert one polarization into another. By emitting a mixture of polarizations and using receiving antennas with a specific polarization, several images can be collected from the same series of pulses. Frequently three such RX-TX polarizations (HH-pol, VV-pol, VH-pol) are used as the three color channels in a synthesized image. This is what has been done in the picture at right. Interpretation of the resulting colors requires significant testing of known materials. New developments in polarimetry include using the changes in the random polarization returns of some surfaces (such as grass or sand) and between two images of the same location at different times to determine where changes not visible to optical systems occurred. Examples include subterranean tunneling or paths of vehicles driving through the area being imaged. Enhanced SAR sea oil slick observation has been developed by appropriate physical modelling and use of fully polarimetric and dual-polarimetric measurements. SAR polarimetry. SAR polarimetry is a technique used for deriving qualitative and quantitative physical information for land, snow and ice, ocean and urban applications based on the measurement and exploration of the polarimetric properties of man-made and natural scatterers. "Terrain" and "land use" classification is one of the most important applications of polarimetric synthetic-aperture radar (PolSAR). SAR polarimetry uses a scattering matrix (S) to identify the scattering behavior of objects after an interaction with electromagnetic wave. The matrix is represented by a combination of horizontal and vertical polarization states of transmitted and received signals. formula_27 where, HH is for horizontal transmit and horizontal receive, VV is for vertical transmit and vertical receive, HV is for horizontal transmit and vertical receive, and VH – for vertical transmit and horizontal receive. The first two of these polarization combinations are referred to as like-polarized (or co-polarized), because the transmit and receive polarizations are the same. The last two combinations are referred to as cross-polarized because the transmit and receive polarizations are orthogonal to one another. Three-component scattering power model. The three-component scattering power model by Freeman and Durden is successfully used for the decomposition of a PolSAR image, applying the reflection symmetry condition using covariance matrix. The method is based on simple physical scattering mechanisms (surface scattering, double-bounce scattering, and volume scattering). The advantage of this scattering model is that it is simple and easy to implement for image processing. There are 2 major approaches for a 3formula_283 polarimetric matrix decomposition. One is the lexicographic covariance matrix approach based on physically measurable parameters, and the other is the Pauli decomposition which is a coherent decomposition matrix. It represents all the polarimetric information in a single SAR image. 
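As a rough illustration of the coherent (Pauli) decomposition mentioned above, the sketch below forms the standard Pauli components from a single scattering matrix under the reciprocity assumption S_HV = S_VH; the numerical values are made up for illustration only.

```python
import numpy as np

# Made-up scattering-matrix entries for one pixel (reciprocity assumed: S_HV = S_VH).
S_HH, S_VV, S_HV = 0.9 + 0.1j, -0.7 + 0.3j, 0.05 - 0.02j

# Pauli components: odd-bounce (surface-like), even-bounce (double-bounce), cross-pol terms.
k_pauli = np.array([S_HH + S_VV, S_HH - S_VV, 2 * S_HV]) / np.sqrt(2)

# Their squared magnitudes are commonly mapped to color channels of a composite image.
intensities = np.abs(k_pauli) ** 2
print(intensities)
```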
The polarimetric information of [S] can be represented by the combination of the intensities formula_29 in a single RGB image, where each of these intensities is coded as a color channel. Four-component scattering power model. For PolSAR image analysis, there can be cases where the reflection symmetry condition does not hold. In those cases a "four-component scattering model" can be used to decompose polarimetric synthetic-aperture radar (SAR) images. This approach deals with the non-reflection-symmetric scattering case. It includes and extends the three-component decomposition method introduced by Freeman and Durden to a fourth component by adding the helix scattering power. This helix power term generally appears in complex urban areas but disappears for a natural distributed scatterer. There is also an improved method using the four-component decomposition algorithm, which was introduced for general PolSAR data image analyses. The SAR data are first filtered, which is known as speckle reduction, then each pixel is decomposed by the four-component model to determine the surface scattering power (formula_30), double-bounce scattering power (formula_31), volume scattering power (formula_32), and helix scattering power (formula_33). The pixels are then divided into 5 classes (surface, double-bounce, volume, helix, and mixed pixels) classified with respect to maximum powers. A mixed category is added for the pixels having two or three equal dominant scattering powers after computation. The process continues as the pixels in all these categories are divided into about 20 small clusters of approximately the same number of pixels and merged as desired; this is called cluster merging. They are iteratively classified and then a color is automatically assigned to each class. In summary, brown colors denote the surface scattering classes, red colors the double-bounce scattering classes, green colors the volume scattering classes, and blue colors the helix scattering classes. Although this method is aimed at the non-reflection-symmetric case, it automatically includes the reflection symmetry condition, and can therefore be used as a general case. It also preserves the scattering characteristics by taking the mixed scattering category into account, therefore proving to be a better algorithm. Interferometry. Rather than discarding the phase data, information can be extracted from it. If two observations of the same terrain from very similar positions are available, aperture synthesis can be performed to provide the resolution performance which would be given by a radar system with dimensions equal to the separation of the two measurements. This technique is called interferometric SAR or InSAR. If the two samples are obtained simultaneously (perhaps by placing two antennas on the same aircraft, some distance apart), then any phase difference will contain information about the angle from which the radar echo returned. Combining this with the distance information, one can determine the position in three dimensions of the image pixel. In other words, one can extract terrain altitude as well as radar reflectivity, producing a digital elevation model (DEM) with a single airplane pass. One aircraft application at the Canada Centre for Remote Sensing produced digital elevation maps with a resolution of 5 m and altitude errors also about 5 m.
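A minimal sketch of the basic InSAR step described above is given below, using synthetic complex images rather than real acquisitions: the interferogram is the per-pixel product of one image with the complex conjugate of the other, and its (wrapped) phase carries the angle and height information.

```python
import numpy as np

# Synthetic stand-ins for two co-registered complex SAR images.
rng = np.random.default_rng(1)
master = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
secondary = master * np.exp(1j * 0.3)   # pretend the second acquisition adds a small phase shift

interferogram = master * np.conj(secondary)   # complex interferogram
phase = np.angle(interferogram)               # wrapped phase in (-pi, pi]; here about -0.3 rad everywhere
```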
Interferometry was used to map many regions of the Earth's surface with unprecedented accuracy using data from the Shuttle Radar Topography Mission. If the two samples are separated in time, perhaps from two flights over the same terrain, then there are two possible sources of phase shift. The first is terrain altitude, as discussed above. The second is terrain motion: if the terrain has shifted between observations, it will return a different phase. The amount of shift required to cause a significant phase difference is on the order of the wavelength used. This means that if the terrain shifts by centimeters, it can be seen in the resulting image (a digital elevation map must be available to separate the two kinds of phase difference; a third pass may be necessary to produce one). This second method offers a powerful tool in geology and geography. Glacier flow can be mapped with two passes. Maps showing the land deformation after a minor earthquake or after a volcanic eruption (showing the shrinkage of the whole volcano by several centimeters) have been published. Differential interferometry. Differential interferometry (D-InSAR) requires taking at least two images, with the addition of a DEM. The DEM can either be produced from GPS measurements or be generated by interferometry, as long as the time between acquisition of the image pairs is short, which guarantees minimal distortion of the image of the target surface. In principle, 3 images of the ground area with similar image acquisition geometry are often adequate for D-InSAR. The principle for detecting ground movement is quite simple. One interferogram is created from the first two images; this is also called the reference interferogram or topographical interferogram. A second interferogram is created that captures topography plus distortion. Subtracting the latter from the reference interferogram can reveal differential fringes, indicating movement. The described three-image D-InSAR generation technique is called the 3-pass or double-difference method. Differential fringes which remain as fringes in the differential interferogram are a result of SAR range changes of any displaced point on the ground from one interferogram to the next. In the differential interferogram, each fringe is directly proportional to the SAR wavelength, which is about 5.6 cm for the ERS and RADARSAT single phase cycle. Surface displacement away from the satellite look direction causes an increase in path (translating to phase) difference. Since the signal travels from the SAR antenna to the target and back again, a given surface displacement changes the measured path by twice that amount. This means in differential interferometry one fringe cycle −π to +π, or one wavelength, corresponds to a displacement relative to the SAR antenna of only half a wavelength (2.8 cm). There are various publications on using D-InSAR to measure subsidence, slope stability, landslides, glacier movement, etc. A further advancement of this technique is that differential interferometry from satellite SAR ascending passes and descending passes can be used to estimate 3-D ground movement. Research in this area has shown that accurate measurements of 3-D ground movement, with accuracies comparable to GPS-based measurements, can be achieved.
It can be used when the use demands a focused phase concern between the magnitude and the phase components of the SAR data, during information retrieval. One of the major advantages of Tomo-SAR is that it can separate out the parameters which get scattered, irrespective of how different their motions are. On using Tomo-SAR with differential interferometry, a new combination named "differential tomography" (Diff-Tomo) is developed. Tomo-SAR has an application based on radar imaging, which is the depiction of Ice Volume and Forest Temporal Coherence (Temporal coherence describes the correlation between waves observed at different moments in time). Ultra-wideband SAR. Conventional radar systems emit bursts of radio energy with a fairly narrow range of frequencies. A narrow-band channel, by definition, does not allow rapid changes in modulation. Since it is the change in a received signal that reveals the time of arrival of the signal (obviously an unchanging signal would reveal nothing about "when" it reflected from the target), a signal with only a slow change in modulation cannot reveal the distance to the target as well as a signal with a quick change in modulation. Ultra-wideband (UWB) refers to any radio transmission that uses a very large bandwidth – which is the same as saying it uses very rapid changes in modulation. Although there is no set bandwidth value that qualifies a signal as "UWB", systems using bandwidths greater than a sizable portion of the center frequency (typically about ten percent, or so) are most often called "UWB" systems. A typical UWB system might use a bandwidth of one-third to one-half of its center frequency. For example, some systems use a bandwidth of about 1 GHz centered around 3 GHz. The two most common methods to increase signal bandwidth used in UWB radar, including SAR, are very short pulses and high-bandwidth chirping. A general description of chirping appears elsewhere in this article. The bandwidth of a chirped system can be as narrow or as wide as the designers desire. Pulse-based UWB systems, being the more common method associated with the term "UWB radar", are described here. A pulse-based radar system transmits very short pulses of electromagnetic energy, typically only a few waves or less. A very short pulse is, of course, a very rapidly changing signal, and thus occupies a very wide bandwidth. This allows far more accurate measurement of distance, and thus resolution. The main disadvantage of pulse-based UWB SAR is that the transmitting and receiving front-end electronics are difficult to design for high-power applications. Specifically, the transmit duty cycle is so exceptionally low and pulse time so exceptionally short, that the electronics must be capable of extremely high instantaneous power to rival the average power of conventional radars. (Although it is true that UWB provides a notable gain in channel capacity over a narrow band signal because of the relationship of bandwidth in the Shannon–Hartley theorem and because the low receive duty cycle receives less noise, increasing the signal-to-noise ratio, there is still a notable disparity in link budget because conventional radar might be several orders of magnitude more powerful than a typical pulse-based radar.) 
So pulse-based UWB SAR is typically used in applications requiring average power levels in the microwatt or milliwatt range, and thus is used for scanning smaller, nearer target areas (several tens of meters), or in cases where lengthy integration (over a span of minutes) of the received signal is possible. However, that this limitation is solved in chirped UWB radar systems. The principal advantages of UWB radar are better resolution (a few millimeters using commercial off-the-shelf electronics) and more spectral information of target reflectivity. Doppler-beam sharpening. Doppler Beam Sharpening commonly refers to the method of processing unfocused real-beam phase history to achieve better resolution than could be achieved by processing the real beam without it. Because the real aperture of the radar antenna is so small (compared to the wavelength in use), the radar energy spreads over a wide area (usually many degrees wide in a direction orthogonal (at right angles) to the direction of the platform (aircraft)). Doppler-beam sharpening takes advantage of the motion of the platform in that targets ahead of the platform return a Doppler upshifted signal (slightly higher in frequency) and targets behind the platform return a Doppler downshifted signal (slightly lower in frequency). The amount of shift varies with the angle forward or backward from the ortho-normal direction. By knowing the speed of the platform, target signal return is placed in a specific angle "bin" that changes over time. Signals are integrated over time and thus the radar "beam" is synthetically reduced to a much smaller aperture – or more accurately (and based on the ability to distinguish smaller Doppler shifts) the system can have hundreds of very "tight" beams concurrently. This technique dramatically improves angular resolution; however, it is far more difficult to take advantage of this technique for range resolution. (See pulse-doppler radar). Chirped (pulse-compressed) radars. A common technique for many radar systems (usually also found in SAR systems) is to "chirp" the signal. In a "chirped" radar, the pulse is allowed to be much longer. A longer pulse allows more energy to be emitted, and hence received, but usually hinders range resolution. But in a chirped radar, this longer pulse also has a frequency shift during the pulse (hence the chirp or frequency shift). When the "chirped" signal is returned, it must be correlated with the sent pulse. Classically, in analog systems, it is passed to a dispersive delay line (often a surface acoustic wave device) that has the property of varying velocity of propagation based on frequency. This technique "compresses" the pulse in time – thus having the effect of a much shorter pulse (improved range resolution) while having the benefit of longer pulse length (much more signal returned). Newer systems use digital pulse correlation to find the pulse return in the signal. Typical operation. Data collection. In a typical SAR application, a single radar antenna is attached to an aircraft or spacecraft such that a substantial component of the antenna's radiated beam has a wave-propagation direction perpendicular to the flight-path direction. The beam is allowed to be broad in the vertical direction so it will illuminate the terrain from nearly beneath the aircraft out toward the horizon. Image resolution and bandwidth. 
Resolution in the range dimension of the image is accomplished by creating pulses which define very short time intervals, either by emitting short pulses consisting of a carrier frequency and the necessary sidebands, all within a certain bandwidth, or by using longer "chirp pulses" in which frequency varies (often linearly) with time within that bandwidth. The differing times at which echoes return allow points at different distances to be distinguished. Image resolution of SAR in its range coordinate (expressed in image pixels per distance unit) is mainly proportional to the radio bandwidth of whatever type of pulse is used. In the cross-range coordinate, the similar resolution is mainly proportional to the bandwidth of the Doppler shift of the signal returns within the beamwidth. Since Doppler frequency depends on the angle of the scattering point's direction from the broadside direction, the Doppler bandwidth available within the beamwidth is the same at all ranges. Hence the theoretical spatial resolution limits in both image dimensions remain constant with variation of range. However, in practice, both the errors that accumulate with data-collection time and the particular techniques used in post-processing further limit cross-range resolution at long ranges. Image resolution and beamwidth. The total signal is that from a beamwidth-sized patch of the ground. To produce a beam that is narrow in the cross-range direction, diffraction effects require that the antenna be wide in that dimension. Therefore, the distinguishing, from each other, of co-range points simply by strengths of returns that persist for as long as they are within the beam width is difficult with aircraft-carryable antennas, because their beams can have linear widths only about two orders of magnitude (hundreds of times) smaller than the range. (Spacecraft-carryable ones can do 10 or more times better.) However, if both the amplitude and the phase of returns are recorded, then the portion of that multi-target return that was scattered radially from any smaller scene element can be extracted by phase-vector correlation of the total return with the form of the return expected from each such element. The process can be thought of as combining the series of spatially distributed observations as if all had been made simultaneously with an antenna as long as the beamwidth and focused on that particular point. The "synthetic aperture" simulated at maximum system range by this process not only is longer than the real antenna, but, in practical applications, it is much longer than the radar aircraft, and tremendously longer than the radar spacecraft. Although some references to SARs have characterized them as "radar telescopes", their actual optical analogy is the microscope, the detail in their images being smaller than the length of the synthetic aperture. In radar-engineering terms, while the target area is in the "far field" of the illuminating antenna, it is in the "near field" of the simulated one. Careful design and operation can accomplish resolution of items smaller than a millionth of the range, for example, 30 cm at 300 km, or about one foot at nearly . Pulse transmission and reception. The conversion of return delay time to geometric range can be very accurate because of the natural constancy of the speed and direction of propagation of electromagnetic waves. 
However, for an aircraft flying through the never-uniform and never-quiescent atmosphere, the relating of pulse transmission and reception times to successive geometric positions of the antenna must be accompanied by constant adjusting of the return phases to account for sensed irregularities in the flight path. SAR's in spacecraft avoid that atmosphere problem, but still must make corrections for known antenna movements due to rotations of the spacecraft, even those that are reactions to movements of onboard machinery. Locating a SAR in a crewed space vehicle may require that the humans carefully remain motionless relative to the vehicle during data collection periods. Returns from scatterers within the range extent of any image are spread over a matching time interval. The inter-pulse period must be long enough to allow farthest-range returns from any pulse to finish arriving before the nearest-range ones from the next pulse begin to appear, so that those do not overlap each other in time. On the other hand, the interpulse rate must be fast enough to provide sufficient samples for the desired across-range (or across-beam) resolution. When the radar is to be carried by a high-speed vehicle and is to image a large area at fine resolution, those conditions may clash, leading to what has been called SAR's ambiguity problem. The same considerations apply to "conventional" radars also, but this problem occurs significantly only when resolution is so fine as to be available only through SAR processes. Since the basis of the problem is the information-carrying capacity of the single signal-input channel provided by one antenna, the only solution is to use additional channels fed by additional antennas. The system then becomes a hybrid of a SAR and a phased array, sometimes being called a Vernier array. Data processing. Combining the series of observations requires significant computational resources, usually using Fourier transform techniques. The high digital computing speed now available allows such processing to be done in near-real time on board a SAR aircraft. (There is necessarily a minimum time delay until all parts of the signal have been received.) The result is a map of radar reflectivity, including both amplitude and phase. Amplitude data. The amplitude information, when shown in a map-like display, gives information about ground cover in much the same way that a black-and-white photo does. Variations in processing may also be done in either vehicle-borne stations or ground stations for various purposes, so as to accentuate certain image features for detailed target-area analysis. Phase data. Although the phase information in an image is generally not made available to a human observer of an image display device, it can be preserved numerically, and sometimes allows certain additional features of targets to be recognized. Coherence speckle. Unfortunately, the phase differences between adjacent image picture elements ("pixels") also produce random interference effects called "coherence speckle", which is a sort of graininess with dimensions on the order of the resolution, causing the concept of resolution to take on a subtly different meaning. This effect is the same as is apparent both visually and photographically in laser-illuminated optical scenes. The scale of that random speckle structure is governed by the size of the synthetic aperture in wavelengths, and cannot be finer than the system's resolution. Speckle structure can be subdued at the expense of resolution. 
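The trade-off mentioned in the last sentence can be illustrated with a simple multilook (boxcar-averaging) sketch on a synthetic intensity image; the exponential intensity model and the 4×4 window are illustrative assumptions, not a prescription.

```python
import numpy as np

# Synthetic single-look intensity image with fully developed speckle (exponential model).
rng = np.random.default_rng(2)
intensity = rng.exponential(scale=1.0, size=(256, 256))

L = 4  # looks averaged along each axis (arbitrary choice)
multilooked = intensity.reshape(256 // L, L, 256 // L, L).mean(axis=(1, 3))

# Speckle contrast (std/mean) drops from ~1 to ~1/L, at the cost of L-times-coarser pixels.
print(intensity.std() / intensity.mean(), multilooked.std() / multilooked.mean())
```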
Optical holography. Before rapid digital computers were available, the data processing was done using an optical holography technique. The analog radar data were recorded as a holographic interference pattern on photographic film at a scale permitting the film to preserve the signal bandwidths (for example, 1:1,000,000 for a radar using a 0.6-meter wavelength). Then light using, for example, 0.6-micrometer waves (as from a helium–neon laser) passing through the hologram could project a terrain image at a scale recordable on another film at reasonable processor focal distances of around a meter. This worked because both SAR and phased arrays are fundamentally similar to optical holography, but using microwaves instead of light waves. The "optical data-processors" developed for this radar purpose were the first effective analog optical computer systems, and were, in fact, devised before the holographic technique was fully adapted to optical imaging. Because of the different sources of range and across-range signal structures in the radar signals, optical data-processors for SAR included not only both spherical and cylindrical lenses, but sometimes conical ones. Image appearance. The following considerations apply also to real-aperture terrain-imaging radars, but are more consequential when resolution in range is matched to a cross-beam resolution that is available only from a SAR. Range, cross-range, and angles. The two dimensions of a radar image are range and cross-range. Radar images of limited patches of terrain can resemble oblique photographs, but not ones taken from the location of the radar. This is because the range coordinate in a radar image is perpendicular to the vertical-angle coordinate of an oblique photo. The apparent entrance-pupil position (or camera center) for viewing such an image is therefore not as if at the radar, but as if at a point from which the viewer's line of sight is perpendicular to the slant-range direction connecting radar and target, with slant-range increasing from top to bottom of the image. Because slant ranges to level terrain vary in vertical angle, each elevation of such terrain appears as a curved surface, specifically a hyperbolic cosine one. Verticals at various ranges are perpendiculars to those curves. The viewer's apparent looking directions are parallel to the curve's "hypcos" axis. Items directly beneath the radar appear as if optically viewed horizontally (i.e., from the side) and those at far ranges as if optically viewed from directly above. These curvatures are not evident unless large extents of near-range terrain, including steep slant ranges, are being viewed. Visibility. When viewed as specified above, fine-resolution radar images of small areas can appear most nearly like familiar optical ones, for two reasons. The first reason is easily understood by imagining a flagpole in the scene. The slant-range to its upper end is less than that to its base. Therefore, the pole can appear correctly top-end up only when viewed in the above orientation. Secondly, the radar illumination then being downward, shadows are seen in their most-familiar "overhead-lighting" direction. The image of the pole's top will overlay that of some terrain point which is on the same slant range arc but at a shorter horizontal range ("ground-range"). Images of scene surfaces which faced both the illumination and the apparent eyepoint will have geometries that resemble those of an optical scene viewed from that eyepoint. 
However, slopes facing the radar will be foreshortened and ones facing away from it will be lengthened from their horizontal (map) dimensions. The former will therefore be brightened and the latter dimmed. Returns from slopes steeper than perpendicular to slant range will be overlaid on those of lower-elevation terrain at a nearer ground-range, both being visible but intermingled. This is especially the case for vertical surfaces like the walls of buildings. Another viewing inconvenience that arises when a surface is steeper than perpendicular to the slant range is that it is then illuminated on one face but "viewed" from the reverse face. Then one "sees", for example, the radar-facing wall of a building as if from the inside, while the building's interior and the rear wall (that nearest to, hence expected to be optically visible to, the viewer) have vanished, since they lack illumination, being in the shadow of the front wall and the roof. Some return from the roof may overlay that from the front wall, and both of those may overlay return from terrain in front of the building. The visible building shadow will include those of all illuminated items. Long shadows may exhibit blurred edges due to the illuminating antenna's movement during the "time exposure" needed to create the image. Mirroring artefacts and shadows. Surfaces that we usually consider rough will, if that roughness consists of relief less than the radar wavelength, behave as smooth mirrors, showing, beyond such a surface, additional images of items in front of it. Those mirror images will appear within the shadow of the mirroring surface, sometimes filling the entire shadow, thus preventing recognition of the shadow. The direction of overlay of any scene point is not directly toward the radar, but toward that point of the SAR's current path direction that is nearest to the target point. If the SAR is "squinting" forward or aft away from the exactly broadside direction, then the illumination direction, and hence the shadow direction, will not be opposite to the overlay direction, but slanted to right or left from it. An image will appear with the correct projection geometry when viewed so that the overlay direction is vertical, the SAR's flight-path is above the image, and range increases somewhat downward. Objects in motion. Objects in motion within a SAR scene alter the Doppler frequencies of the returns. Such objects therefore appear in the image at locations offset in the across-range direction by amounts proportional to the range-direction component of their velocity. Road vehicles may be depicted off the roadway and therefore not recognized as road traffic items. Trains appearing away from their tracks are more easily properly recognized by their length parallel to known trackage as well as by the absence of an equal length of railbed signature and of some adjacent terrain, both having been shadowed by the train. While images of moving vessels can be offset from the line of the earlier parts of their wakes, the more recent parts of the wake, which still partake of some of the vessel's motion, appear as curves connecting the vessel image to the relatively quiescent far-aft wake. In such identifiable cases, speed and direction of the moving items can be determined from the amounts of their offsets. The along-track component of a target's motion causes some defocus. 
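The across-range offset described in this section can be estimated with a back-of-the-envelope calculation: the displacement is roughly the slant range times the ratio of the target's range-direction (radial) speed to the platform speed. The numbers below are made-up illustrative values, not parameters of any particular system.

```python
# Back-of-the-envelope azimuth offset of a moving target (all values are made up).
slant_range_m = 10_000.0       # radar-to-target distance
platform_speed_mps = 100.0     # along-track speed of the radar platform
radial_speed_mps = 5.0         # target velocity component along the range direction

azimuth_offset_m = slant_range_m * radial_speed_mps / platform_speed_mps
print(azimuth_offset_m)        # 500 m: enough to place a vehicle well off the road it is on
```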
Random motions such as that of wind-driven tree foliage, vehicles driven over rough terrain, or humans or other animals walking or running generally render those items not focusable, resulting in blurring or even effective invisibility. These considerations, along with the speckle structure due to coherence, take some getting used to in order to correctly interpret SAR images. To assist in that, large collections of significant target signatures have been accumulated by performing many test flights over known terrains and cultural objects. Relationship to phased arrays. A technique closely related to SAR uses an array (referred to as a "phased array") of real antenna elements spatially distributed over either one or two dimensions perpendicular to the radar-range dimension. These physical arrays are truly synthetic ones, indeed being created by synthesis of a collection of subsidiary physical antennas. Their operation need not involve motion relative to targets. All elements of these arrays receive simultaneously in real time, and the signals passing through them can be individually subjected to controlled shifts of the phases of those signals. One result can be to respond most strongly to radiation received from a specific small scene area, focusing on that area to determine its contribution to the total signal received. The coherently detected set of signals received over the entire array aperture can be replicated in several data-processing channels and processed differently in each. The set of responses thus traced to different small scene areas can be displayed together as an image of the scene. In comparison, a SAR's (commonly) single physical antenna element gathers signals at different positions at different times. When the radar is carried by an aircraft or an orbiting vehicle, those positions are functions of a single variable, distance along the vehicle's path, which is a single mathematical dimension (not necessarily the same as a linear geometric dimension). The signals are stored, thus becoming functions, no longer of time, but of recording locations along that dimension. When the stored signals are read out later and combined with specific phase shifts, the result is the same as if the recorded data had been gathered by an equally long and shaped phased array. What is thus synthesized is a set of signals equivalent to what could have been received simultaneously by such an actual large-aperture (in one dimension) phased array. The SAR simulates (rather than synthesizes) that long one-dimensional phased array. Although the term in the title of this article has thus been incorrectly derived, it is now firmly established by half a century of usage. While operation of a phased array is readily understood as a completely geometric technique, the fact that a synthetic aperture system gathers its data as it (or its target) moves at some speed means that phases which varied with the distance traveled originally varied with time, hence constituted temporal frequencies. Temporal frequencies being the variables commonly used by radar engineers, their analyses of SAR systems are usually (and very productively) couched in such terms. In particular, the variation of phase during flight over the length of the synthetic aperture is seen as a sequence of Doppler shifts of the received frequency from that of the transmitted frequency. 
Once the received data have been recorded and thus have become timeless, the SAR data-processing situation is also understandable as a special type of phased array, treatable as a completely geometric process. The core of both the SAR and the phased array techniques is that the distances that radar waves travel to and back from each scene element consist of some integer number of wavelengths plus some fraction of a "final" wavelength. Those fractions cause differences between the phases of the re-radiation received at various SAR or array positions. Coherent detection is needed to capture the signal phase information in addition to the signal amplitude information. That type of detection requires finding the differences between the phases of the received signals and the simultaneous phase of a well-preserved sample of the transmitted illumination. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\omega_1, \\omega_2" }, { "math_id": 1, "text": "\\omega_1 \\in [0, 2\\pi), \\omega_2 \\in [0, 2\\pi)" }, { "math_id": 2, "text": "\\min_h h^*_{\\omega_1,\\omega_2} Rh_{\\omega_1,\\omega_2}" }, { "math_id": 3, "text": "h^*_{\\omega_1,\\omega_2} a_{\\omega_1,\\omega_2} = 1," }, { "math_id": 4, "text": "h^*_{\\omega_1,\\omega_2}" }, { "math_id": 5, "text": "a_{\\omega_1,\\omega_2}" }, { "math_id": 6, "text": "a_{\\omega_1,\\omega_2} \\triangleq a_{\\omega_1} \\otimes a_{\\omega_2}" }, { "math_id": 7, "text": "\\otimes" }, { "math_id": 8, "text": "S_{\\omega_1,\\omega_2} = \\frac{1}{a_{\\omega_1,\\omega_2}^* R^{-1} a_{\\omega_1,\\omega_2}}," }, { "math_id": 9, "text": "a^*_{\\omega_1,\\omega_2}" }, { "math_id": 10, "text": "\\left(\\omega_1, \\omega_2\\right)" }, { "math_id": 11, "text": "\\omega" }, { "math_id": 12, "text": "\\omega \\in [0, 2\\pi)" }, { "math_id": 13, "text": "\\omega_x, \\omega_y" }, { "math_id": 14, "text": "\n \\hat{\\phi}_{EV}\\left(\\omega_x, \\omega_y\\right) = \\frac{1}\n {W^\\mathsf{H}\\left(\\omega_x, \\omega_y\\right) \\left(\\sum_\\text{clutter} \\frac{1}{\\lambda_i} \\underline{v_i}\\,\\underline{v_i}^\\mathsf{H}\\right) W\\left(\\omega_x, \\omega_y\\right)}" }, { "math_id": 15, "text": "\\hat{\\phi}_{EV}" }, { "math_id": 16, "text": "\\left(\\omega_x, \\omega_y\\right)" }, { "math_id": 17, "text": "\\underline{v_i}" }, { "math_id": 18, "text": "\\underline{v_i}^\\mathsf{H}" }, { "math_id": 19, "text": "\\frac{1}{\\lambda_i}" }, { "math_id": 20, "text": "W\\left(\\omega_x, \\omega_y\\right)" }, { "math_id": 21, "text": "\n W\\left(\\omega_x, \\omega_y\\right) =\n \\left[1 \\exp\\left(-j\\omega_x\\right) \\ldots \\exp\\left(-j(M - 1)\\omega_x\\right)\\right] \\otimes \\left[1 \\exp\\left(-j\\omega_y\\right) \\ldots \\exp\\left(-j(M - 1)\\omega_y\\right)\\right]" }, { "math_id": 22, "text": "s(t, \\tau) = \\exp \\left(-j \\cdot \\frac{4\\pi}{\\lambda} \\cdot R(t)\\right) \\cdot \\operatorname{sinc}\\left(\\tau - \\frac{2}{c} \\cdot R(t)\\right)" }, { "math_id": 23, "text": "\\frac{N^2 + N}{2}" }, { "math_id": 24, "text": "Lsa = r_0 \\Delta\\theta_a" }, { "math_id": 25, "text": "\\Delta\\theta_a" }, { "math_id": 26, "text": "r_0" }, { "math_id": 27, "text": "S = \\begin{bmatrix} S_{HH} & S_{HV} \\\\ S_{VH} & S_{VV} \\end{bmatrix}" }, { "math_id": 28, "text": "\\times" }, { "math_id": 29, "text": "|S_{HH}|^2 , |S_{VV}|^2 , 2|S_{HV}|^2" }, { "math_id": 30, "text": "P_{s}" }, { "math_id": 31, "text": "P_{d}" }, { "math_id": 32, "text": "P_{v}" }, { "math_id": 33, "text": "P_{c}" } ]
https://en.wikipedia.org/wiki?curid=645554
64555429
Perfect matching in high-degree hypergraphs
Area of research in mathematics (graph theory) In graph theory, perfect matching in high-degree hypergraphs is a research avenue trying to find sufficient conditions for the existence of a perfect matching in a hypergraph, based only on the degree of vertices or subsets of them. Introduction. Degrees and matchings in graphs. In a simple graph "G" = ("V", "E"), the degree of a vertex v, often denoted by deg("v") or δ("v"), is the number of edges in E adjacent to v. The minimum degree of a graph, often denoted by deg("G") or δ("G"), is the minimum of deg("v") over all vertices v in V. A matching in a graph is a set of edges such that each vertex is adjacent to at most one edge; a perfect matching is a matching in which each vertex is adjacent to exactly one edge. A perfect matching does not always exist, and thus it is interesting to find sufficient conditions that guarantee its existence. One such condition follows from Dirac's theorem on Hamiltonian cycles. It says that, if deg("G") ≥ "n"⁄2, then the graph admits a Hamiltonian cycle; this implies that it admits a perfect matching (when "n" is even). The bound "n"⁄2 is tight, since the complete bipartite graph with parts of sizes "n"⁄2 – 1 and "n"⁄2 + 1 has minimum degree "n"⁄2 – 1 but does not admit a perfect matching. The results described below aim to extend these results from graphs to hypergraphs. Degrees in hypergraphs. In a hypergraph "H" = ("V", "E"), each edge of E may contain more than two vertices of V. The degree of a vertex v in V is, as before, the number of edges in E that contain v. But in a hypergraph we can also consider the degree of "subsets" of vertices: given a subset U of V, deg("U") is the number of edges in E that contain "all" vertices of U. Thus, the degree of a hypergraph can be defined in different ways depending on the size of subsets whose degree is considered. Formally, for every integer "d" ≥ 1, deg"d"("H") is the minimum of deg("U") over all subsets U of V that contain exactly d vertices. Thus, deg1("H") corresponds to the definition of the degree of a simple graph, namely the smallest degree of a single vertex; deg2("H") is the smallest degree of a pair of vertices; etc. A hypergraph "H" = ("V", "E") is called r-uniform if every hyperedge in E contains exactly r vertices of V. In r-uniform hypergraphs, the relevant values of d are 1, 2, … , "r" – 1. In a simple graph, "r" = 2. Conditions on 1-vertex degree. Several authors proved sufficient conditions for the case "d" = 1, i.e., conditions on the smallest degree of a single vertex. For comparison, Dirac's theorem on Hamiltonian cycles says that, if H is 2-uniform (i.e. a simple graph) and formula_7 then H admits a perfect matching. Conditions on (r-1)-tuple degree. Several authors proved sufficient conditions for the case "d" = "r" – 1, i.e., conditions on the smallest degree of sets of "r" – 1 vertices, in r-uniform hypergraphs with n vertices. In "r"-partite "r"-uniform hypergraphs. The following results relate to r-partite hypergraphs that have exactly n vertices on each side (rn vertices overall): Other conditions. There are some sufficient conditions for other values of d:
[ { "math_id": 0, "text": "\\deg_1(H)\\geq\\bigg(5/9+o(1)\\bigg){n\\choose 2}," }, { "math_id": 1, "text": "\\deg_1(H)\\geq {n-1\\choose 2} - {2n/3\\choose 2}+1," }, { "math_id": 2, "text": "\\deg_1(H)\\geq {n-1\\choose 3} - {3n/4\\choose 3}," }, { "math_id": 3, "text": "\\deg_1(H)> \\frac{r-1}{r}\\left({\\binom{n-1}{r-1}-\\binom{n-kr-1}{r-1}} \\right)," }, { "math_id": 4, "text": "\\deg_1(H)> \\frac{r-1}{r}\\left({\\binom{n-1}{r-1}-1} \\right)," }, { "math_id": 5, "text": "\\deg_1(H)> \\frac{r-1}{r}\\left({n^{r-1}-(n-k)^{r-1}} \\right)," }, { "math_id": 6, "text": "\\deg_1(H)> \\frac{r-1}{r}\\left({n^{r-1}-1} \\right)," }, { "math_id": 7, "text": "\\deg_1(H) \\geq {n-1\\choose 1} - {n/2\\choose 1} + 1 = n/2," }, { "math_id": 8, "text": "\\deg_{r-1}(H)\\geq n/2 + \\sqrt{2 n\\log_2 n}" }, { "math_id": 9, "text": "\\deg_{r-1}(H)\\geq n/r" }, { "math_id": 10, "text": "\\deg_{r-1}(H) \\geq (1/2+\\gamma)n" }, { "math_id": 11, "text": "\\deg_{r-1}(H)\\geq n/2 + 3 r^2 \\sqrt{n\\log_2 n}" }, { "math_id": 12, "text": "\\deg_{r-1}(H)\\geq n/r + 3 r^2 \\sqrt{n\\log_2 n}," }, { "math_id": 13, "text": "\\deg_{r-1}(H)\\geq n/2 - r + c_{k,n}" }, { "math_id": 14, "text": "\\left(\\frac{1}{2}+o(1)\\right)\\binom{n}{r-d}." }, { "math_id": 15, "text": "\\left(\\frac{r-d}{r}+o(1)\\right)\\binom{n-d}{r-d}" }, { "math_id": 16, "text": "\\deg_d(H) \\geq \\frac{r-d}{r}n^{r-d} + r n^{r-d-1}," }, { "math_id": 17, "text": "\\deg_d(H) \\geq \\frac{r-d}{r}\\binom{n}{r-d} + r^{r+1}(\\ln{n})^{1/2}n^{r-d-1/2}," } ]
https://en.wikipedia.org/wiki?curid=64555429
645602
Minimax theorem
Gives conditions that guarantee the max–min inequality is also an equality In the mathematical area of game theory, a minimax theorem is a theorem providing conditions that guarantee that the max–min inequality is also an equality. The first theorem in this sense is von Neumann's minimax theorem about zero-sum games published in 1928, which was considered the starting point of game theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved". Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature. Formally, von Neumann's minimax theorem states: Let formula_0 and formula_1 be compact convex sets. If formula_2 is a continuous function that is concave-convex, i.e. formula_3 is concave for fixed formula_4, and formula_5 is convex for fixed formula_6. Then we have that formula_7 Special case: Bilinear function. The theorem holds in particular if formula_8 is a linear function in both of its arguments (and therefore is bilinear) since a linear function is both concave and convex. Thus, if formula_9 for a finite matrix formula_10, we have: formula_11 The bilinear special case is particularly important for zero-sum games, when the strategy set of each player consists of lotteries over actions (mixed strategies), and payoffs are induced by expected value. In the above formulation, formula_12 is the payoff matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
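As a small numerical illustration of the bilinear special case, the sketch below computes the value and an optimal mixed strategy of the 2×2 zero-sum game of matching pennies by linear programming; it assumes NumPy and SciPy are available and is only one of several equivalent formulations.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix to the maximizing (row) player in matching pennies.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
n, m = A.shape

# Decision variables: the mixed strategy x_1..x_n and the game value v; minimize -v.
c = np.concatenate([np.zeros(n), [-1.0]])
# For every column j: sum_i x_i A_ij >= v  <=>  -A^T x + v <= 0.
A_ub = np.hstack([-A.T, np.ones((m, 1))])
b_ub = np.zeros(m)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]                    # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_opt, value = res.x[:n], res.x[n]
print(x_opt, value)   # approximately [0.5, 0.5] and 0.0
```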
[ { "math_id": 0, "text": "X \\subset \\mathbb{R}^n" }, { "math_id": 1, "text": "Y \\subset \\mathbb{R}^m" }, { "math_id": 2, "text": "f: X \\times Y \\rightarrow \\mathbb{R}" }, { "math_id": 3, "text": "f(\\cdot,y): X \\to \\mathbb{R}" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "f(x,\\cdot): Y \\to \\mathbb{R}" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "\\max_{x\\in X} \\min_{y\\in Y} f(x,y) = \\min_{y\\in Y} \\max_{x\\in X}f(x,y)." }, { "math_id": 8, "text": "f(x,y)" }, { "math_id": 9, "text": "f(x,y) = x^\\mathsf{T} A y" }, { "math_id": 10, "text": "A \\in \\mathbb{R}^{n \\times m}" }, { "math_id": 11, "text": "\\max_{x \\in X} \\min_{y \\in Y} x^\\mathsf{T} A y = \\min_{y \\in Y}\\max_{x \\in X} x^\\mathsf{T} A y. " }, { "math_id": 12, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=645602
64563432
EM algorithm and GMM model
In statistics, the EM (expectation–maximization) algorithm handles latent variables, while GMM is the Gaussian mixture model. Background. The picture below shows the red blood cell hemoglobin concentration and the red blood cell volume data of two groups of people, the Anemia group and the Control group (i.e. the group of people without anemia). As expected, people with anemia have lower red blood cell volume and lower red blood cell hemoglobin concentration than those without anemia. formula_0 is a random vector such as formula_1, and from medical studies it is known that formula_0 is normally distributed in each group, i.e. formula_2. formula_3 denotes the group to which formula_0 belongs, with formula_4 when formula_5 belongs to the Anemia group and formula_6 when formula_5 belongs to the Control group. Also formula_7 where formula_8, formula_9 and formula_10. See Categorical distribution. The following procedure can be used to estimate formula_11. A maximum likelihood estimation can be applied: formula_12 As formula_13 is known for each formula_5, the log-likelihood function can be simplified as below: formula_14 Now the likelihood function can be maximized by taking partial derivatives with respect to formula_15, obtaining: formula_16 formula_17 formula_18 If formula_13 is known, the estimation of the parameters by maximum likelihood is quite simple. But if formula_13 is unknown, it is much more complicated. Since formula_3 is a latent variable (i.e. not observed), as in the unlabeled scenario, the expectation–maximization algorithm is needed to estimate formula_3 as well as the other parameters. Generally, this problem is set up as a GMM, since the data in each group are normally distributed. In machine learning, the latent variable formula_3 is considered a hidden pattern underlying the data, which the observer is not able to see directly. formula_5 is the known data, while formula_19 are the parameters of the model. With the EM algorithm, some underlying pattern formula_3 in the data formula_5 can be found, along with estimates of the parameters. The wide applicability of this setting in machine learning is what makes the EM algorithm so important. EM algorithm in GMM. The EM algorithm consists of two steps: the E-step and the M-step. Firstly, the model parameters and the formula_20 can be randomly initialized. In the E-step, the algorithm tries to guess the value of formula_20 based on the parameters, while in the M-step, the algorithm updates the value of the model parameters based on the guess of formula_20 from the E-step. These two steps are repeated until convergence is reached. The algorithm in GMM is: Repeat until convergence: 1. (E-step) For each formula_21, set formula_22 2. (M-step) Update the parameters formula_23 formula_24 formula_25 With Bayes' rule, the following result is obtained in the E-step: formula_26 According to the GMM setting, the following formulas are obtained: formula_27 formula_28 In this way, the algorithm can alternate between the E-step and the M-step, starting from the randomly initialized parameters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
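The E- and M-step updates above can be written as a compact NumPy sketch for a two-component bivariate GMM. The synthetic data, the number of iterations, and the initialization scheme below are illustrative assumptions, not the blood-cell data or a reference implementation.

```python
import numpy as np

# Synthetic 2D data drawn from two Gaussian clusters (stand-ins for the two groups).
rng = np.random.default_rng(0)
x = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 150),
               rng.multivariate_normal([3, 3], np.eye(2), 150)])   # shape (m, 2)
m, k = x.shape[0], 2

phi = np.full(k, 1 / k)                      # mixing weights
mu = x[rng.choice(m, k, replace=False)]      # random initial means
sigma = np.array([np.cov(x.T)] * k)          # initial covariances

def gaussian(x, mu, sigma):
    """Bivariate Gaussian density evaluated at each row of x."""
    d = x - mu
    inv = np.linalg.inv(sigma)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(sigma)))
    return norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))

for _ in range(100):
    # E-step: posterior responsibilities w_j^(i) via Bayes' rule.
    dens = np.column_stack([phi[j] * gaussian(x, mu[j], sigma[j]) for j in range(k)])
    w = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate phi, mu, sigma from the responsibilities.
    nk = w.sum(axis=0)
    phi = nk / m
    mu = (w.T @ x) / nk[:, None]
    for j in range(k):
        d = x - mu[j]
        sigma[j] = (w[:, j, None] * d).T @ d / nk[j]
```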
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "x:=\\big(\\text{red blood cell volume}, \\text{red blood cell hemoglobin concentration}\\big)" }, { "math_id": 2, "text": "x \\sim \\mathcal N(\\mu, \\Sigma)" }, { "math_id": 3, "text": "z" }, { "math_id": 4, "text": "z_i = 0" }, { "math_id": 5, "text": "x_i" }, { "math_id": 6, "text": "z_i=1" }, { "math_id": 7, "text": "z \\sim \\operatorname{Categorical}(k, \\phi)" }, { "math_id": 8, "text": "k=2" }, { "math_id": 9, "text": "\\phi_j \\geq 0," }, { "math_id": 10, "text": "\\sum_{j=1}^k\\phi_j=1" }, { "math_id": 11, "text": "\\phi, \\mu , \\Sigma" }, { "math_id": 12, "text": "\\ell(\\phi,\\mu,\\Sigma)=\\sum_{i=1}^m \\log (p(x^{(i)};\\phi,\\mu,\\Sigma))\n=\\sum_{i=1}^m \\log \\sum_{z^{(i)}=1}^k p\\left(x^{(i)} \\mid z^{(i)} ; \\mu, \\Sigma\\right) p(z^{(i)} ; \\phi)\n" }, { "math_id": 13, "text": "z_i" }, { "math_id": 14, "text": "\\ell(\\phi, \\mu, \\Sigma)=\\sum_{i=1}^{m} \\log p\\left(x^{(i)} \\mid z^{(i)} ; \\mu, \\Sigma\\right)+\\log p\\left(z^{(i)} ; \\phi\\right)" }, { "math_id": 15, "text": "\\mu, \\Sigma, \\phi" }, { "math_id": 16, "text": "\\phi_{j} =\\frac{1}{m} \\sum_{i=1}^m 1\\{z^{(i)}=j\\}" }, { "math_id": 17, "text": "\\mu_j =\\frac{\\sum_{i=1}^m 1\\{z^{(i)}=j\\} x^{(i)}}{\\sum_{i=1}^{m} 1\\left\\{z^{(i)}=j\\right\\}}" }, { "math_id": 18, "text": "\\Sigma_j =\\frac{\\sum_{i=1}^m 1\\{z^{(i)}=j\\} (x^{(i)}-\\mu_j)(x^{(i)}-\\mu_j)^T}{\\sum_{i=1}^m 1\\{z^{(i)}=j\\}}" }, { "math_id": 19, "text": "\\phi, \\mu, \\Sigma" }, { "math_id": 20, "text": "z^{(i)}" }, { "math_id": 21, "text": "i, j" }, { "math_id": 22, "text": "w_{j}^{(i)}:=p\\left(z^{(i)}=j | x^{(i)} ; \\phi, \\mu, \\Sigma\\right)" }, { "math_id": 23, "text": "\\phi_{j} :=\\frac{1}{m} \\sum_{i=1}^{m} w_{j}^{(i)}" }, { "math_id": 24, "text": "\\mu_{j} :=\\frac{\\sum_{i=1}^{m} w_{j}^{(i)} x^{(i)}}{\\sum_{i=1}^{m} w_{j}^{(i)}}" }, { "math_id": 25, "text": "\\Sigma_{j} :=\\frac{\\sum_{i=1}^{m} w_{j}^{(i)}\\left(x^{(i)}-\\mu_{j}\\right)\\left(x^{(i)}-\\mu_{j}\\right)^{T}}{\\sum_{i=1}^{m} w_{j}^{(i)}}" }, { "math_id": 26, "text": "p\\left(z^{(i)}=j | x^{(i)} ; \\phi, \\mu, \\Sigma\\right)=\\frac{p\\left(x^{(i)} | z^{(i)}=j ; \\mu, \\Sigma\\right) p\\left(z^{(i)}=j ; \\phi\\right)}{\\sum_{l=1}^{k} p\\left(x^{(i)} | z^{(i)}=l ; \\mu, \\Sigma\\right) p\\left(z^{(i)}=l ; \\phi\\right)}" }, { "math_id": 27, "text": "p\\left(x^{(i)} | z^{(i)}=j ; \\mu, \\Sigma\\right)=\\frac{1}{(2 \\pi)^{n / 2}\\left|\\Sigma_{j}\\right|^{1 / 2}} \\exp \\left(-\\frac{1}{2}\\left(x^{(i)}-\\mu_{j}\\right)^{T} \\Sigma_{j}^{-1}\\left(x^{(i)}-\\mu_{j}\\right)\\right)" }, { "math_id": 28, "text": "p\\left(z^{(i)}=j ; \\phi\\right)=\\phi_j" } ]
https://en.wikipedia.org/wiki?curid=64563432
64566469
Linear time property
In model checking, a branch of computer science, linear time properties are used to describe requirements of a model of a computer system. Example properties include "the vending machine does not dispense a drink until money has been entered" (a safety property) or "the computer program eventually terminates" (a liveness property). Fairness properties can be used to rule out unrealistic paths of a model. For instance, in a model of two traffic lights, the liveness property "both traffic lights are green infinitely often" may only be true under the unconditional fairness constraint "each traffic light changes colour infinitely often" (to exclude the case where one traffic light is "infinitely faster" than the other). Formally, a linear time property is an ω-language over the power set of "atomic propositions". That is, the property contains sequences of sets of propositions, each sequence known as a "word". Every property can be rewritten as ""P" and "Q" both occur" for some safety property "P" and liveness property "Q". An invariant for a system is something that is true or false for a particular state. Invariant properties describe an invariant that every reachable state of a model must satisfy, while persistence properties are of the form "eventually forever some invariant holds". Temporal logics such as linear temporal logic describe types of linear time properties using formulae. This article is about propositional linear-time properties and cannot handle predicates about program states, so it cannot define a property like: "the current value of y determines the number of times that x toggles between 0 and 1 before termination." The more general formalism used in Safety and liveness properties can handle this. Definition. Let "AP" be a set of atomic propositions. A word over formula_0 (the power set of "AP") is an infinite sequence of sets of propositions, such as formula_1 (for the atomic propositions formula_2). A linear time (LT) property over "AP" is a subset of formula_3 i.e. a set of words. An example of an LT property over the set formula_4 is "the set of words that contain "a" infinitely often". The word "w" is in this set, because "a" is contained in formula_4, which occurs infinitely often. A word not in this set is formula_5, as "a" only occurs once (in the first set). An LT property is an ω-language over the alphabet formula_6 (and vice versa). We denote by "pref"("w") the finite prefixes of "w" (i.e. formula_7 in the above case). The closure of an LT property "P" is: formula_8 Applications. Using the theory of finite-state machines, a program or computer system can be modelled by a Kripke structure. LT properties then describe restrictions on the traces (outputs) of a Kripke structure. For instance, if two traffic lights at an intersection are represented by a Kripke structure then the atomic propositions may be the possible colours of each light and it may be desirable that the traces satisfy the LT property "the traffic lights cannot both be green at the same time" (to avoid car collisions). If every trace of the Kripke structure "TS" is a trace of "TS'" then every LT property that "TS'" satisfies is satisfied by "TS". This is useful in model checking to allow abstraction: if a simplified model of the system satisfies an LT property then the actual model of the system will satisfy it as well. Classification of linear time properties. Safety properties. A safety property is informally of the form "a bad thing does not happen". 
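Because an ω-word cannot be enumerated explicitly, a common representation is an ultimately periodic ("lasso") word: a finite prefix followed by a cycle repeated forever. The small sketch below (illustrative Python, not from the article) checks the example property "the set of words that contain "a" infinitely often" for the words w and x defined above.

```python
def holds_infinitely_often(prefix, cycle, prop):
    # For an ultimately periodic word prefix . cycle^omega, an atomic proposition
    # occurs infinitely often exactly when it occurs in some letter of the cycle.
    return any(prop in letter for letter in cycle)

w = ([{'a'}, {'a', 'b'}, set()], [{'a', 'b'}])   # {a}{a,b}{} ({a,b})^omega
x = ([{'a'}], [{'b'}, set()])                    # {a} ({b}{})^omega

print(holds_infinitely_often(*w, 'a'))  # True:  w satisfies the property
print(holds_infinitely_often(*x, 'a'))  # False: x does not
```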
For instance, if a system models an automated teller machine (ATM) then such a property is "money should not be dispensed unless a PIN has been entered". Formally, a safety property is an LT property such that any word that violates the property has a "bad prefix", for which no word with that prefix satisfies the property. That is, formula_9 In the ATM example, a "minimal" bad prefix is a finite set of steps carried out in which money is dispensed in the last step and a PIN is not entered at any step. To verify a safety property, it is sufficient to consider only the finite traces of a Kripke structure and check whether any such trace is a bad prefix. An LT property "P" is a safety property if and only if formula_10. Invariant properties. An invariant property is a type of safety property in which the condition only refers to the current state. For instance, the ATM example is not an invariant because we cannot tell whether the property is violated by seeing that the current state is "dispense money", only by seeing that the current state is "dispense money" and no previous state was "read PIN". An example of an invariant is the traffic light condition "the traffic lights cannot both be green at the same time" above. Another is "the variable "x" is never negative", in a model of a computer program. Formally, an invariant is of the form: formula_11 for some propositional logic formula formula_12. A Kripke structure satisfies an invariant if and only if every reachable state satisfies the invariant, which can be checked by a breadth-first search or depth-first search. Safety properties can be verified inductively using invariants. Liveness properties. A liveness property is informally of the form "something good eventually happens". Formally, "P" is a liveness property if formula_13 i.e. any finite string can be continued to a valid trace. An example of a liveness property is the previous LT property "the set of words that contain "a" infinitely often". No finite prefix of a word can prove that the word does not satisfy this property, as the word could continue on to have infinitely many "a"s. In terms of computer programs, useful liveness properties include "the program eventually terminates" and, in concurrent computing, "every process must eventually be served". Persistence properties. A persistence property is a liveness property of the form "eventually forever formula_12". That is, a property of the form: formula_14 Relation between safety and liveness properties. No LT property other than formula_3 (the set of all words over formula_0) is both a safety and a liveness property. Though not every property is a safety property or a liveness property (consider ""a" occurs exactly once"), every property is the intersection of a safety and a liveness property. In topology, the set of all words formula_3 can be equipped with the metric: formula_15 Then a safety property is a closed set and a liveness property is a dense set. Fairness properties. Fairness properties are preconditions imposed on a system to rule out unrealistic traces. Unconditional fairness is of the form "every process gets its turn infinitely often". Strong fairness is of the form "every process gets its turn infinitely often if it is enabled infinitely often". Weak fairness is of the form "every process gets its turn infinitely often if it is continuously enabled from a particular point". 
In some systems, a fairness constraint is defined by a set of states, and a "fair path" is one that passes through some state in the fairness constraint infinitely often. If there are multiple fairness constraints, then a fair path must pass infinitely often through one state per constraint. A program "fairly satisfies" an LT property "P" with respect to a set of fairness conditions if for every path, either the path fails a fairness condition or it satisfies "P". That is, the property "P" is satisfied for all fair paths. A fairness property is realizable for a Kripke structure if every reachable state has a fair path starting from that state. So long as a set of fairness conditions are realizable, they are irrelevant to safety properties. Temporal logic. Temporal logics such as computation tree logic (CTL) can be used to specify some LT properties. All linear temporal logic (LTL) formulae are LT properties. By a counting argument, we see that any logic in which each formula is a finite string cannot represent all LT properties, as there must be countably many formulae but there are uncountably many LT properties. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
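As an illustrative sketch (plain Python; the data structures and names are assumptions, not from the article), the breadth-first invariant check mentioned earlier and the fairness test for a lasso-shaped path (a stem followed by a repeated cycle) can be written as follows.

```python
from collections import deque

def check_invariant(transitions, labels, initial, phi):
    """Return (True, None) if every reachable state satisfies phi, else (False, witness)."""
    seen, queue = set(initial), deque(initial)
    while queue:
        s = queue.popleft()
        if not phi(labels[s]):
            return False, s                      # the path to s is a bad prefix
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

def cycle_is_fair(cycle_states, fairness_constraints):
    """A lasso-shaped path is fair iff its cycle visits every fairness constraint."""
    return all(set(cycle_states) & c for c in fairness_constraints)

# Toy model of two alternating traffic lights
transitions = {'g1r2': ['r1g2'], 'r1g2': ['g1r2']}
labels = {'g1r2': {'green1'}, 'r1g2': {'green2'}}
print(check_invariant(transitions, labels, ['g1r2'],
                      lambda L: not ({'green1', 'green2'} <= L)))     # (True, None)
print(cycle_is_fair(['g1r2', 'r1g2'], [{'g1r2'}, {'r1g2'}]))          # True: both lights recur
```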
[ { "math_id": 0, "text": "2^{AP}" }, { "math_id": 1, "text": "w:=\\{a\\}\\{a,b\\}\\empty \\{a,b\\}^\\omega" }, { "math_id": 2, "text": "AP=\\{a,b\\}" }, { "math_id": 3, "text": "(2^{AP})^\\omega" }, { "math_id": 4, "text": "\\{a,b\\}" }, { "math_id": 5, "text": "x:=\\{a\\}(\\{b\\}\\empty)^\\omega" }, { "math_id": 6, "text": "\\Sigma=2^{AP}" }, { "math_id": 7, "text": "\\{\\{a\\}, \\{a\\}\\{a,b\\}, \\{a\\}\\{a,b\\}\\empty, ... \\}" }, { "math_id": 8, "text": "\\mathit{closure}(P):=\\{\\sigma \\in (2^{AP})^\\omega \\mid \\mathit{pref}(\\sigma)\\subseteq \\mathit{pref}(P)\\}" }, { "math_id": 9, "text": "\\forall \\sigma\\in (2^{AP})^\\omega\\setminus P\\ \\exists \\text{ a finite prefix}\\ \\hat \\sigma: P\\cap \\{\\sigma' \\mid \\hat \\sigma\\ \\text{is a prefix of}\\ \\sigma' \\} =\\empty" }, { "math_id": 10, "text": "\\mathit{closure}(P)=P" }, { "math_id": 11, "text": "P=\\{A_0A_1....\\in (2^{AP})^\\omega \\mid \\forall j\\in \\mathbb{N}: A_j \\ \\text{satisfies}\\ \\Phi\\}" }, { "math_id": 12, "text": "\\Phi" }, { "math_id": 13, "text": "\\mathit{pref}(P)=(2^{AP})^*" }, { "math_id": 14, "text": "P=\\{A_0A_1...\\in (2^{AP})^\\omega \\mid \\exists N\\in \\mathbb{N}:\\forall n \\ge N: A_n\\ \\text{satisfies}\\ \\Phi\\}" }, { "math_id": 15, "text": "d(w,x):=\\inf \\{2^{-|s|} \\mid s\\in \\mathit{pref}(w) \\cap \\mathit{pref}(x)\\}" } ]
https://en.wikipedia.org/wiki?curid=64566469
645676
Flow network
Directed graph where edges have a capacity In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only incoming flow. A network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes. Definition. A network is a directed graph "G" ("V", "E") with a non-negative capacity function "c" for each edge, and without multiple arcs (i.e. edges with the same source and target nodes). Without loss of generality, we may assume that if ("u", "v") ∈ "E", then ("v", "u") is also a member of E. Additionally, if ("v", "u") ∉ "E" then we may add ("v", "u") to "E" and then set the "c"("v", "u") 0. If two nodes in G are distinguished – one as the source s and the other as the sink t – then ("G", "c", "s", "t") is called a flow network. Flows. Flow functions model the net flow of units between pairs of nodes, and are useful when asking questions such as "what is the maximum number of units that can be transferred from the source node s to the sink node t?" The amount of flow between two nodes is used to represent the net amount of units being transferred from one node to the other. The excess function "x""f" : "V" → formula_0 represents the net flow entering a given node u (i.e. the sum of the flows entering u) and is defined byformula_1A node u is said to be active if "x""f" ("u") &gt; 0 (i.e. the node u consumes flow), deficient if "x""f" ("u") &lt; 0 (i.e. the node u produces flow), or conserving if "x""f" ("u") 0. In flow networks, the source s is deficient, and the sink t is active. Pseudo-flows, feasible flows, and pre-flows are all examples of flow functions. A pseudo-flow is a function "f" of each edge in the network that satisfies the following two constraints for all nodes u and v: *"Skew symmetry constraint": The flow on an arc from u to v is equivalent to the negation of the flow on the arc from v to u, that is: "f" ("u", "v") −"f" ("v", "u"). The sign of the flow indicates the flow's direction. *"Capacity constraint": An arc's flow cannot exceed its capacity, that is: "f" ("u", "v") ≤ "c"("u", "v"). A pre-flow is a pseudo-flow that, for all "v" ∈ "V" \{"s"}, satisfies the additional constraint: *"Non-deficient flows": The net flow "entering" the node v is non-negative, except for the source, which "produces" flow. That is: "x""f" ("v") ≥ 0 for all "v" ∈ "V" \{"s"}. A feasible flow, or just a flow, is a pseudo-flow that, for all "v" ∈ "V" \{"s", "t"}, satisfies the additional constraint: *"Flow conservation constraint": The total net flow entering a node v is zero for all nodes in the network except the source formula_2 and the sink formula_3, that is: "x""f" ("v") 0 for all "v" ∈ "V" \{"s", "t"}. In other words, for all nodes in the network except the source formula_2 and the sink formula_3, the total sum of the incoming flow of a node is equal to its outgoing flow (i.e. formula_4, for each vertex "v" ∈ "V" \{"s", "t"}). 
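A minimal sketch (plain Python; the dictionary representation and the function name are assumptions of this sketch) of checking the skew symmetry, capacity, and conservation constraints for a candidate flow:

```python
def is_feasible_flow(capacity, flow, source, sink):
    nodes = {u for u, v in capacity} | {v for u, v in capacity}
    for (u, v), f in flow.items():
        if f != -flow.get((v, u), 0):        # skew symmetry: f(u,v) = -f(v,u)
            return False
        if f > capacity.get((u, v), 0):      # capacity: f(u,v) <= c(u,v)
            return False
    for v in nodes - {source, sink}:         # conservation: net flow into v is zero
        if sum(flow.get((u, v), 0) for u in nodes) != 0:
            return False
    return True

capacity = {('s', 'a'): 3, ('a', 't'): 3}
flow     = {('s', 'a'): 2, ('a', 's'): -2, ('a', 't'): 2, ('t', 'a'): -2}
print(is_feasible_flow(capacity, flow, 's', 't'))   # True
```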
The value | "f" | of a feasible flow f for a network, is the net flow into the sink t of the flow network, that is: | "f" | "x""f" ("t"). Note, the flow value in a network is also equal to the total outgoing flow of source s, that is: | "f" | -"x""f" ("s"). Also, if we define "A" as a set of nodes in "G" such that "s" ∈ "A" and "t" ∉ "A", the flow value is equal to the total net flow going out of A (i.e. | "f" | "f" out("A") - "f" in("A")). The flow value in a network is the total amount of flow from s to t. Concepts useful to flow problems. Flow decomposition. Flow decomposition is a process of breaking down a given flow into a collection of path flows and cycle flows. Every flow through a network can be decomposed into one or more paths and corresponding quantities, such that each edge in the flow equals the sum of all quantities of paths that pass through it. Flow decomposition is a powerful tool in optimization problems to maximize or minimize specific flow parameters. Adding arcs and flows. We do not use multiple arcs within a network because we can combine those arcs into a single arc. To combine two arcs into a single arc, we add their capacities and their flow values, and assign those to the new arc: Along with the other constraints, the skew symmetry constraint must be remembered during this step to maintain the direction of the original pseudo-flow arc. Adding flow to an arc is the same as adding an arc with the capacity of zero. Residuals. The residual capacity of an arc e with respect to a pseudo-flow f is denoted "c""f", and it is the difference between the arc's capacity and its flow. That is, "c""f" ("e") "c"("e") - "f"("e"). From this we can construct a residual network, denoted "G""f" ("V", "E""f"), with a capacity function "c""f" which models the amount of "available" capacity on the set of arcs in "G" ("V", "E"). More specifically, capacity function "c""f" of each arc ("u", "v") in the residual network represents the amount of flow which can be transferred from "u" to "v" given the current state of the flow within the network. This concept is used in Ford–Fulkerson algorithm which computes the maximum flow in a flow network. Note that there can be an unsaturated path (a path with available capacity) from u to v in the residual network, even though there is no such path from u to v in the original network. Since flows in opposite directions cancel out, "decreasing" the flow from v to u is the same as "increasing" the flow from u to v. Augmenting paths. An augmenting path is a path ("u"1, "u"2, ..., "u""k") in the residual network, where "u"1 "s", "u""k" "t", and for all "u""i", "u""i" + 1 ("c""f" ("u""i", "u""i" + 1) &gt; 0) (1 ≤ i &lt; k). More simply, an augmenting path is an available flow path from the source to the sink. A network is at maximum flow if and only if there is no augmenting path in the residual network "G""f". The bottleneck is the minimum residual capacity of all the edges in a given augmenting path. See example explained in the "Example" section of this article. The flow network is at maximum flow if and only if it has a bottleneck with a value equal to zero. If any augmenting path exists, its bottleneck weight will be greater than 0. In other words, if there is a bottleneck value greater than 0, then there is an augmenting path from the source to the sink. 
However, we know that if there is any augmenting path, then the network is not at maximum flow, which in turn means that, if there is a bottleneck value greater than 0, then the network is not at maximum flow. The term "augmenting the flow" for an augmenting path means increasing the flow f of each arc in this augmenting path by the value of the bottleneck. Augmenting the flow corresponds to pushing additional flow along the augmenting path until there is no remaining available residual capacity in the bottleneck. Multiple sources and/or sinks. Sometimes, when modeling a network with more than one source, a supersource is introduced to the graph. This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink. Example. In Figure 1 you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity are denoted formula_5. Notice how the network upholds the capacity constraint and the flow conservation constraint. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. By the skew symmetry constraint, the flow from c to a is −2 because the flow from a to c is 2. In Figure 2 you see the residual network for the same given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero in Figure 1, for example for the edge formula_6. This network is not at maximum flow. There is available capacity along the paths formula_7, formula_8 and formula_9, which are then the augmenting paths. The bottleneck of the formula_7 path is equal to formula_10 formula_11 formula_12 formula_13. Applications. Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet. Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint is equivalent to Kirchhoff's current law. Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time. Classifying flow problems. The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph.
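Putting the previous definitions together, an Edmonds–Karp-style sketch (illustrative code; the toy graph is not the article's Figure 1) repeatedly finds an augmenting path by breadth-first search in the residual network, computes its bottleneck, and increases the flow of every arc on the path by that amount while maintaining skew symmetry.

```python
from collections import deque

def max_flow(capacity, s, t):
    nodes = {u for u, v in capacity} | {v for u, v in capacity}
    flow = {}
    cf = lambda u, v: capacity.get((u, v), 0) - flow.get((u, v), 0)   # residual capacity c_f
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:                 # BFS for an augmenting path
            u = queue.popleft()
            for v in nodes:
                if v not in parent and cf(u, v) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break                                        # no augmenting path: the flow is maximum
        path, v = [t], t
        while parent[v] is not None:
            v = parent[v]
            path.append(v)
        path.reverse()
        bottleneck = min(cf(u, v) for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):                 # augment by the bottleneck value
            flow[(u, v)] = flow.get((u, v), 0) + bottleneck
            flow[(v, u)] = -flow[(u, v)]                 # keep skew symmetry
    return sum(flow.get((s, v), 0) for v in nodes)

# toy graph (not the article's Figure 1)
print(max_flow({('s', 'a'): 5, ('a', 'c'): 3, ('c', 't'): 2, ('a', 't'): 1}, 's', 't'))  # 3
```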
There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved in polynomial time with various algorithms (see table). The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is the division of vertices such that the source is in one division and the sink is in another. In a multi-commodity flow problem, you have multiple sources and sinks, and various "commodities" which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through the "same" transportation network. In a minimum cost flow problem, each edge formula_14 has a given cost formula_15, and the cost of sending the flow formula_16 across the edge is formula_17. The objective is to send a given amount of flow from the source to the sink, at the lowest possible price. In a circulation problem, you have a lower bound formula_18 on the edges, in addition to the upper bound formula_19. Each edge also has a cost. Often, flow conservation holds for "all" nodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow with formula_20 and formula_21. The flow "circulates" through the network, hence the name of the problem. In a network with gains or generalized network each edge has a gain, a real number (not zero) such that, if the edge has gain "g", and an amount "x" flows into the edge at its tail, then an amount "gx" flows out at the head. In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating source of disease outbreaks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "x_f(u)=\\sum_{w \\in V} f(w,u)." }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "\\sum_{(u,v) \\in E} f(u,v) = \\sum_{(v,z) \\in E} f(v,z) " }, { "math_id": 5, "text": "f/c" }, { "math_id": 6, "text": "(d,c)" }, { "math_id": 7, "text": "(s,a,c,t)" }, { "math_id": 8, "text": "(s,a,b,d,t)" }, { "math_id": 9, "text": "(s,a,b,d,c,t)" }, { "math_id": 10, "text": "\\min(c(s,a)-f(s,a), c(a,c)-f(a,c), c(c,t)-f(c,t))" }, { "math_id": 11, "text": "=\\min(c_f(s,a), c_f(a,c), c_f(c,t))" }, { "math_id": 12, "text": "= \\min(5-3, 3-2, 2-1)" }, { "math_id": 13, "text": "= \\min(2, 1, 1) = 1" }, { "math_id": 14, "text": "u,v" }, { "math_id": 15, "text": "k(u,v)" }, { "math_id": 16, "text": "f(u,v)" }, { "math_id": 17, "text": "f(u,v) \\cdot k(u,v)" }, { "math_id": 18, "text": "\\ell(u,v)" }, { "math_id": 19, "text": "c(u,v)" }, { "math_id": 20, "text": "\\ell(t,s)" }, { "math_id": 21, "text": "c(t,s)" } ]
https://en.wikipedia.org/wiki?curid=645676
64571411
Total inorganic carbon
Sum of the inorganic carbon species Total inorganic carbon ("C"T or TIC) is the sum of the inorganic carbon species. Carbon compounds can be distinguished as either organic or inorganic, and dissolved or particulate, depending on their composition. Organic carbon forms the backbone of key components of organic compounds such as proteins, lipids, carbohydrates, and nucleic acids. Inorganic carbon is found primarily in simple compounds such as carbon dioxide (CO2), carbonic acid (H2CO3), bicarbonate (HCO3−), and carbonate (CO32−). Overview. The aquatic inorganic carbon system is composed of the various ionic, dissolved, solid, and/or gaseous forms of carbon dioxide in water. These species include dissolved carbon dioxide, carbonic acid, bicarbonate anion, carbonate anion, calcium carbonate, magnesium carbonate, and others. The relative amounts of each species in a body of water depend on physical variables including temperature and salinity, as well as chemical variables like pH and gas partial pressure. Variables like alkalinity and dissolved (or total) inorganic carbon further define a mass and charge balance that constrains the total state of the system. Given any two of the four central inorganic carbon system parameters (pH, alkalinity, dissolved inorganic carbon, partial pressure of carbon dioxide), the remainder may be derived by solving a system of equations that adhere to the principles of chemical thermodynamics. For most of the 20th century, chemical equilibria in marine and freshwater systems were calculated according to various conventions, which led to discrepancies among laboratories' calculations and limited scientific reproducibility. Since 1998, a family of software programs called CO2SYS has been widely used. These programs calculate chemical equilibria for aquatic inorganic carbon species and parameters. Their core function is to use any two of the four central inorganic carbon system parameters (pH, alkalinity, dissolved inorganic carbon, and partial pressure of carbon dioxide) to calculate various chemical properties of the system. The programs are widely used by oceanographers and limnologists to understand and predict chemical equilibria in natural waters. Inorganic carbon species. The inorganic carbon species include carbon dioxide, carbonic acid, bicarbonate anion, and carbonate. It is customary to express carbon dioxide and carbonic acid simultaneously as CO2*. "C"T is a key parameter when making measurements related to the pH of natural aqueous systems, and carbon dioxide flux estimates. formula_0 where "C"T is the total inorganic carbon, [CO2*] is the sum of the carbon dioxide and carbonic acid concentrations ([CO2] + [H2CO3]), [HCO3−] is the bicarbonate concentration, and [CO32−] is the carbonate concentration. These species are related by the following pH-driven chemical equilibria: &lt;chem&gt;CO2 + H2O &lt;=&gt; H2CO3 &lt;=&gt; H+ + HCO3- &lt;=&gt; 2 H+ + CO3^2-&lt;/chem&gt; The concentrations of the different species of DIC (and which species is dominant) depend on the pH of the solution, as shown by a Bjerrum plot. Total inorganic carbon is typically measured by the acidification of the sample, which drives the equilibria to CO2. This gas is then sparged from solution and trapped, and the quantity trapped is then measured, usually by infrared spectroscopy. Marine carbon. Marine carbon is further separated into particulate and dissolved phases. These pools are operationally defined by physical separation – dissolved carbon passes through a 0.2 μm filter, and particulate carbon does not.
There are two main types of inorganic carbon found in the oceans: dissolved inorganic carbon (DIC) and particulate inorganic carbon (PIC). Some of the inorganic carbon species in the ocean, such as bicarbonate and carbonate, are major contributors to alkalinity, a natural ocean buffer that prevents drastic changes in acidity (or pH). The marine carbon cycle also affects the reaction and dissolution rates of some chemical compounds, regulates the amount of carbon dioxide in the atmosphere, and thereby influences Earth's temperature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
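As a numerical illustration of the pH-driven equilibria above, the sketch below computes the Bjerrum-plot fractions of CO2*, HCO3− and CO32− at a given pH. The dissociation constants are assumed textbook freshwater values at 25 °C (pK1 ≈ 6.35, pK2 ≈ 10.33) and differ in seawater; the total DIC value used in the example is illustrative only.

```python
def dic_speciation(pH, dic_total, pK1=6.35, pK2=10.33):
    """Split total inorganic carbon into [CO2*], [HCO3-], [CO3^2-] at the given pH."""
    h = 10.0 ** (-pH)
    K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + K1 * h + K1 * K2
    a_co2, a_hco3, a_co3 = h * h / denom, K1 * h / denom, K1 * K2 / denom
    return dic_total * a_co2, dic_total * a_hco3, dic_total * a_co3

for pH in (4.0, 6.35, 8.1, 10.33, 12.0):
    co2, hco3, co3 = dic_speciation(pH, dic_total=2.0e-3)   # mol/kg, an illustrative value
    print(f"pH {pH:5.2f}:  CO2* {co2:.2e}  HCO3- {hco3:.2e}  CO3^2- {co3:.2e}")
```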
[ { "math_id": 0, "text": "C_T = \\ce{[CO2^\\star]{} + [HCO3^-]{} + [CO3^{2-}]}" } ]
https://en.wikipedia.org/wiki?curid=64571411
64572373
Phase-space wavefunctions
Phase-space representation of quantum state vectors is a formulation of quantum mechanics elaborating the phase-space formulation with a Hilbert space. It "is obtained within the framework of the relative-state formulation. For this purpose, the Hilbert space of a quantum system is enlarged by introducing an auxiliary quantum system. Relative-position state and relative-momentum state are defined in the extended Hilbert space of the composite quantum system and expressions of basic operators such as canonical position and momentum operators, acting on these states, are obtained." Thus, it is possible to assign a meaning to the wave function in phase space, formula_0, as a quasiamplitude associated with a quasiprobability distribution. The first wave-function approach to quantum mechanics in phase space was introduced by Torres-Vega and Frederick in 1990 (also see). It is based on a generalised Husimi distribution. In 2004 Oliveira et al. developed a new wave-function formalism in phase space where the wave function is associated with the Wigner quasiprobability distribution by means of the Moyal product. An advantage might be that off-diagonal Wigner functions used in superpositions are treated in an intuitive way, formula_1; gauge theories are also treated in an operator form. Phase space operators. Instead of thinking in terms of the multiplication of functions using the star product, one can think in terms of operators acting on functions in phase space. In the Torres-Vega and Frederick approach the phase-space operators are formula_2 with formula_3 and formula_4 In Oliveira's approach the phase-space operators are formula_5 with formula_6 formula_7 In the general case formula_8 and formula_9 with formula_10, where formula_11, formula_12, formula_13 and formula_14 are constants. These operators satisfy the uncertainty principle: formula_15 Symplectic Hilbert space. To associate the Hilbert space, formula_16, with the phase space formula_17, we consider the set of square-integrable complex functions, formula_18, on formula_17, such that formula_19 Then we can write formula_20, with formula_21 where formula_22 is the dual vector of formula_23. This symplectic Hilbert space is denoted by formula_24. An association with the Schrödinger wavefunction can be made through formula_25; letting formula_26, we have formula_27. Then formula_28. Torres-Vega–Frederick representation. With the operators of position and momentum, a Schrödinger picture is developed in phase space: formula_29 The Torres-Vega–Frederick distribution is formula_30 Oliveira representation. With the aid of the star product, it is now possible to construct a Schrödinger picture in phase space for formula_18: formula_31 Differentiating both sides with respect to formula_32, we have formula_33 therefore, the above equation plays the same role as the Schrödinger equation in usual quantum mechanics. To show that formula_34, we take the 'Schrödinger equation' in phase space and star-multiply on the right by formula_35: formula_36 where formula_37 is the classical Hamiltonian of the system. Taking the complex conjugate, formula_38 and subtracting the two equations, we get formula_39 which is the time evolution of the Wigner function; for this reason formula_40 is sometimes called a quasiamplitude of probability. The formula_41-genvalues are given by the time-independent equation formula_42.
Star-multiplying by formula_35 on the right, we obtain formula_43 Therefore, the static Wigner distribution function is a formula_41-genfunction of the formula_41-genvalue equation, a result well known in the usual phase-space formulation of quantum mechanics. In the case where formula_44, worked out at the beginning of the section, the Oliveira approach and the phase-space formulation are indistinguishable, at least for pure states. Equivalence of representations. As stated before, the first wave-function formulation of quantum mechanics was developed by Torres-Vega and Frederick; its phase-space operators are given by formula_3 and formula_4 These operators are obtained by transforming the operators formula_45 and formula_46 (developed in the same article) as formula_47 and formula_48 where formula_49. This representation is sometimes associated with the Husimi distribution and was shown to coincide with the totality of coherent-state representations for the Heisenberg–Weyl group. The Wigner quasiamplitude, formula_40, and the Torres-Vega–Frederick wave function, formula_50, are related by formula_51 where formula_52 and formula_53. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
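The commutation relation formula_15 for the general operators formula_8 and formula_9 can be checked symbolically. The sketch below (SymPy assumed available; the helper names are illustrative) verifies that the commutator acting on a test function ψ(x, p) equals iħ(βγ − αδ)ψ, which reduces to iħψ under the stated condition formula_10.

```python
import sympy as sp

x, p, hbar, alpha, beta, gamma, delta = sp.symbols('x p hbar alpha beta gamma delta')
psi = sp.Function('psi')(x, p)

X_hat = lambda f: alpha * x * f + sp.I * beta  * hbar * sp.diff(f, p)   # general position operator
P_hat = lambda f: gamma * p * f + sp.I * delta * hbar * sp.diff(f, x)   # general momentum operator

commutator = sp.expand(X_hat(P_hat(psi)) - P_hat(X_hat(psi)))
expected   = sp.expand(sp.I * hbar * (beta * gamma - alpha * delta) * psi)
print(sp.simplify(commutator - expected))   # 0, i.e. [x-hat, p-hat] = i*hbar when gamma*beta - alpha*delta = 1
```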
[ { "math_id": 0, "text": "\\psi(x,p,t)" }, { "math_id": 1, "text": "\\psi_1\\star\\psi_2" }, { "math_id": 2, "text": "\\widehat{F}_{{}_\\text{TV}}(\\widehat{x},\\widehat{p})=f\\bigg(\\frac{1}{2}x+i\\hbar {\\frac {\\partial }{\\partial p}},\\;\\frac {1}{2}}p-i\\hbar {\\frac {\\partial }{\\partial x}\\bigg) ," }, { "math_id": 3, "text": "\\widehat{x}_{{}_\\text{TV}}=\\frac{1}{2}x+i\\hbar\\frac{\\partial}{\\partial p} ," }, { "math_id": 4, "text": "\\widehat{p\\,}_{{}_\\text{TV}}=\\frac{1}{2}p-i\\hbar\\frac{\\partial}{\\partial x} ." }, { "math_id": 5, "text": "\\widehat{F}_w=f(x,p)= f\\star=\\left(x+\\tfrac{i \\hbar}{2} \\frac{\\partial}{\\partial p} , p - \\tfrac{i \\hbar}{2} \\frac{\\partial}{\\partial x}\\right)" }, { "math_id": 6, "text": "\\widehat{p\\,}_w=p\\star= p-i\\frac{\\hbar}{2}\\partial_x ," }, { "math_id": 7, "text": "\\widehat{x}_w=x\\star=x+i\\frac{\\hbar}{2}\\partial_p ." }, { "math_id": 8, "text": "\\widehat{x}=\\alpha x+i\\beta\\hbar\\frac{\\partial}{\\partial p} ," }, { "math_id": 9, "text": "\\widehat{p\\,}=\\gamma p+i\\delta\\hbar\\frac{\\partial}{\\partial x} ," }, { "math_id": 10, "text": "\\gamma\\beta-\\alpha\\delta=1" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "\\beta" }, { "math_id": 13, "text": "\\gamma" }, { "math_id": 14, "text": "\\delta" }, { "math_id": 15, "text": "[\\widehat{x},\\widehat{p\\,}]=i\\hbar ." }, { "math_id": 16, "text": "\\mathcal{H}" }, { "math_id": 17, "text": "\\Gamma" }, { "math_id": 18, "text": "\\psi(x,p)" }, { "math_id": 19, "text": "\\int dp\\,dx\\, \\psi^{\\ast}(x,p)\\psi(x,p) < \\infty ." }, { "math_id": 20, "text": "\\psi(x,p)=\\langle x,p|\\psi\\rangle" }, { "math_id": 21, "text": "\\int dp\\, dx\\; |x,p\\rangle\\langle x,p| =1 ," }, { "math_id": 22, "text": "\\langle\\psi|" }, { "math_id": 23, "text": "|\\psi\\rangle" }, { "math_id": 24, "text": "\\mathcal{H}(\\Gamma)" }, { "math_id": 25, "text": "\\psi(q,p)=e^{-ixp/2\\hbar}\\int g(x')\\phi(x+x')e^{-(i/\\hbar)px'}dx'" }, { "math_id": 26, "text": "g(x')=\\phi^*(-\\frac{z}{2})" }, { "math_id": 27, "text": "\\psi(q,p)=\\int \\phi(x-\\frac{z}{2})\\phi(x+\\frac{z}{2})e^{-(i/\\hbar)pz}dz" }, { "math_id": 28, "text": "\\psi(x,p)\\propto W(q,p)" }, { "math_id": 29, "text": "i\\hbar\\frac{\\partial}{\\partial t}\\psi(x,p,t)=\\widehat{H}_{{}_\\text{TV}}\\psi(x,p,t) ." }, { "math_id": 30, "text": "f_{{}_\\text{TV}}=|\\psi_{{}_\\text{TV}}(q,p)|^2 ." }, { "math_id": 31, "text": "\\psi(x,p,t)=e^{-\\frac{i}{\\hbar}H\\star\\,t}\\psi(x,p) ," }, { "math_id": 32, "text": "t" }, { "math_id": 33, "text": "i\\hbar\\frac{\\partial}{\\partial t}\\psi(x,p,t)=H\\star\\psi(x,p,t) ," }, { "math_id": 34, "text": "W(x,p,t)=\\psi(x,p,t)\\star\\psi^\\dagger(x,p,t)" }, { "math_id": 35, "text": "\\psi^\\dagger(x,p,t)" }, { "math_id": 36, "text": "i\\hbar\\frac{\\partial \\psi}{\\partial t}\\star\\psi^\\dagger=H\\star\\psi\\star\\psi^\\dagger ," }, { "math_id": 37, "text": "H" }, { "math_id": 38, "text": "-i\\hbar\\,\\psi\\star\\frac{\\partial \\psi^\\dagger}{\\partial t}=\\psi\\star\\psi^\\dagger\\star H ," }, { "math_id": 39, "text": "\\frac{\\partial}{\\partial t}(\\psi\\star\\psi^\\dagger)=-\\frac{1}{i\\hbar}[(\\psi\\star\\psi^\\dagger)\\star H-H\\star(\\psi\\star\\psi^\\dagger)] ," }, { "math_id": 40, "text": "\\psi" }, { "math_id": 41, "text": "\\star" }, { "math_id": 42, "text": "H\\star\\psi=E\\psi" }, { "math_id": 43, "text": "H\\star W= E\\,W ." 
}, { "math_id": 44, "text": "\\psi(q,p)\\propto W(q,p)" }, { "math_id": 45, "text": "\\bar{x}_{{}_\\text{TV}}=x +i\\hbar \\frac{\\partial}{\\partial p}" }, { "math_id": 46, "text": "\\bar{p}_{{}_\\text{TV}}=-i\\hbar\\frac{\\partial}{\\partial q}" }, { "math_id": 47, "text": "U^{-1}\\bar{x}_{{}_{TV}}U " }, { "math_id": 48, "text": "U^{-1}\\bar{p}_{{}_{TV}}U ," }, { "math_id": 49, "text": "U=\\exp(i\\frac{x\\,p}{2\\hbar})" }, { "math_id": 50, "text": "\\psi_{{}_\\text{TV}}" }, { "math_id": 51, "text": "\\begin{aligned}\n\\widehat{x}_{{}_\\text{TV}}\\psi_{{}_\\text{TV}}=(2\\widehat{x}_w\\otimes\\widehat{1})\\psi_{w},\\\\\n\\widehat{p}_{{}_\\text{TV}}\\psi_{{}_\\text{TV}}=(\\widehat{1}\\otimes\\widehat{p}_w)\\psi_{w},\n\\end{aligned}" }, { "math_id": 52, "text": "\\widehat{x}_w=x+\\frac{i\\hbar}{2}\\partial_p" }, { "math_id": 53, "text": "\\widehat{p}_w=p-\\frac{i\\hbar}{2}\\partial_x" } ]
https://en.wikipedia.org/wiki?curid=64572373
645738
Erosion (morphology)
Basic operation in mathematical morphology Erosion (usually represented by ⊖) is one of two fundamental operations (the other being dilation) in morphological image processing from which all other morphological operations are based. It was originally defined for binary images, later being extended to grayscale images, and subsequently to complete lattices. The erosion operation usually uses a structuring element for probing and reducing the shapes contained in the input image. Binary erosion. In binary morphology, an image is viewed as a subset of a Euclidean space formula_0 or the integer grid formula_1, for some dimension "d". The basic idea in binary morphology is to probe an image with a simple, pre-defined shape, drawing conclusions on how this shape fits or misses the shapes in the image. This simple "probe" is called structuring element, and is itself a binary image (i.e., a subset of the space or grid). Let "E" be a Euclidean space or an integer grid, and "A" a binary image in "E". The erosion of the binary image "A" by the structuring element "B" is defined by: formula_2, where "B""z" is the translation of "B" by the vector z, i.e., formula_3, formula_4. When the structuring element "B" has a center (e.g., a disk or a square), and this center is located on the origin of "E", then the erosion of "A" by "B" can be understood as the locus of points reached by the center of "B" when "B" moves inside "A". For example, the erosion of a square of side 10, centered at the origin, by a disc of radius 2, also centered at the origin, is a square of side 6 centered at the origin. The erosion of "A" by "B" is also given by the expression: formula_5, where "A−b" denotes the translation of "A" by "-b". This is more generally also known as a Minkowski difference. Example. Suppose A is a 13 x 13 matrix and B is a 3 x 3 matrix: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Assuming that the origin B is at its center, for each pixel in A superimpose the origin of B, if B is completely contained by A the pixel is retained, else deleted. Therefore the Erosion of A by B is given by this 13 x 13 matrix. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 1 1 1 1 0 0 1 1 1 1 0 0 0 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 This means that only when B is completely contained inside A that the pixels values are retained, otherwise it gets deleted or eroded. Grayscale erosion. In grayscale morphology, images are functions mapping a Euclidean space or grid "E" into formula_11, where formula_12 is the set of reals, formula_13 is an element larger than any real number, and formula_14 is an element smaller than any real number. Denoting an image by "f(x)" and the grayscale structuring element by "b(x)", where B is the space that b(x) is defined, the grayscale erosion of "f" by "b" is given by formula_15, where "inf" denotes the infimum. In other words the erosion of a point is the minimum of the points in its neighborhood, with that neighborhood defined by the structuring element. 
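A direct NumPy sketch of the binary definition above: the output is 1 exactly where the translated structuring element fits entirely inside A. The function name and the origin-at-centre convention are choices of this sketch, and the test image mirrors the 13 × 13 example, which appears to be all ones except a single 0 in its second row.

```python
import numpy as np

def binary_erosion(A, B):
    """Binary erosion of A by B, with the origin of B at its centre."""
    A = np.asarray(A, dtype=bool)
    B = np.asarray(B, dtype=bool)
    pad_r, pad_c = B.shape[0] // 2, B.shape[1] // 2
    padded = np.pad(A, ((pad_r, pad_r), (pad_c, pad_c)), constant_values=False)
    out = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            window = padded[i:i + B.shape[0], j:j + B.shape[1]]
            out[i, j] = np.all(window[B])       # is the translated B contained in A?
    return out.astype(int)

A = np.ones((13, 13), dtype=int)
A[1, 6] = 0                                     # the single 0 of the example matrix
B = np.ones((3, 3), dtype=int)
print(binary_erosion(A, B))   # zero border, plus zeros in the 3x3 neighbourhood of A[1, 6]
```

Replacing the containment test with a local minimum over the window gives the flat grayscale erosion just described, i.e. a moving-minimum filter.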
In this way it is similar to many other kinds of image filters, like the median filter and the Gaussian filter. Erosions on complete lattices. Complete lattices are partially ordered sets, where every subset has an infimum and a supremum. In particular, a complete lattice contains a least element and a greatest element (also denoted "universe"). Let formula_16 be a complete lattice, with infimum and supremum symbolized by formula_17 and formula_18, respectively. Its universe and least element are symbolized by "U" and formula_19, respectively. Moreover, let formula_20 be a collection of elements from "L". An erosion in formula_16 is any operator formula_21 that distributes over the infimum, and preserves the universe. I.e.:
[ { "math_id": 0, "text": "\\mathbb{R}^d" }, { "math_id": 1, "text": "\\mathbb{Z}^d" }, { "math_id": 2, "text": "A \\ominus B = \\{z\\in E \\mid B_{z} \\subseteq A\\}" }, { "math_id": 3, "text": "B_z = \\{b+z \\mid b\\in B\\}" }, { "math_id": 4, "text": "\\forall z\\in E" }, { "math_id": 5, "text": "A \\ominus B = \\bigcap_{b\\in B} A_{-b}" }, { "math_id": 6, "text": "A\\subseteq C" }, { "math_id": 7, "text": "A\\ominus B \\subseteq C\\ominus B" }, { "math_id": 8, "text": "A\\ominus B\\subseteq A" }, { "math_id": 9, "text": "(A\\ominus B)\\ominus C = A\\ominus (B\\oplus C)" }, { "math_id": 10, "text": "\\oplus" }, { "math_id": 11, "text": "\\mathbb{R}\\cup\\{\\infty,-\\infty\\}" }, { "math_id": 12, "text": "\\mathbb{R}" }, { "math_id": 13, "text": "\\infty" }, { "math_id": 14, "text": "-\\infty" }, { "math_id": 15, "text": "(f\\ominus b)(x)=\\inf_{y\\in B}[f(x+y)-b(y)]" }, { "math_id": 16, "text": "(L,\\leq)" }, { "math_id": 17, "text": "\\wedge" }, { "math_id": 18, "text": "\\vee" }, { "math_id": 19, "text": "\\emptyset" }, { "math_id": 20, "text": "\\{ X_{i} \\}" }, { "math_id": 21, "text": "\\varepsilon: L\\rightarrow L" }, { "math_id": 22, "text": "\\bigwedge_{i}\\varepsilon(X_i)=\\varepsilon\\left(\\bigwedge_{i} X_i\\right)" }, { "math_id": 23, "text": "\\varepsilon(U)=U" } ]
https://en.wikipedia.org/wiki?curid=645738
645744
Dilation (morphology)
Operation in mathematical morphology Dilation (usually represented by ⊕) is one of the basic operations in mathematical morphology. Originally developed for binary images, it has been expanded first to grayscale images, and then to complete lattices. The dilation operation usually uses a structuring element for probing and expanding the shapes contained in the input image. Binary dilation. In binary morphology, dilation is a shift-invariant (translation invariant) operator, equivalent to Minkowski addition. A binary image is viewed in mathematical morphology as a subset of a Euclidean space R"d" or the integer grid Z"d", for some dimension "d". Let "E" be a Euclidean space or an integer grid, "A" a binary image in "E", and "B" a structuring element regarded as a subset of R"d". The dilation of "A" by "B" is defined by formula_0 where "A""b" is the translation of "A" by "b". Dilation is commutative, also given by formula_1. If "B" has a center on the origin, then the dilation of "A" by "B" can be understood as the locus of the points covered by "B" when the center of "B" moves inside "A". The dilation of a square of size 10, centered at the origin, by a disk of radius 2, also centered at the origin, is a square of side 14, with rounded corners, centered at the origin. The radius of the rounded corners is 2. The dilation can also be obtained by formula_2, where "B""s" denotes the symmetric of "B", that is, formula_3. Example. Suppose A is the following 11 x 11 matrix and B is the following 3 x 3 matrix: 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 1 1 1 0 0 1 1 1 1 0 0 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 0 0 1 1 1 1 0 1 1 1 0 1 1 0 0 0 1 1 1 1 0 1 1 1 0 1 1 0 0 0 1 1 1 1 0 0 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 For each pixel in A that has a value of 1, superimpose B, with the center of B aligned with the corresponding pixel in A. Each pixel of every superimposed B is included in the dilation of A by B. The dilation of A by B is given by this 11 x 11 matrix. 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 Properties of binary dilation. Here are some properties of the binary dilation operator Grayscale dilation. In grayscale morphology, images are functions mapping a Euclidean space or grid "E" into formula_8, where formula_9 is the set of reals, formula_10 is an element greater than any real number, and formula_11 is an element less than any real number. Grayscale structuring elements are also functions of the same format, called "structuring functions". Denoting an image by "f"("x") and the structuring function by "b"("x"), the grayscale dilation of "f" by "b" is given by formula_12 where "sup" denotes the supremum. Flat structuring functions. It is common to use flat structuring elements in morphological applications. Flat structuring functions are functions "b"("x") in the form formula_13 where formula_14. In this case, the dilation is greatly simplified, and given by formula_15 In the bounded, discrete case ("E" is a grid and "B" is bounded), the supremum operator can be replaced by the maximum. Thus, dilation is a particular case of order statistics filters, returning the maximum value within a moving window (the symmetric of the structuring function support "B"). Dilation on complete lattices. 
Complete lattices are partially ordered sets, where every subset has an infimum and a supremum. In particular, a complete lattice contains a least element and a greatest element (also denoted "universe"). Let formula_16 be a complete lattice, with infimum and supremum symbolized by formula_17 and formula_18, respectively. Its universe and least element are symbolized by "U" and formula_19, respectively. Moreover, let formula_20 be a collection of elements from "L". A dilation is any operator formula_21 that distributes over the supremum, and preserves the least element. That is, the following are true:
[ { "math_id": 0, "text": "A \\oplus B = \\bigcup_{b\\in B} A_b," }, { "math_id": 1, "text": "A \\oplus B = B\\oplus A = \\bigcup_{a\\in A} B_a" }, { "math_id": 2, "text": "A \\oplus B = \\{z \\in E \\mid (B^s)_z \\cap A\\neq \\varnothing\\}" }, { "math_id": 3, "text": "B^s=\\{x\\in E \\mid -x \\in B\\}" }, { "math_id": 4, "text": "A\\subseteq C" }, { "math_id": 5, "text": "A\\oplus B \\subseteq C\\oplus B" }, { "math_id": 6, "text": "A\\subseteq A\\oplus B" }, { "math_id": 7, "text": "(A\\oplus B)\\oplus C = A\\oplus (B\\oplus C)" }, { "math_id": 8, "text": "\\mathbb{R}\\cup\\{\\infty,-\\infty\\}" }, { "math_id": 9, "text": "\\mathbb{R}" }, { "math_id": 10, "text": "\\infty" }, { "math_id": 11, "text": "-\\infty" }, { "math_id": 12, "text": "(f\\oplus b)(x)=\\sup_{y\\in E}[f(y)+b(x-y)]," }, { "math_id": 13, "text": "b(x)=\\left\\{\\begin{array}{ll}0,&x\\in B,\\\\-\\infty,&\\text{otherwise},\\end{array}\\right." }, { "math_id": 14, "text": "B\\subseteq E" }, { "math_id": 15, "text": "(f\\oplus b)(x)=\\sup_{y\\in E}[f(y)+b(x-y)]=\\sup_{z\\in E}[f(x-z)+b(z)]=\\sup_{z\\in B}[f(x-z)]." }, { "math_id": 16, "text": "(L,\\leq)" }, { "math_id": 17, "text": "\\wedge" }, { "math_id": 18, "text": "\\vee" }, { "math_id": 19, "text": "\\varnothing" }, { "math_id": 20, "text": "\\{ X_{i} \\}" }, { "math_id": 21, "text": "\\delta: L\\rightarrow L" }, { "math_id": 22, "text": "\\bigvee_i\\delta(X_i)=\\delta\\left(\\bigvee_i X_i\\right)," }, { "math_id": 23, "text": "\\delta(\\varnothing)=\\varnothing." } ]
https://en.wikipedia.org/wiki?curid=645744
645748
Opening (morphology)
In mathematical morphology, opening is the dilation of the erosion of a set A by a structuring element B: formula_0 where formula_1 and formula_2 denote erosion and dilation, respectively. Together with closing, the opening serves in computer vision and image processing as a basic workhorse of morphological noise removal. Opening removes small objects from the foreground (usually taken as the bright pixels) of an image, placing them in the background, while closing removes small holes in the foreground, changing small islands of background into foreground. These techniques can also be used to find specific shapes in an image. Opening can be used to find things into which a specific structuring element can fit (edges, corners, ...). One can think of "B" sweeping around the inside of the boundary of "A", so that it does not extend beyond the boundary, and shaping the "A" boundary around the boundary of the element. Example. Perform Erosion formula_9: Suppose A is the following 16 x 15 matrix and B is the following 3 x 3 matrix: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 0 0 1 1 1 0 0 1 1 1 1 0 0 0 1 1 1 1 1 0 0 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 0 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 First, perform Erosion on A by B formula_10): Assuming that the origin of B is at its center, for each pixel in A superimpose the origin of B, if B is completely contained by A the pixel is retained, else deleted. Therefore the Erosion of A by B is given by this 16 x 15 matrix. formula_9 is given by: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Then, perform Dilation on the result of Erosion by B: formula_11: For each pixel in formula_9 that has a value of 1, superimpose B, with the center of B aligned with the corresponding pixel in formula_9. Each pixel of every superimposed B is included in the dilation of A by B. The dilation of formula_9 by B is given by this 16 x 15 matrix. formula_11 is given by : 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Therefore, the opening operation formula_12 removes small protrusions from the boundary of the object represented by A, while preserving the overall shape and size of the larger components. Extension: Opening by reconstruction. 
In morphological opening formula_11 , the erosion operation removes objects that are smaller than structuring element B and the dilation operation (approximately) restores the size and shape of the remaining objects. However, restoration accuracy in the dilation operation depends highly on the type of structuring element and the shape of the restoring objects. The opening by reconstruction method is able to restore the objects more completely after erosion has been applied. It is defined as the reconstruction by geodesic dilation of formula_13 erosions of formula_14 by formula_15 with respect to formula_14 : formula_16 where formula_17 denotes a marker image and formula_14 is a mask image in morphological reconstruction by dilation. formula_18 formula_19 denotes geodesic dilation with formula_20 iterations until stability, i.e., such that formula_21 Since formula_22, the marker image is limited in the growth region by the mask image, so the dilation operation on the marker image will not expand beyond the mask image. As a result, the marker image is a subset of the mask image formula_23 (Strictly, this holds for binary masks only. However, similar statements hold when the mask is not binary.) The images below present a simple opening-by-reconstruction example which extracts the vertical strokes from an input text image. Since the original image is converted from grayscale to binary image, it has a few distortions in some characters so that same characters might have different vertical lengths. In this case, the structuring element is an 8-pixel vertical line which is applied in the erosion operation in order to find objects of interest. Moreover, morphological reconstruction by dilation, formula_24 iterates formula_25 times until the resulting image converges. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
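As an illustrative sketch of the two operations described above (SciPy's ndimage module assumed available; the toy image, names, and the use of an elementary 3 × 3 connectivity element for the geodesic dilation step are assumptions of this sketch, the last being a common practical choice for the reconstruction step):

```python
import numpy as np
from scipy import ndimage

def opening(A, B):
    """Opening = erosion followed by dilation with the same structuring element."""
    return ndimage.binary_dilation(ndimage.binary_erosion(A, B), B)

def opening_by_reconstruction(F, B, n=1, conn=np.ones((3, 3), dtype=bool)):
    """Marker = n erosions of F by B, then geodesic dilation w.r.t. the mask F until stability."""
    marker = ndimage.binary_erosion(F, B, iterations=n)
    while True:
        expanded = ndimage.binary_dilation(marker, conn) & F   # one geodesic dilation step
        if np.array_equal(expanded, marker):                   # stability reached
            return expanded
        marker = expanded

F = np.zeros((9, 9), dtype=bool)
F[1:8, 2] = True                  # a tall vertical stroke ...
F[4, 2:7] = True                  # ... with a short horizontal arm attached to it
B = np.ones((5, 1), dtype=bool)   # vertical line structuring element

print(opening(F, B).astype(int))                    # keeps the vertical stroke, loses the arm
print(opening_by_reconstruction(F, B).astype(int))  # restores the whole connected object
```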
[ { "math_id": 0, "text": "A\\circ B = (A\\ominus B)\\oplus B, \\, " }, { "math_id": 1, "text": "\\ominus" }, { "math_id": 2, "text": "\\oplus" }, { "math_id": 3, "text": "(A\\circ B)\\circ B = A\\circ B" }, { "math_id": 4, "text": "A\\subseteq C" }, { "math_id": 5, "text": "A\\circ B \\subseteq C\\circ B" }, { "math_id": 6, "text": "A\\circ B\\subseteq A" }, { "math_id": 7, "text": "A \\bullet B = (A^{c} \\circ B^{c})^{c}" }, { "math_id": 8, "text": "\\bullet" }, { "math_id": 9, "text": "(A\\ominus B)" }, { "math_id": 10, "text": "(A\\ominus B " }, { "math_id": 11, "text": "(A\\ominus B)\\oplus B " }, { "math_id": 12, "text": "A\\circ B" }, { "math_id": 13, "text": "n " }, { "math_id": 14, "text": "F " }, { "math_id": 15, "text": "B " }, { "math_id": 16, "text": "O_R^{(n)}(F) = R_F^{D}[ (F\\ominus nB)]," }, { "math_id": 17, "text": "(F\\ominus nB) " }, { "math_id": 18, "text": "R_F^{D}[ (F\\ominus nB)] = D_F^{(k)}[ (F\\ominus nB)], " }, { "math_id": 19, "text": "D " }, { "math_id": 20, "text": "k " }, { "math_id": 21, "text": " D_F^{(k)}[ (F\\ominus nB)] = D_F^{(k-1)}[ (F\\ominus nB)]. " }, { "math_id": 22, "text": "D_F^{(1)}[ (F\\ominus nB)] = ([ (F\\ominus nB)]\\oplus B)\\cap F " }, { "math_id": 23, "text": "(F\\ominus nB)\\subseteq F. " }, { "math_id": 24, "text": "R_F^{D}[ (F\\ominus nB)] = D_F^{(k)}[ (F\\ominus nB)] " }, { "math_id": 25, "text": "k = 9 " } ]
https://en.wikipedia.org/wiki?curid=645748
645751
Closing (morphology)
In mathematical morphology, the closing of a set (binary image) "A" by a structuring element "B" is the erosion of the dilation of that set, formula_0 where formula_1 and formula_2 denote the dilation and erosion, respectively. In image processing, closing is, together with opening, the basic workhorse of morphological noise removal. Opening removes small objects, while closing removes small holes. Example. Perform Dilation ( formula_3 ): Suppose A is the following 11 x 11 matrix and B is the following 3 x 3 matrix: 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 1 1 1 0 0 1 1 1 1 0 0 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 0 0 0 1 1 0 1 1 1 0 1 1 1 1 0 0 0 1 1 0 1 1 1 0 1 0 0 1 0 0 0 1 1 0 1 1 1 0 1 0 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 For each pixel in A that has a value of 1, superimpose B, with the center of B aligned with the corresponding pixel in A. Each pixel of every superimposed B is included in the dilation of A by B. The dilation of A by B is given by this 11 x 11 matrix. formula_4 is given by : 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Now, Perform Erosion on the result: (formula_3) formula_5 formula_3 is the following 11 x 11 matrix and B is the following 3 x 3 matrix: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Assuming that the origin B is at its center, for each pixel in formula_3 superimpose the origin of B, if B is completely contained by A the pixel is retained, else deleted. Therefore the Erosion of formula_3 by B is given by this 11 x 11 matrix. (formula_3) formula_5 is given by: 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 0 0 0 1 1 0 0 1 1 1 1 0 0 0 1 1 0 0 1 1 1 1 0 0 0 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 Therefore Closing Operation fills small holes and smoothes the object by filling narrow gaps.
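A sketch mirroring the definition (SciPy assumed available; the toy image is illustrative): dilate, then erode, and compare with the library's one-call closing. The one-pixel hole inside the square is filled while the outline is unchanged.

```python
import numpy as np
from scipy import ndimage

A = np.zeros((9, 9), dtype=int)
A[2:7, 2:7] = 1
A[4, 4] = 0                                   # a one-pixel hole inside the square
B = np.ones((3, 3), dtype=int)

closed = ndimage.binary_erosion(ndimage.binary_dilation(A, B), B)   # (A dilate B) erode B
print(np.array_equal(closed, ndimage.binary_closing(A, B)))          # True
print(int(closed[4, 4]))                                             # 1: the hole has been filled
```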
[ { "math_id": 0, "text": "A\\bullet B = (A\\oplus B)\\ominus B, \\, " }, { "math_id": 1, "text": "\\oplus" }, { "math_id": 2, "text": "\\ominus" }, { "math_id": 3, "text": "A\\oplus B" }, { "math_id": 4, "text": "A \\oplus B" }, { "math_id": 5, "text": " \\ominus B" }, { "math_id": 6, "text": "(A\\bullet B)\\bullet B=A\\bullet B" }, { "math_id": 7, "text": "A\\subseteq C" }, { "math_id": 8, "text": "A\\bullet B \\subseteq C\\bullet B" }, { "math_id": 9, "text": "A\\subseteq A\\bullet B" } ]
https://en.wikipedia.org/wiki?curid=645751
64583610
Reversible solid oxide cell
A reversible solid oxide cell (rSOC) is a solid-state electrochemical device that is operated alternatively as a solid oxide fuel cell (SOFC) and a solid oxide electrolysis cell (SOEC). Like SOFCs, rSOCs are made of a dense electrolyte sandwiched between two porous electrodes. Their operating temperature ranges from 600°C to 900°C, hence they benefit from enhanced reaction kinetics and increased efficiency with respect to low-temperature electrochemical technologies. When utilized as a fuel cell, the reversible solid oxide cell is capable of oxidizing one or more gaseous fuels to produce electricity and heat. When used as an electrolysis cell, the same device can consume electricity and heat to convert the products of the oxidation reaction back into valuable fuels. These gaseous fuels can be pressurized and stored for later use. For this reason, rSOCs have recently been receiving increased attention due to their potential as an energy storage solution on the seasonal scale. Technology description. Cell structure and working principle. Reversible solid oxide cells (rSOCs), like solid oxide fuel cells, are made of four main components: the electrolyte, the fuel and oxygen electrodes, and the interconnects. The electrodes are porous layers that favor the diffusion of the reactants inside their structure and catalyze the electrochemical reactions. In the single-mode technologies, SOFCs and SOECs, each electrode serves a single purpose, hence it is referred to by its specific name. The anode is where the oxidation reaction occurs, while the cathode is where the reduction reaction takes place. In reversible solid oxide cells, on the other hand, both modalities can occur alternatively in the same device. For this reason, the generic names of "fuel electrode" and "oxygen electrode" are preferred instead. On the fuel electrode, the reactions involving the fuel oxidation (SOFC modality) or the reduction of the products to regenerate the fuel (SOEC modality) take place. On the oxygen electrode, oxygen reduction (SOFC modality) or the oxidation of oxide ions to form oxygen gas (SOEC modality) takes place. State-of-the-art materials for rSOCs are those used for SOFCs. The most common fuel electrodes are made of a mixture of nickel, which serves as the electronic conductor, and yttria-stabilized zirconia (YSZ), a ceramic material characterized by high conductivity to oxygen ions at elevated temperature. The most popular oxygen electrode materials are lanthanum strontium cobalt ferrite (LSCF) and lanthanum strontium cobaltite (LSC), perovskite materials able to catalyze the oxygen reduction and oxide ion oxidation reactions. The electrolyte is a solid-state layer placed between the two electrodes. It is an electrical insulator, impermeable to gas flow but permeable to the flow of oxygen ions. Hence, the main properties required of this component are high ionic conductivity and low electronic conductivity. When the rSOC is operated in SOFC mode, oxygen ions flow from the oxygen electrode to the fuel electrode, where the fuel oxidation occurs. In SOEC mode, the reactants are reduced at the fuel electrode with the production of oxygen ions, which flow towards the oxygen electrode. The most widespread material for electrolytes is YSZ. The interconnects are usually made of metallic materials. They provide or collect the electrons involved in the electrochemical reactions. In addition, they are shaped internally with gas channels to distribute the reactants over the cell surface. Polarization curve. 
The most common tool to characterize the performance of a reversible solid oxide cell is the polarization curve. In this chart, the current density is related to the operating voltage of the cell. The usual convention is positive current density for fuel cell operation and negative current density for electrolysis operation. When the rSOC electrical circuit is not closed and no current is extracted from or supplied to the cell, the operating voltage is the so-called open circuit voltage (OCV). If the gas compositions at the fuel electrode and the oxygen electrode are the same for both modalities, the polarization curves for the SOEC mode and the SOFC mode have the same OCV. When some current density is extracted from or supplied to the cell, the operating voltage starts to diverge from the OCV. This phenomenon is due to the polarization losses, which depend on three main phenomena: the activation losses related to the kinetics of the electrochemical reactions, the ohmic losses due to the ionic and electronic resistances of the cell, and the concentration losses caused by mass-transport limitations of the reactants. The sum of the polarization losses is called the overpotential. Besides the open circuit voltage, another fundamental theoretical voltage can be defined. The "thermoneutral voltage formula_0" depends on the enthalpy of the overall reaction taking place in the rSOC and the number of charges that are transferred within the electrochemical reactions. Its relationship with the operating voltage gives information about the heat demand or generation inside the cell. formula_1 During the electrolysis operation: if formula_2, the operation is globally endothermic and heat must be supplied to the cell, while if formula_3, the operation is globally exothermic; when the operating voltage equals formula_0, the heat generated by the internal losses exactly balances the heat absorbed by the reaction. The fuel cell operation, instead, is always exothermic. Chemistry. Various chemistries can be considered when dealing with reversible solid oxide cells, which in turn can influence their operating conditions and overall efficiency. Hydrogen. When hydrogen and steam are considered as reactants, the overall reaction takes this form: &lt;chem&gt;H2 + 1/2 O2 &lt;=&gt; H2O&lt;/chem&gt; where the forward reaction occurs during SOFC mode, and the backward reaction during SOEC mode. On the fuel electrode, hydrogen oxidation (forward reaction) takes place in SOFC mode and water reduction (backward reaction) takes place in SOEC mode: &lt;chem&gt;H2 + O^2- &lt;=&gt; H2O + 2e-&lt;/chem&gt; On the oxygen electrode, oxygen reduction (forward reaction) occurs in SOFC mode and oxide ion oxidation (backward reaction) occurs in SOEC mode: &lt;chem&gt;1/2 O2 + 2e- &lt;=&gt; O^2-&lt;/chem&gt; The thermoneutral voltage for steam electrolysis is equal to 1.29 V. Carbonaceous reactants. Unlike low-temperature electrochemical technologies, rSOCs can also process carbon-containing species with reduced risk of catalyst poisoning. Methane can be internally reformed on the Ni particles to produce hydrogen, similarly to what happens in steam reforming reactors. Subsequently, the produced hydrogen can undergo electro-oxidation. Moreover, when working in SOEC modality, water and carbon dioxide can be co-electrolyzed to generate hydrogen and carbon monoxide, forming syngas mixtures of various compositions. The reactions taking place on the oxygen electrode are the same as those considered for the hydrogen/steam case. Although characterized by much slower kinetics than the reactions involving hydrogen and steam, the direct electro-oxidation of carbon monoxide (forward reaction) or the direct electro-reduction of carbon dioxide (backward reaction) can be considered as well: &lt;chem&gt;CO + 1/2 O2 &lt;=&gt; CO2&lt;/chem&gt; The thermoneutral voltage of the &lt;chem&gt;CO2&lt;/chem&gt; electrolysis is equal to 1.48 V. 
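As a quick illustration of the definition of formula_0, the snippet below evaluates the thermoneutral voltage from the reaction enthalpy. The enthalpy values are common textbook approximations assumed here for illustration, so the results come out close to, but not exactly at, the 1.29 V and 1.48 V quoted above.

# Thermoneutral voltage V_TN = delta_H / (z * F) for the overall reactions above.
F = 96485.0   # Faraday constant, C/mol
z = 2         # electrons transferred per molecule of fuel

def thermoneutral_voltage(delta_h_j_per_mol):
    return delta_h_j_per_mol / (z * F)

print(thermoneutral_voltage(249e3))   # steam electrolysis (~249 kJ/mol): about 1.29 V
print(thermoneutral_voltage(283e3))   # CO2 electrolysis (~283 kJ/mol): about 1.47 V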
One useful way to depict the cycling between the SOFC and SOEC modes of rSOC operation with carbonaceous reactants is the C-H-O ternary diagram. Each point in the diagram represents a gas mixture with a different relative content of carbon, hydrogen and oxygen atoms. When dealing with the operation of reversible solid oxide cells, three distinct regions can be distinguished in the graph. For different operating conditions (i.e., different temperature and pressure), distinct boundary lines between these regions can be drawn. The three regions are the carbon deposition region, the fully oxidized region, and the operating region lying between them. In the operating region, the fuel mixture and the exhaust mixture can be depicted. These two points are connected by a line which runs through points characterized by a constant H/C ratio. In fact, during the rSOC operation in both modalities, the gases at the fuel electrode exchange only oxygen atoms with the oxygen electrode, while hydrogen and carbon are confined inside the fuel electrode. During the SOFC operation, the composition of the gas in the fuel electrode moves towards the boundary line of the fully oxidized region, increasing its oxygen content. During SOEC operation, on the other hand, the gas mixture evolves away from the fully oxidized region towards the carbon deposition region, while reducing its oxygen content. Ammonia. An alternative and promising chemistry for rSOCs is the one involving ammonia conversion to hydrogen and nitrogen. Ammonia has great potential as a hydrogen carrier, due to its higher volumetric energy density with respect to hydrogen itself, and it can be directly fed to SOFCs. It has been demonstrated that ammonia-fed SOFCs operate through successive ammonia decomposition and hydrogen oxidation: &lt;chem&gt;2NH3 -&gt; N2 + 3H2&lt;/chem&gt; &lt;chem&gt;H2 + O^2- -&gt; H2O + 2e-&lt;/chem&gt; Ammonia decomposition has been demonstrated to be slightly more efficient than simple hydrogen oxidation, confirming the great potential of ammonia as a fuel as well as an energy carrier. Unfortunately, ammonia cannot be directly synthesized on the fuel electrode of an rSOC, because the equilibrium reaction &lt;chem&gt; N2 + 3H2 &lt;=&gt; 2NH3 &lt;/chem&gt; is completely shifted towards the left at working temperatures above 600°C. For this reason, for clean ammonia production, hydrogen production via electrolysis must be coupled with nitrogen production from air by hydrogen oxidation and subsequent water separation. rSOC systems for energy storage. Reversible solid oxide cells are receiving increased attention as energy storage solutions on the weekly or monthly scale. Other technologies for large-scale electrical storage, such as pumped-storage hydroelectricity and compressed air energy storage, are characterized by geographical limitations. On the other hand, Li-ion batteries suffer from limited discharge capabilities. In this regard, hydrogen storage is a promising alternative, since the produced fuel can be compressed and stored for months. Among all hydrogen technologies, rSOCs are particularly strong candidates for producing hydrogen and converting it back into electricity. Due to their high operating temperature, they are characterized by higher efficiency compared to technologies like PEM fuel cells or PEM electrolyzers. Moreover, the possibility of operating both the fuel oxidation and the electrolysis on the same device is beneficial to the capacity factor of the system, helping to reduce its specific investment cost. Roundtrip efficiency. 
When dealing with rSOCs, the most important parameter to consider is the "roundtrip efficiency", which is a measure of the efficiency of the system considering both the charge (SOEC) and discharge (SOFC) processes. The roundtrip efficiency of a single cell can be defined as: formula_4 where formula_5 is the charge supplied or consumed during the reactions, and formula_6 is the operating voltage. If no current or reactant leakage is assumed, the charges exchanged during the reactions can be taken as equal. Then, the roundtrip efficiency can be written as: formula_7 To maximize the roundtrip efficiency, the two operating voltages must be as close as possible. This condition can be achieved by operating the rSOC at low current densities in both modalities. In SOFC mode this is easily achievable, while in SOEC mode too low a voltage may lead to endothermic operation. If the operating voltage in SOEC mode is lower than the thermoneutral voltage, additional heat sources at high temperature are needed to sustain the reaction. These could come from industrial waste heat or from nuclear reactors. If such sources are not easily accessible, though, electrical heating is necessary. This can be supplied by external heaters or by operating the cell at a voltage higher than the thermoneutral one. Both solutions, though, would inevitably lower the roundtrip efficiency of the rSOC. For this reason, in reversible operation, the thermoneutral voltage poses significant limitations on achieving high roundtrip efficiencies. On the other hand, the thermoneutral voltage is greatly affected by the reaction chemistry. It has been demonstrated that increasing the yield of methane in the electrolysis operation can substantially decrease the thermoneutral voltage and the heat demand of the reaction. For conventional electrolyzers (operating at atmospheric pressure and 750°C), the methane content in the products is very low. It can be increased effectively by lowering the operating temperature to 600°C and increasing the operating pressure up to 10 bar. For example, the thermoneutral voltage is equal to 1.27 V at 750°C and 1 bar, while it becomes equal to 1.07 V at 600°C and 10 bar. Under these conditions, the rSOC can even be operated in exothermic mode at reduced voltages, making it possible to produce additional heat at high temperature. This result becomes very helpful in the design of high-efficiency rSOC systems for energy storage purposes. System configurations. Single reversible solid oxide cells can be arranged in series to form stacks. Single stacks can then be arranged in modules to reach power capabilities in the order of kilowatts or megawatts. One of the most challenging aspects in designing large rSOC systems for energy storage purposes is the "thermal integration". When the rSOC is operated in electrolysis mode, thermal power is needed for the operation of the system. Thermal power must be provided at two different temperature levels. Heat is needed for water evaporation, and additional heat at high temperature may be needed if the SOEC modality is endothermic. The latter requirement can be avoided if the rSOC is operated with an exothermic reaction in SOEC modality, with a negative effect on the roundtrip efficiency. On the other hand, when the rSOC is operated in fuel cell mode, the reaction is characterized by high exothermicity. A number of works in the scientific literature have proposed the exploitation of a Thermal energy storage (TES) to ease the thermal integration of the system. 
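To make the voltage-ratio definition of the cell roundtrip efficiency concrete, the sketch below uses a toy linear polarization model with a single area-specific resistance; the OCV and ASR numbers are assumptions for illustration, not values from the cited literature.

# Cell roundtrip efficiency eta_RT = V_FC / V_EC with a linear polarization model:
# V = OCV - ASR * j in fuel cell mode and V = OCV + ASR * j in electrolysis mode.
ocv = 0.95   # open circuit voltage, V (assumed)
asr = 0.30   # area-specific resistance, ohm*cm^2 (assumed)

def roundtrip_efficiency(j):
    """Roundtrip efficiency when both modes run at current density j (A/cm^2)."""
    v_fc = ocv - asr * j
    v_ec = ocv + asr * j
    return v_fc / v_ec

for j in (0.1, 0.3, 0.5):
    print(f"j = {j:.1f} A/cm^2  ->  eta_RT = {roundtrip_efficiency(j):.2f}")
# Lower current densities keep both voltages close to the OCV and raise eta_RT.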
Excess heat from the SOFC operation can be recovered and stored in a TES, and later used for the SOEC operation. Thermal energy storage types and heat transfer fluids that have been considered for this purpose are those used for Concentrated solar power (CSP) technologies. Diathermic oil can be used to store heat at relatively low temperature (for instance, 180°C) and exploited for water evaporation. Alternatively, phase-change materials characterized by high fusion points can be used to store heat at high temperature and enable the endothermic operation in the electrolysis mode. In this case, rSOCs usually operate at different temperature levels in the two modalities (for example, 850°C in SOFC mode and 800°C in SOEC mode). If carbonaceous chemistries are employed, the beneficial effect of methane synthesis inside the cell can be exploited to reduce the heat demand of the electrolysis mode. In this regard, systems operating at high pressure and lower temperature (20 bar and 650°C) have been proposed to reduce or even eliminate the thermal power requirement of the rSOC system. Alternatively, the production of methane can be favored in external reactors. The "methanation" reaction is exothermic and favored at low temperature. &lt;chem&gt; CO + 3H2 &lt;=&gt; CH4 + H2O &lt;/chem&gt; formula_8. The syngas produced by the co-electrolysis can undergo a further reaction in one or multiple methanation reactors to produce methane and generate low-temperature heat for water evaporation. In addition, the formation of methane in such systems may reduce the size of the tanks used for storing the fuels. In fact, methane is characterized by a higher volumetric energy density than hydrogen in the gaseous form. When computing the roundtrip efficiencies of rSOC systems, the definition must take into account the net electric consumption (or additional electric production) of the other components inside the system. The set of these components is referred to as the "balance of plant (BOP)", and may comprise pumps, compressors, expanders or fans needed for fluid circulation and processing inside the system. Therefore, the system roundtrip efficiency can be defined as: formula_9 where formula_10 and formula_11 are the electrical power produced by the rSOC in fuel cell mode and consumed in electrolysis mode, respectively, and formula_12 and formula_13 are the net electrical contributions of the balance of plant in the two modes. The roundtrip efficiencies achievable with rSOC systems operating with steam and hydrogen can reach values in the order of 60%. On the other hand, systems exploiting the beneficial effects of methane formation, either inside the rSOC or in external reactors, can reach roundtrip efficiencies in the order of 70% and beyond. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " V_{TN} " }, { "math_id": 1, "text": " V_{TN} = \\frac{\\Delta H^0}{zF} " }, { "math_id": 2, "text": " V_{SOEC} < V_{TN} " }, { "math_id": 3, "text": " V_{SOEC} > V_{TN} " }, { "math_id": 4, "text": " \\eta_{RT,cell} = \\frac{Q_{FC}V_{FC}}{Q_{EC}V_{EC}} " }, { "math_id": 5, "text": "Q" }, { "math_id": 6, "text": "V" }, { "math_id": 7, "text": " \\eta_{RT,cell} = \\frac{V_{FC}}{V_{EC}} " }, { "math_id": 8, "text": "\\qquad \\Delta H^0=-206\\, \\mathrm{kJ/mol}" }, { "math_id": 9, "text": " \\eta_{RT,system} = \\frac{P_{el,FC}+P_{BOP,FC}}{P_{el,EC}+P_{BOP,EC}} " }, { "math_id": 10, "text": "P _{el,FC}" }, { "math_id": 11, "text": "P _{el,EC}" }, { "math_id": 12, "text": "P _{BOP,FC}" }, { "math_id": 13, "text": "P _{BOP,EC}" } ]
https://en.wikipedia.org/wiki?curid=64583610
64592988
Leonard Gross
American mathematician (born 1931) Leonard Gross (born February 24, 1931) is an American mathematician and Professor Emeritus of Mathematics at Cornell University. Gross has made fundamental contributions to mathematics and the mathematically rigorous study of quantum field theory. Education and career. Leonard Gross graduated from James Madison High School in December 1948. He was awarded an Emil Schweinberg scholarship that enabled him to attend college. He studied at City College of New York for one term and then studied electrical engineering at Cooper Union for two years. He then transferred to the University of Chicago, where he obtained a master's degree in physics and mathematics (1954) and a Ph.D. in mathematics (1958). Gross taught at Yale University and was awarded a National Science Foundation Fellowship in 1959. He joined the faculty of the mathematics department of Cornell University in 1960. Gross was a member of the Institute for Advanced Study in 1959 and in 1983 and has held other visiting positions. He has supervised 35 doctoral students. Gross serves on the editorial boards of the "Journal of Functional Analysis" and "Potential Analysis". Research. Gross's scientific work has centered on the mathematically rigorous study of quantum field theories and related mathematical theories such as statistical mechanics. His early works developed the foundations of integration on infinite-dimensional spaces and analytic tools needed for quantum fields corresponding to classical fields described by linear equations. His later works have been devoted to Yang–Mills theory and related mathematical theories such as analysis on loop groups. Abstract Wiener spaces. Gross's earliest mathematical works were on integration and harmonic analysis on infinite-dimensional spaces. These ideas, and especially the need for a structure within which potential theory in infinite dimensions could be studied, culminated in Gross's construction of abstract Wiener spaces in 1965. This structure has since become a standard framework for infinite-dimensional analysis. Logarithmic Sobolev inequalities. Gross was one of the initiators of the study of logarithmic Sobolev inequalities, which he discovered in 1967 for his work in constructive quantum field theory and later published in two foundational papers that established these inequalities for the bosonic and fermionic cases. The inequalities were named by Gross, who established them in dimension-independent form, a key feature especially in the context of applications to infinite-dimensional settings such as quantum field theories. Gross's logarithmic Sobolev inequalities proved to be of great significance well beyond their original intended scope of application, for example in the proof of the Poincaré conjecture by Grigori Perelman. Analysis on loop groups and Lie groups. Gross has done important work in the study of loop groups, for example proving the Gross ergodicity theorem for the pinned Wiener measure under the action of the smooth loop group. This result led to the construction of a Fock-space decomposition for the formula_0-space of functions on a compact Lie group with respect to a heat kernel measure. This decomposition has since led to many other developments in the study of harmonic analysis on Lie groups in which the Gaussian measure on Euclidean space is replaced by a heat kernel measure. Quantum Yang–Mills theory. Yang–Mills theory has been another focus of Gross's works. 
Since 2013, Gross and Nelia Charalambous have made a deep study of the Yang–Mills heat equation and related questions. Honors. Gross was a Guggenheim Fellow in 1974–1975. He was elected to the American Academy of Arts and Sciences in 2004 and named a Fellow of the American Mathematical Society in the inaugural class of 2013. He was recipient of the Humboldt Prize in 1996. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L^2" } ]
https://en.wikipedia.org/wiki?curid=64592988
64593287
Parbelos
Plane region bounded by three parabolas The parbelos is a figure similar to the arbelos, but instead of three half circles it uses three parabola segments. More precisely, the parbelos consists of three parabola segments that have a height equal to one fourth of the width of their bases. The two smaller parabola segments are placed next to each other with their bases on a common line, and the largest parabola segment is placed over the two smaller ones such that its width is the sum of the widths of the smaller ones (see graphic). The parbelos has a number of properties which are somewhat similar or even identical to the properties of the arbelos. For instance, the following two properties are identical to those of the arbelos: The quadrilateral formula_0 formed by the inner cusp formula_1 and the midpoints formula_2 of the three parabola arcs is a parallelogram, the area of which relates to the area of the parbelos as follows: formula_3 The four tangents at the three cusps of the parbelos intersect in four points, which form a rectangle called the tangent rectangle. The circumcircle of the tangent rectangle intersects the base side of the outer parabola segment in its midpoint, which is the focus of the outer parabola. One diagonal of the tangent rectangle lies on a tangent to the outer parabola, and its common point with it is identical to its point of intersection with the perpendicular to the base at the inner cusp. For the area of the tangent rectangle the following equation holds: formula_4
[ { "math_id": 0, "text": "BM_2MM_1" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "M, M_1, M_2" }, { "math_id": 3, "text": "F_{\\text{parallelogram}}=\\frac{3}{4}F_{\\text{parbelos}}" }, { "math_id": 4, "text": "F_{\\text{rectangle}}=\\frac{3}{2}F_{\\text{parbelos}}" } ]
https://en.wikipedia.org/wiki?curid=64593287
64593665
Plane–plane intersection
In analytic geometry, the intersection of two planes in three-dimensional space is a line. Formulation. The line of intersection between two planes formula_0 and formula_1 where formula_2 are normalized is given by formula_3 where formula_4 formula_5 Derivation. This is found by noticing that the line must be perpendicular to both plane normals, and so parallel to their cross product formula_6 (this cross product is zero if and only if the planes are parallel, and are therefore non-intersecting or entirely coincident). The remainder of the expression is arrived at by finding an arbitrary point on the line. To do so, consider that any point in space may be written as formula_7, since formula_8 is a basis. We wish to find a point which is on both planes (i.e. on their intersection), so insert this equation into each of the equations of the planes to get two simultaneous equations which can be solved for formula_9 and formula_10. If we further assume that formula_11 and formula_12 are orthonormal then the closest point on the line of intersection to the origin is formula_13. If that is not the case, then a more complex procedure must be used. Dihedral angle. Given two intersecting planes described by formula_14 and formula_15, the dihedral angle between them is defined to be the angle formula_16 between their normal directions: formula_17 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
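A small NumPy sketch of the formulation above; it assumes the two normals are unit vectors and the planes are not parallel. The example planes are arbitrary.

import numpy as np

def plane_intersection(n1, h1, n2, h2):
    """Line of intersection of n1.r = h1 and n2.r = h2 (n1, n2 unit normals).
    Returns a point on the line and the line's direction vector."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    d = np.dot(n1, n2)
    denom = 1.0 - d**2                 # vanishes only for parallel planes
    c1 = (h1 - h2 * d) / denom
    c2 = (h2 - h1 * d) / denom
    point = c1 * n1 + c2 * n2          # lies on both planes
    direction = np.cross(n1, n2)       # the line is parallel to n1 x n2
    return point, direction

# Example: the planes z = 1 and x = 2 meet in the line {(2, t, 1) : t real}.
p, v = plane_intersection([0, 0, 1], 1.0, [1, 0, 0], 2.0)
print(p, v)   # p = [2. 0. 1.], v = [0. 1. 0.]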
[ { "math_id": 0, "text": "\\Pi_1 : \\boldsymbol {n}_1 \\cdot \\boldsymbol r = h_1" }, { "math_id": 1, "text": "\\Pi_2 : \\boldsymbol {n}_2 \\cdot \\boldsymbol r = h_2" }, { "math_id": 2, "text": "\\boldsymbol {n}_i" }, { "math_id": 3, "text": " \\boldsymbol {r} = (c_1 \\boldsymbol {n}_1 + c_2 \\boldsymbol {n}_2) + \\lambda (\\boldsymbol {n}_1 \\times \\boldsymbol {n}_2) " }, { "math_id": 4, "text": " c_1 = \\frac{ h_1 - h_2(\\boldsymbol {n}_1 \\cdot \\boldsymbol {n}_2) }{ 1 - (\\boldsymbol {n}_1 \\cdot \\boldsymbol {n}_2)^2 } " }, { "math_id": 5, "text": " c_2 = \\frac{ h_2 - h_1(\\boldsymbol {n}_1 \\cdot \\boldsymbol {n}_2) }{ 1 - (\\boldsymbol {n}_1 \\cdot \\boldsymbol {n}_2)^2 }." }, { "math_id": 6, "text": "\\boldsymbol {n}_1 \\times \\boldsymbol {n}_2" }, { "math_id": 7, "text": "\\boldsymbol r = c_1\\boldsymbol {n}_1 + c_2\\boldsymbol {n}_2 + \\lambda(\\boldsymbol {n}_1 \\times \\boldsymbol {n}_2)" }, { "math_id": 8, "text": "\\{ \\boldsymbol {n}_1, \\boldsymbol {n}_2, (\\boldsymbol {n}_1 \\times \\boldsymbol {n}_2) \\}" }, { "math_id": 9, "text": "c_1" }, { "math_id": 10, "text": "c_2" }, { "math_id": 11, "text": "\\boldsymbol {n}_1" }, { "math_id": 12, "text": "\\boldsymbol {n}_2" }, { "math_id": 13, "text": "\\boldsymbol r_0 = h_1\\boldsymbol {n}_1 + h_2\\boldsymbol {n}_2" }, { "math_id": 14, "text": "\\Pi_1 : a_1 x + b_1 y + c_1 z + d_1 = 0" }, { "math_id": 15, "text": "\\Pi_2 : a_2 x + b_2 y + c_2 z + d_2 = 0" }, { "math_id": 16, "text": "\\alpha" }, { "math_id": 17, "text": "\\cos\\alpha = \\frac{\\hat n_1\\cdot \\hat n_2}{|\\hat n_1||\\hat n_2|} = \\frac{a_1 a_2 + b_1 b_2 + c_1 c_2}{\\sqrt{a_1^2+b_1^2+c_1^2}\\sqrt{a_2^2+b_2^2+c_2^2}}. " } ]
https://en.wikipedia.org/wiki?curid=64593665
64597
Ludwig von Bertalanffy
Austrian biologist and systems theorist (1901–1972) Karl Ludwig von Bertalanffy (19 September 1901 – 12 June 1972) was an Austrian biologist known as one of the founders of general systems theory (GST). This is an interdisciplinary practice that describes systems with interacting components, applicable to biology, cybernetics and other fields. Bertalanffy proposed that the classical laws of thermodynamics might be applied to closed systems, but not necessarily to "open systems" such as living things. His mathematical model of an organism's growth over time, published in 1934, is still in use today. Bertalanffy grew up in Austria and subsequently worked in Vienna, London, Canada, and the United States. Biography. Ludwig von Bertalanffy was born and grew up in the little village of Atzgersdorf (now Liesing) near Vienna. Ludwig's mother Caroline Agnes Vogel was seventeen when she married the thirty-four-year-old Gustav. Ludwig von Bertalanffy grew up as an only child educated at home by private tutors until he was ten and his parents divorced, both remarried outside the Catholic Church in civil ceremonies. When he arrived at his Gymnasium (a form of grammar school) he was already well habituated in learning by reading, and he continued to study on his own. His neighbour, the famous biologist Paul Kammerer, became a mentor and an example to the young Ludwig. The Bertalanffy family had roots in the 16th century nobility of Hungary which included several scholars and court officials. His grandfather Charles Joseph von Bertalanffy (1833–1912) had settled in Austria and was a state theatre director in Klagenfurt, Graz and Vienna, which were important sites in imperial Austria. Ludwig's father Gustav von Bertalanffy (1861–1919) was a prominent railway administrator. On his mother's side Ludwig's grandfather Joseph Vogel was an imperial counsellor and a wealthy Vienna publisher. In 1918, Bertalanffy started his studies at the university level in philosophy and art history, first at the University of Innsbruck and then at the University of Vienna. Ultimately, Bertalanffy had to make a choice between studying philosophy of science and biology; he chose the latter because, according to him, one could always become a philosopher later, but not a biologist. In 1926 he finished his PhD thesis ("Fechner und das Problem der Integration höherer Ordnung", translated title: "Fechner and the Problem of Higher-Order Integration") on the psychologist and philosopher Gustav Theodor Fechner. For the next six years he concentrated on a project of "theoretical biology" which focused on the philosophy of biology. He received his habilitation in 1934 in "theoretical biology". Bertalanffy was appointed Privatdozent at the University of Vienna in 1934. The post yielded little income, and Bertalanffy faced continuing financial difficulties. He applied for promotion to the status of associate professor, but funding from the Rockefeller Foundation enabled him to make a trip to Chicago in 1937 to work with Nicolas Rashevsky. He was also able to visit the Marine Biological Laboratory in Massachusetts. Bertalanffy was still in the US when he heard of the Anschluss in March 1938. However, his attempts to remain in the US failed, and he returned to Vienna in October of that year. Within a month of his return, he joined the Nazi Party, which facilitated his promotion to professor at the University of Vienna in 1940. 
During the Second World War, he linked his "organismic" philosophy of biology to the dominant Nazi ideology, principally that of the Führerprinzip. Following the defeat of Nazism, Bertalanffy found denazification problematic and left Vienna in 1948. He moved to the University of London (1948–49); the Université de Montréal (1949); the University of Ottawa (1950–54); the University of Southern California (1955–58); the Menninger Foundation (1958–60); the University of Alberta (1961–68); and the State University of New York at Buffalo (SUNY) (1969–72). In 1972, he died from a heart attack. Family life. Bertalanffy met his wife, Maria, in April 1924 in the Austrian Alps. They were hardly ever apart for the next forty-eight years. She wanted to finish studying but never did, instead devoting her life to Bertalanffy's career. Later, in Canada, she would work both for him and with him in his career, and after his death she compiled two of Bertalanffy's last works. They had one child, a son who followed in his father's footsteps by making his profession in the field of cancer research. Work. Today, Bertalanffy is considered to be a founder and one of the principal authors of the interdisciplinary school of thought known as general systems theory, which was pioneered by Alexander Bogdanov. According to Weckowicz (1989), he "occupies an important position in the intellectual history of the twentieth century. His contributions went beyond biology, and extended into cybernetics, education, history, philosophy, psychiatry, psychology and sociology. Some of his admirers even believe that this theory will one day provide a conceptual framework for all these disciplines". Individual growth model. The individual growth model published by Ludwig von Bertalanffy in 1934 is widely used in biological models and exists in a number of permutations. In its simplest version, the so-called Bertalanffy growth equation is expressed as a differential equation of length ("L") over time ("t"): formula_0 where formula_1 is the Bertalanffy growth rate and formula_2 the ultimate length of the individual. This model was proposed earlier by August Friedrich Robert Pütter (1879–1929), writing in 1920. The dynamic energy budget theory provides a mechanistic explanation of this model in the case of isomorphs that experience constant food availability. The inverse of the Bertalanffy growth rate appears to depend linearly on the ultimate length, when different food levels are compared. The intercept relates to the maintenance costs, the slope to the rate at which reserve is mobilized for use by metabolism. The ultimate length equals the maximum length at high food availabilities. Bertalanffy equation. The Bertalanffy equation is the equation that describes the growth of a biological organism. The equation was proposed by Ludwig von Bertalanffy in 1969. formula_3 Here W is the organism's weight, t is time, S is the organism's surface area, and V is the organism's physical volume. The coefficients formula_4 and formula_5 are (by Bertalanffy's definition) the "coefficient of anabolism" and the "coefficient of catabolism", respectively. The solution of the Bertalanffy equation is the function: formula_6 where formula_7 and formula_8 are certain constants. Bertalanffy could not explain the meaning of the parameters formula_9 (the coefficient of anabolism) and formula_5 (the coefficient of catabolism) in his works, and that drew fair criticism from biologists. 
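A minimal numerical sketch of the growth equation above, comparing the closed-form solution of formula_0 with a simple Euler integration; the parameter values are arbitrary illustrations, not data from Bertalanffy's or Pütter's work.

import math

L_inf = 50.0   # ultimate length (assumed)
r_B = 0.3      # Bertalanffy growth rate, 1/time (assumed)
L0 = 5.0       # length at t = 0 (assumed)

def length_exact(t):
    """Closed-form solution of L'(t) = r_B * (L_inf - L(t)) with L(0) = L0."""
    return L_inf - (L_inf - L0) * math.exp(-r_B * t)

def length_euler(t, steps=1000):
    """Explicit Euler integration of the same differential equation."""
    L, dt = L0, t / steps
    for _ in range(steps):
        L += r_B * (L_inf - L) * dt
    return L

for t in (1.0, 5.0, 10.0):
    print(t, round(length_exact(t), 3), round(length_euler(t), 3))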
But the Bertalanffy equation is a special case of the Tetearing equation, which is a more general equation of the growth of a biological organism. The Tetearing equation determines the physical meaning of the coefficients formula_9 and formula_5. Bertalanffy module. To honour Bertalanffy, ecological systems engineer and scientist Howard T. Odum named the storage symbol of his General Systems Language the Bertalanffy module (see image right). General system theory. In the late 1920s, the Soviet philosopher Alexander Bogdanov pioneered "Tektology", which Johann Plenge referred to as the theory of "general systems". However, in the West, Bertalanffy is widely recognized for the development of a theory known as general system theory (GST). The theory attempted to provide alternatives to conventional models of organization. GST defined new foundations and developments as a generalized theory of systems with applications to numerous areas of study, emphasizing holism over reductionism, organism over mechanism. Foundational to GST are the inter-relationships between elements which together form the whole. Publications. The first "articles" from Bertalanffy on general systems theory: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L'(t) = r_B \\left( L_\\infty - L(t) \\right)" }, { "math_id": 1, "text": "r_B" }, { "math_id": 2, "text": "L_\\infty" }, { "math_id": 3, "text": "\n \\frac{dW}{dt}= \\eta S- k V\n" }, { "math_id": 4, "text": "\\eta" }, { "math_id": 5, "text": " k " }, { "math_id": 6, "text": "\nW(t)=\\Big(\\eta\\,c_1 -c_2\\,e^{-\\tfrac{k}{3}t}\\Big)^3\\,, \n" }, { "math_id": 7, "text": "c_1" }, { "math_id": 8, "text": "c_2" }, { "math_id": 9, "text": "\\eta " } ]
https://en.wikipedia.org/wiki?curid=64597
64602455
Logarithmic Sobolev inequalities
Class of inequalities In mathematics, logarithmic Sobolev inequalities are a class of inequalities involving the norm of a function "f", its logarithm, and its gradient formula_0. These inequalities were discovered and named by Leonard Gross, who established them in dimension-independent form, in the context of constructive quantum field theory. Similar results were discovered by other mathematicians before and many variations on such inequalities are known. Gross proved the inequality: formula_1 where formula_2 is the formula_3-norm of formula_4, with formula_5 being standard Gaussian measure on formula_6 Unlike classical Sobolev inequalities, Gross's log-Sobolev inequality does not have any dimension-dependent constant, which makes it applicable in the infinite-dimensional limit. In particular, a probability measure formula_7 on formula_8 is said to satisfy the log-Sobolev inequality with constant formula_9 if for any smooth function "f" formula_10 where formula_11 is the entropy functional. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
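A quick numerical sanity check of Gross's inequality in one dimension, using Gauss–Hermite quadrature for the standard Gaussian measure; the test function f(x) = exp(sin x) is an arbitrary smooth, strictly positive choice.

import numpy as np

# Check  E[f^2 log|f|] <= E[|f'|^2] + ||f||_2^2 log||f||_2  for nu = N(0, 1) in 1-D.
x, w = np.polynomial.hermite_e.hermegauss(80)   # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)                    # normalize to the standard Gaussian measure

f = np.exp(np.sin(x))                           # smooth, strictly positive test function
df = np.cos(x) * f                              # its derivative

lhs = np.sum(w * f**2 * np.log(f))
l2 = np.sqrt(np.sum(w * f**2))                  # ||f||_2 with respect to nu
rhs = np.sum(w * df**2) + l2**2 * np.log(l2)
print(lhs, rhs, lhs <= rhs)                     # the inequality holds for this f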
[ { "math_id": 0, "text": "\\nabla f " }, { "math_id": 1, "text": "\n\\int_{\\mathbb{R}^n}\\big|f(x)\\big|^2 \\log\\big|f(x)\\big| \\,d\\nu(x) \\leq \\int_{\\mathbb{R}^n}\\big|\\nabla f(x)\\big|^2 \\,d\\nu(x) +\\|f\\|_2^2\\log \\|f\\|_2,\n" }, { "math_id": 2, "text": " \\|f\\|_2" }, { "math_id": 3, "text": " L^2(\\nu)" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\nu" }, { "math_id": 6, "text": " \\mathbb{R}^n. " }, { "math_id": 7, "text": "\\mu" }, { "math_id": 8, "text": "\\mathbb{R}^n" }, { "math_id": 9, "text": "C>0" }, { "math_id": 10, "text": "\n\\operatorname{Ent}_\\mu(f^2) \\le C \\int_{\\mathbb{R}^n} \\big|\\nabla f(x)\\big|^2\\,d\\mu(x),\n" }, { "math_id": 11, "text": "\\operatorname{Ent}_\\mu(f^2) = \\int_{\\mathbb{R}^n} f^2\\log\\frac{f^2}{\\int_{\\mathbb{R}^n}f^2\\,d\\mu(x)}\\,d\\mu(x)" } ]
https://en.wikipedia.org/wiki?curid=64602455
64604101
Physical properties of soil
The physical properties of soil, in order of decreasing importance for ecosystem services such as crop production, are texture, structure, bulk density, porosity, consistency, temperature, colour and resistivity. Soil texture is determined by the relative proportion of the three kinds of soil mineral particles, called soil separates: sand, silt, and clay. At the next larger scale, soil structures called peds or more commonly "soil aggregates" are created from the soil separates when iron oxides, carbonates, clay, silica and humus, coat particles and cause them to adhere into larger, relatively stable secondary structures. Soil bulk density, when determined at standardized moisture conditions, is an estimate of soil compaction. Soil porosity consists of the void part of the soil volume and is occupied by gases or water. Soil consistency is the ability of soil materials to stick together. Soil temperature and colour are self-defining. Resistivity refers to the resistance to conduction of electric currents and affects the rate of corrosion of metal and concrete structures which are buried in soil. These properties vary through the depth of a soil profile, i.e. through soil horizons. Most of these properties determine the aeration of the soil and the ability of water to infiltrate and to be held within the soil. Texture. The mineral components of soil are sand, silt and clay, and their relative proportions determine a soil's texture. Properties that are influenced by soil texture include porosity, permeability, infiltration, shrink-swell rate, water-holding capacity, and susceptibility to erosion. In the illustrated USDA textural classification triangle, the only soil in which neither sand, silt nor clay predominates is called loam. While even pure sand, silt or clay may be considered a soil, from the perspective of conventional agriculture a loam soil with a small amount of organic material is considered "ideal", inasmuch as fertilizers or manure are currently used to mitigate nutrient losses due to crop yields in the long term. The mineral constituents of a loam soil might be 40% sand, 40% silt and the balance 20% clay by weight. Soil texture affects soil behaviour, in particular, its retention capacity for nutrients (e.g., cation exchange capacity) and water. Sand and silt are the products of physical and chemical weathering of the parent rock; clay, on the other hand, is most often the product of the precipitation of the dissolved parent rock as a secondary mineral, except when derived from the weathering of mica. It is the surface area to volume ratio (specific surface area) of soil particles and the unbalanced ionic electric charges within those that determine their role in the fertility of soil, as measured by its cation exchange capacity. Sand is least active, having the least specific surface area, followed by silt; clay is the most active. Sand's greatest benefit to soil is that it resists compaction and increases soil porosity, although this property stands only for pure sand, not for sand mixed with smaller minerals which fill the voids among sand grains. Silt is mineralogically like sand but with its higher specific surface area it is more chemically and physically active than sand. But it is the clay content of soil, with its very high specific surface area and generally large number of negative charges, that gives a soil its high retention capacity for water and nutrients. 
Clay soils also resist wind and water erosion better than silty and sandy soils, as the particles bond tightly to each other, an effect that is further strengthened by organic matter. Sand is the most stable of the mineral components of soil; it consists of rock fragments, primarily quartz particles, ranging in size from 0.05 mm to 2.0 mm in diameter. Silt ranges in size from 0.002 mm to 0.05 mm. Clay cannot be resolved by optical microscopes as its particles are 0.002 mm or less in diameter and have a thickness of only 10 angstroms (10−10 m). In medium-textured soils, clay is often washed downward through the soil profile (a process called eluviation) and accumulates in the subsoil (a process called illuviation). There is no clear relationship between the size of soil mineral components and their mineralogical nature: sand and silt particles can be calcareous as well as siliceous, while textural clay (less than 0.002 mm) can be made of very fine quartz particles as well as of multi-layered secondary minerals. Soil mineral components belonging to a given textural class may thus share properties linked to their specific surface area (e.g. moisture retention) but not those linked to their chemical composition (e.g. cation exchange capacity). Soil components larger than 2.0 mm are classed as rock and gravel and are removed before determining the percentages of the remaining components and the textural class of the soil, but are included in the name. For example, a sandy loam soil with 20% gravel would be called gravelly sandy loam. When the organic component of a soil is substantial, the soil is called organic soil rather than mineral soil. A soil is called organic if: Structure. The clumping of the soil textural components of sand, silt and clay causes aggregates to form and the further association of those aggregates into larger units creates soil structures called peds (a contraction of the word pedolith). The adhesion of the soil textural components by organic substances, iron oxides, carbonates, clays, and silica, the breakage of those aggregates from expansion-contraction caused by freezing-thawing and wetting-drying cycles, and the build-up of aggregates by soil animals, microbial colonies and root tips shape soil into distinct geometric forms. The peds evolve into units which have various shapes, sizes and degrees of development. A soil clod, however, is not a ped but rather a mass of soil that results from mechanical disturbance of the soil such as cultivation. Soil structure affects aeration, water movement, conduction of heat, plant root growth and resistance to erosion. Water, in turn, has a strong effect on soil structure, directly via the dissolution and precipitation of minerals, the mechanical destruction of aggregates (slaking) and indirectly by promoting plant, animal and microbial growth. Soil structure often gives clues to its texture, organic matter content, biological activity, past soil evolution, human use, and the chemical and mineralogical conditions under which the soil formed. While texture is defined by the mineral component of a soil and is an innate property of the soil that does not change with agricultural activities, soil structure can be improved or destroyed by the choice and timing of farming practices. Soil structural classes: At the largest scale, the forces that shape a soil's structure result from swelling and shrinkage that initially tend to act horizontally, causing vertically oriented prismatic peds. This mechanical process is mainly exemplified in the development of vertisols. 
Clayey soil, due to its differential drying rate with respect to the surface, will induce horizontal cracks, reducing columns to blocky peds. Roots, rodents, worms, and freezing-thawing cycles further break the peds into smaller peds of a more or less spherical shape. At a smaller scale, plant roots extend into voids (macropores) and remove water causing macroporosity to increase and microporosity to decrease, thereby decreasing aggregate size. At the same time, root hairs and fungal hyphae create microscopic tunnels (micropores) that break up peds. At an even smaller scale, soil aggregation continues as bacteria and fungi exude sticky polysaccharides which bind soil into smaller peds. The addition of the raw organic matter that bacteria and fungi feed upon encourages the formation of this desirable soil structure. At the lowest scale, the soil chemistry affects the aggregation or dispersal of soil particles. The clay particles contain polyvalent cations, such as aluminium, which give the faces of clay layers localized negative charges. At the same time, the edges of the clay plates have a slight positive charge, due to the sorption of aluminium from the soil solution to exposed hydroxyl groups, thereby allowing the edges to adhere to the negative charges on the faces of other clay particles or to flocculate (form clumps). On the other hand, when monovalent ions, such as sodium, invade and displace the polyvalent cations (single displacement reaction), they weaken the positive charges on the edges, while the negative surface charges are relatively strengthened. This leaves negative charge on the clay faces that repel other clay, causing the particles to push apart, and by doing so deflocculate clay suspensions. As a result, the clay disperses and settles into voids between peds, causing those to close. In this way the open structure of the soil is destroyed and the soil is made impenetrable to air and water. Such sodic soil (also called haline soil) tends to form columnar peds near the surface. Density. Soil particle density is typically 2.60 to 2.75 grams per cm3 and is usually unchanging for a given soil. Soil particle density is lower for soils with high organic matter content, and is higher for soils with high iron-oxides content. Soil bulk density is equal to the dry mass of the soil divided by the volume of the soil; i.e., it includes air space and organic materials of the soil volume. Thereby soil bulk density is always less than soil particle density and is a good indicator of soil compaction. The soil bulk density of cultivated loam is about 1.1 to 1.4 g/cm3 (for comparison water is 1.0 g/cm3). Contrary to particle density, soil bulk density is highly variable for a given soil, with a strong causal relationship with soil biological activity and management strategies. However, it has been shown that, depending on species and the size of their aggregates (faeces), earthworms may either increase or decrease soil bulk density. A lower bulk density by itself does not indicate suitability for plant growth due to the confounding influence of soil texture and structure. A high bulk density is indicative of either soil compaction or a mixture of soil textural classes in which small particles fill the voids among coarser particles. Hence the positive correlation between the fractal dimension of soil, considered as a porous medium, and its bulk density, that explains the poor hydraulic conductivity of silty clay loam in the absence of a faunal structure. Porosity. 
Pore space is that part of the bulk volume of soil that is not occupied by either mineral or organic matter but is open space occupied by either gases or water. In a productive, medium-textured soil the total pore space is typically about 50% of the soil volume. Pore size varies considerably; the smallest pores (cryptopores; &lt;0.1 μm) hold water too tightly for use by plant roots; plant-available water is held in ultramicropores, micropores and mesopores (0.1–75 μm); and macropores (&gt;75 μm) are generally air-filled when the soil is at field capacity. Soil texture determines total volume of the smallest pores; clay soils have smaller pores, but more total pore space than sands, despite a much lower permeability. Soil structure has a strong influence on the larger pores that affect soil aeration, water infiltration and drainage. Tillage has the short-term benefit of temporarily increasing the number of pores of largest size, but these can be rapidly degraded by the destruction of soil aggregation. The pore size distribution affects the ability of plants and other organisms to access water and oxygen; large, continuous pores allow rapid transmission of air, water and dissolved nutrients through soil, and small pores store water between rainfall or irrigation events. Pore size variation also compartmentalizes the soil pore space such that many microbial and faunal organisms are not in direct competition with one another, which may explain not only the large number of species present, but the fact that functionally redundant organisms (organisms with the same ecological niche) can co-exist within the same soil. Consistency. Consistency is the ability of soil to stick to itself or to other objects (cohesion and adhesion, respectively) and its ability to resist deformation and rupture. It is of approximate use in predicting cultivation problems and the engineering of foundations. Consistency is measured at three moisture conditions: air-dry, moist, and wet. In those conditions the consistency quality depends upon the clay content. In the wet state, the two qualities of stickiness and plasticity are assessed. A soil's resistance to fragmentation and crumbling is assessed in the dry state by rubbing the sample. Its resistance to shearing forces is assessed in the moist state by thumb and finger pressure. Additionally, the cemented consistency depends on cementation by substances other than clay, such as calcium carbonate, silica, oxides and salts; moisture content has little effect on its assessment. The measures of consistency border on subjective compared to other measures such as pH, since they employ the apparent feel of the soil in those states. The terms used to describe the soil consistency in three moisture states and a last not affected by the amount of moisture are as follows: Soil consistency is useful in estimating the ability of soil to support buildings and roads. More precise measures of soil strength are often made prior to construction. Temperature. Soil temperature depends on the ratio of the energy absorbed to that lost. Soil has a mean annual temperature from -10 to 26 °C according to biomes. Soil temperature regulates seed germination, breaking of seed dormancy, plant and root growth and the availability of nutrients. Soil temperature has important seasonal, monthly and daily variations, fluctuations in soil temperature being much lower with increasing soil depth. 
Heavy mulching (a type of soil cover) can slow the warming of soil in summer, and, at the same time, reduce fluctuations in surface temperature. Most often, agricultural activities must adapt to soil temperatures by: Soil temperatures can be raised by drying soils or the use of clear plastic mulches. Organic mulches slow the warming of the soil. There are various factors that affect soil temperature, such as water content, soil color, and relief (slope, orientation, and elevation), and soil cover (shading and insulation), in addition to air temperature. The color of the ground cover and its insulating properties have a strong influence on soil temperature. Whiter soil tends to have a higher albedo than blacker soil cover, which encourages whiter soils to have lower soil temperatures. The specific heat of soil is the energy required to raise the temperature of a unit mass of soil by 1 °C. The specific heat of soil increases as water content increases, since the heat capacity of water is greater than that of dry soil. The specific heat of pure water is ~ 1 calorie per gram, the specific heat of dry soil is ~ 0.2 calories per gram, hence, the specific heat of wet soil is ~ 0.2 to 1 calories per gram (0.8 to 4.2 kJ per kilogram). Also, a tremendous amount of energy (~584 cal/g or 2442 kJ/kg at 25 °C) is required to evaporate water (known as the heat of vaporization). As such, wet soil usually warms more slowly than dry soil – wet surface soil is typically 3 to 6 °C colder than dry surface soil. Soil heat flux refers to the rate at which heat energy moves through the soil in response to a temperature difference between two points in the soil. The heat flux density is the amount of energy that flows through soil per unit area per unit time and has both magnitude and direction. For the simple case of conduction into or out of the soil in the vertical direction, which is most often applicable, the heat flux density is: formula_0 where formula_1 is the heat flux density (SI units: W·m−2), formula_2 is the soil's thermal conductivity (SI units: W·m−1·K−1); the thermal conductivity is sometimes taken as a constant, otherwise an average value of conductivity for the soil condition between the surface and the point at depth is used. formula_3 is the temperature difference (temperature gradient) between the two points in the soil between which the heat flux density is to be calculated (SI units: kelvin, K), and formula_4 is the distance between the two points within the soil, at which the temperatures are measured and between which the heat flux density is being calculated (SI units: meters, m), with x measured positive downward. Heat flux is in the direction opposite the temperature gradient, hence the minus sign. That is to say, if the temperature of the surface is higher than at depth x, the negative sign will result in a positive value for the heat flux q, which is interpreted as the heat being conducted into the soil. Soil temperature is important for the survival and early growth of seedlings. Soil temperatures affect the anatomical and morphological character of root systems. All physical, chemical, and biological processes in soil and roots are affected in particular because of the increased viscosities of water and protoplasm at low temperatures. In general, climates that do not preclude survival and growth of white spruce above ground are sufficiently benign to provide soil temperatures able to maintain white spruce root systems. 
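A small worked example of the conduction formula above; the thermal conductivity and temperatures are made-up illustrative values, not measurements.

def heat_flux_density(k, t_surface, t_depth, depth):
    """q = -k * dT/dx for vertical conduction; k in W/(m*K), depth in m
    (measured positive downward), temperatures in degrees C or K."""
    dT = t_depth - t_surface          # temperature change over the interval
    return -k * dT / depth            # W/m^2; positive means heat conducted into the soil

# Surface at 25 C, 20 C at 0.10 m depth, k = 1.5 W/(m*K) (assumed values):
print(heat_flux_density(1.5, 25.0, 20.0, 0.10))   # +75 W/m^2, heat flowing downward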
In some northwestern parts of the range, white spruce occurs on permafrost sites and although young unlignified roots of conifers may have little resistance to freezing, the root system of containerized white spruce was not affected by exposure to a temperature of 5 to 20 °C. Optimum temperatures for tree root growth range between 10 °C and 25 °C in general and for spruce in particular. In 2-week-old white spruce seedlings that were then grown for 6 weeks in soil at temperatures of 15 °C, 19 °C, 23 °C, 27 °C, and 31 °C, shoot height, shoot dry weight, stem diameter, root penetration, root volume, and root dry weight all reached maxima at 19 °C. However, whereas strong positive relationships between soil temperature (5 °C to 25 °C) and growth have been found in trembling aspen and balsam poplar, white and other spruce species have shown little or no change in growth with increasing soil temperature. Such insensitivity to low soil temperature may be common among a number of western and boreal conifers. Soil temperatures are increasing worldwide under the influence of present-day global climate warming, with opposing views about expected effects on carbon capture and storage and feedback loops to climate change. Most threats concern permafrost thawing and attendant effects on carbon destocking and ecosystem collapse. Colour. Soil colour is often the first impression one has when viewing soil. Striking colours and contrasting patterns are especially noticeable. The Red River of the South carries sediment eroded from extensive reddish soils like Port Silt Loam in Oklahoma. The Yellow River in China carries yellow sediment from eroding loess soils. Mollisols in the Great Plains of North America are darkened and enriched by organic matter. Podsols in boreal forests have highly contrasting layers due to acidity and leaching. In general, color is determined by the organic matter content, drainage conditions, and degree of oxidation. Soil color, while easily discerned, has little use in predicting soil characteristics. It is of use in distinguishing boundaries of horizons within a soil profile, determining the origin of a soil's parent material, as an indication of wetness and waterlogged conditions, and as a qualitative means of measuring organic, iron oxide and clay contents of soils. Color is recorded in the Munsell color system as, for instance, 10YR3/4 "Dusky Red", with 10YR as "hue", 3 as "value" and 4 as "chroma". Munsell color dimensions (hue, value and chroma) can be averaged among samples and treated as quantitative parameters, displaying significant correlations with various soil and vegetation properties. Soil color is primarily influenced by soil mineralogy. Many soil colours are due to various iron minerals. The development and distribution of colour in a soil profile result from chemical and biological weathering, especially redox reactions. As the primary minerals in soil parent material weather, the elements combine into new and colourful compounds. Iron forms secondary minerals of a yellow or red colour, organic matter decomposes into black and brown humic compounds, and manganese and sulfur can form black mineral deposits. These pigments can produce various colour patterns within a soil. Aerobic conditions produce uniform or gradual colour changes, while reducing environments (anaerobic) result in rapid colour flow with complex, mottled patterns and points of colour concentration. Resistivity. 
Soil resistivity is a measure of a soil's ability to retard the conduction of an electric current. The electrical resistivity of soil can affect the rate of corrosion of metallic structures in contact with the soil. Higher moisture content or increased electrolyte concentration can lower resistivity and increase conductivity, thereby increasing the rate of corrosion. Soil resistivity values typically range from about 1 to 100000 Ω·m, extreme values being for saline soils and dry soils overlaying crystalline rocks, respectively. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "q_x = - k \\frac{\\delta T}{\\delta x}" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "\\delta T" }, { "math_id": 4, "text": "\\delta x" } ]
https://en.wikipedia.org/wiki?curid=64604101
646088
Sparse grid
Sparse grids are numerical techniques to represent, integrate or interpolate high-dimensional functions. They were originally developed by the Russian mathematician Sergey A. Smolyak, a student of Lazar Lyusternik, and are based on a sparse tensor product construction. Computer algorithms for efficient implementations of such grids were later developed by Michael Griebel and Christoph Zenger. Curse of dimensionality. The standard way of representing multidimensional functions is with tensor or full grids. The number of basis functions or nodes (grid points) that have to be stored and processed depends exponentially on the number of dimensions. The curse of dimensionality is expressed in the order of the integration error that is made by a quadrature of level formula_0, with formula_1 points. The function has regularity formula_2, i.e. it is formula_2 times differentiable. The number of dimensions is formula_3. formula_4 Smolyak's quadrature rule. Smolyak found a computationally more efficient method of integrating multidimensional functions, based on a univariate quadrature rule formula_5. The formula_3-dimensional Smolyak integral formula_6 of a function formula_7 can be written as a recursion formula with the tensor product. formula_8 The index to formula_9 is the level of the discretization. If a one-dimensional integration on level formula_10 is computed by the evaluation of formula_11 points, the error estimate for a function of regularity formula_2 will be formula_12
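To make the recursion concrete, here is a minimal Python sketch, stated under simplifying assumptions: it uses a plain trapezoidal rule on [0, 1] as the univariate quadrature, whereas practical sparse-grid codes use nested rules such as Clenshaw–Curtis points and evaluate the combination far more efficiently.

```python
import numpy as np

def quad_1d(level):
    """Univariate trapezoidal rule on [0, 1]; level 0 is the empty (zero) rule."""
    if level == 0:
        return np.zeros(0), np.zeros(0)
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak(f, d, level):
    """Smolyak quadrature of f over the unit cube [0, 1]**d.

    f takes an array of shape (npts, d) and returns an array of shape (npts,).
    Implements the recursion
        Q_l^(d) = sum_{i=1..l} (Q_i^(1) - Q_{i-1}^(1)) (x) Q_{l-i+1}^(d-1).
    """
    if d == 1:
        x, w = quad_1d(level)
        return float(w @ f(x[:, None]))
    total = 0.0
    for i in range(1, level + 1):
        for sign, lev in ((+1.0, i), (-1.0, i - 1)):
            x, w = quad_1d(lev)
            for xk, wk in zip(x, w):
                # fix the first coordinate at xk, recurse on the remaining d-1
                g = lambda y, xk=xk: f(np.column_stack([np.full(len(y), xk), y]))
                total += sign * wk * smolyak(g, d - 1, level - i + 1)
    return total

# Example: integrate exp(x1 + x2 + x3) over the unit cube; exact value (e - 1)**3.
f = lambda X: np.exp(X.sum(axis=1))
print(smolyak(f, d=3, level=5), (np.e - 1.0) ** 3)
```

For this smooth test integrand the level-5 sparse rule already comes close to the exact value while using far fewer nodes than the full tensor grid of the same univariate resolution.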
[ { "math_id": 0, "text": "l" }, { "math_id": 1, "text": "N_{l}" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "|E_l| = O(N_l^{-\\frac{r}{d}})" }, { "math_id": 5, "text": "Q^{(1)}" }, { "math_id": 6, "text": "Q^{(d)}" }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "Q_l^{(d)} f = \\left(\\sum_{i=1}^l \\left(Q_i^{(1)}-Q_{i-1}^{(1)}\\right)\\otimes Q_{l-i+1}^{(d-1)}\\right)f" }, { "math_id": 9, "text": "Q" }, { "math_id": 10, "text": "i" }, { "math_id": 11, "text": "O(2^{i})" }, { "math_id": 12, "text": "|E_l| = O\\left(N_l^{-r}\\left(\\log N_l\\right)^{(d-1)(r+1)}\\right)" } ]
https://en.wikipedia.org/wiki?curid=646088
646116
K3 surface
Type of smooth complex surface of Kodaira dimension 0 &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Dans la seconde partie de mon rapport, il s'agit des variétés kählériennes dites K3, ainsi nommées en l'honneur de Kummer, Kähler, Kodaira et de la belle montagne K2 au Cachemire.&lt;br&gt;&lt;br&gt; "In the second part of my report, we deal with the Kähler varieties known as K3, named in honor of Kummer, Kähler, Kodaira and of the beautiful mountain K2 in Kashmir." André Weil, describing the reason for the name "K3 surface" In mathematics, a complex analytic K3 surface is a compact connected complex manifold of dimension 2 with a trivial canonical bundle and irregularity zero. An (algebraic) K3 surface over any field means a smooth proper geometrically connected algebraic surface that satisfies the same conditions. In the Enriques–Kodaira classification of surfaces, K3 surfaces form one of the four classes of minimal surfaces of Kodaira dimension zero. A simple example is the Fermat quartic surface formula_0 in complex projective 3-space. Together with two-dimensional compact complex tori, K3 surfaces are the Calabi–Yau manifolds (and also the hyperkähler manifolds) of dimension two. As such, they are at the center of the classification of algebraic surfaces, between the positively curved del Pezzo surfaces (which are easy to classify) and the negatively curved surfaces of general type (which are essentially unclassifiable). K3 surfaces can be considered the simplest algebraic varieties whose structure does not reduce to curves or abelian varieties, and yet where a substantial understanding is possible. A complex K3 surface has real dimension 4, and it plays an important role in the study of smooth 4-manifolds. K3 surfaces have been applied to Kac–Moody algebras, mirror symmetry and string theory. It can be useful to think of complex algebraic K3 surfaces as part of the broader family of complex analytic K3 surfaces. Many other types of algebraic varieties do not have such non-algebraic deformations. Definition. There are several equivalent ways to define K3 surfaces. The only compact complex surfaces with trivial canonical bundle are K3 surfaces and compact complex tori, and so one can add any condition excluding the latter to define K3 surfaces. For example, it is equivalent to define a complex analytic K3 surface as a simply connected compact complex manifold of dimension 2 with a nowhere-vanishing holomorphic 2-form. (The latter condition says exactly that the canonical bundle is trivial.) There are also some variants of the definition. Over the complex numbers, some authors consider only the algebraic K3 surfaces. (An algebraic K3 surface is automatically projective.) Or one may allow K3 surfaces to have du Val singularities (the canonical singularities of dimension 2), rather than being smooth. Calculation of the Betti numbers. The Betti numbers of a complex analytic K3 surface are computed as follows. (A similar argument gives the same answer for the Betti numbers of an algebraic K3 surface over any field, defined using l-adic cohomology.) By definition, the canonical bundle formula_1 is trivial, and the irregularity "q"("X") (the dimension formula_2 of the coherent sheaf cohomology group formula_3) is zero. By Serre duality, formula_4 As a result, the arithmetic genus (or holomorphic Euler characteristic) of "X" is: formula_5 On the other hand, the Riemann–Roch theorem (Noether's formula) says: formula_6 where formula_7 is the "i"-th Chern class of the tangent bundle. 
Since formula_8 is trivial, its first Chern class formula_9 is zero, and so formula_10. Next, the exponential sequence formula_11 gives an exact sequence of cohomology groups formula_12, and so formula_13. Thus the Betti number formula_14 is zero, and by Poincaré duality, formula_15 is also zero. Finally, formula_10 is equal to the topological Euler characteristic formula_16 Since formula_17 and formula_18, it follows that formula_19. The Picard lattice. The Picard group Pic("X") of a complex analytic K3 surface "X" means the abelian group of complex analytic line bundles on "X". For an algebraic K3 surface, Pic("X") means the group of algebraic line bundles on "X". The two definitions agree for a complex algebraic K3 surface, by Jean-Pierre Serre's GAGA theorem. The Picard group of a K3 surface "X" is always a finitely generated free abelian group; its rank is called the Picard number formula_34. In the complex case, Pic("X") is a subgroup of formula_23. It is an important feature of K3 surfaces that many different Picard numbers can occur. For "X" a complex algebraic K3 surface, formula_34 can be any integer between 1 and 20. In the complex analytic case, formula_34 may also be zero. (In that case, "X" contains no closed complex curves at all. By contrast, an algebraic surface always contains many continuous families of curves.) Over an algebraically closed field of characteristic "p" &gt; 0, there is a special class of K3 surfaces, supersingular K3 surfaces, with Picard number 22. The Picard lattice of a K3 surface means the abelian group Pic("X") together with its intersection form, a symmetric bilinear form with values in the integers. (Over formula_35, the intersection form means the restriction of the intersection form on formula_36. Over a general field, the intersection form can be defined using the intersection theory of curves on a surface, by identifying the Picard group with the divisor class group.) The Picard lattice of a K3 surface is always "even", meaning that the integer formula_37 is even for each formula_38. The Hodge index theorem implies that the Picard lattice of an algebraic K3 surface has signature formula_39. Many properties of a K3 surface are determined by its Picard lattice, as a symmetric bilinear form over the integers. This leads to a strong connection between the theory of K3 surfaces and the arithmetic of symmetric bilinear forms. As a first example of this connection: a complex analytic K3 surface is algebraic if and only if there is an element formula_38 with formula_40. Roughly speaking, the space of all complex analytic K3 surfaces has complex dimension 20, while the space of K3 surfaces with Picard number formula_34 has dimension formula_41 (excluding the supersingular case). In particular, algebraic K3 surfaces occur in 19-dimensional families. More details about moduli spaces of K3 surfaces are given below. The precise description of which lattices can occur as Picard lattices of K3 surfaces is complicated. One clear statement, due to Viacheslav Nikulin and David Morrison, is that every even lattice of signature formula_39 with formula_42 is the Picard lattice of some complex projective K3 surface. The space of such surfaces has dimension formula_41. Elliptic K3 surfaces. An important subclass of K3 surfaces, easier to analyze than the general case, consists of the K3 surfaces with an elliptic fibration formula_43. "Elliptic" means that all but finitely many fibers of this morphism are smooth curves of genus 1. 
The singular fibers are unions of rational curves, with the possible types of singular fibers classified by Kodaira. There are always some singular fibers, since the sum of the topological Euler characteristics of the singular fibers is formula_44. A general elliptic K3 surface has exactly 24 singular fibers, each of type formula_45 (a nodal cubic curve). Whether a K3 surface is elliptic can be read from its Picard lattice. Namely, in characteristic not 2 or 3, a K3 surface "X" has an elliptic fibration if and only if there is a nonzero element formula_38 with formula_46. (In characteristic 2 or 3, the latter condition may also correspond to a quasi-elliptic fibration.) It follows that having an elliptic fibration is a codimension-1 condition on a K3 surface. So there are 19-dimensional families of complex analytic K3 surfaces with an elliptic fibration, and 18-dimensional moduli spaces of projective K3 surfaces with an elliptic fibration. Example: Every smooth quartic surface "X" in formula_29 that contains a line "L" has an elliptic fibration formula_47, given by projecting away from "L". The moduli space of all smooth quartic surfaces (up to isomorphism) has dimension 19, while the subspace of quartic surfaces containing a line has dimension 18. Rational curves on K3 surfaces. In contrast to positively curved varieties such as del Pezzo surfaces, a complex algebraic K3 surface "X" is not uniruled; that is, it is not covered by a continuous family of rational curves. On the other hand, in contrast to negatively curved varieties such as surfaces of general type, "X" contains a large discrete set of rational curves (possibly singular). In particular, Fedor Bogomolov and David Mumford showed that every curve on "X" is linearly equivalent to a positive linear combination of rational curves. Another contrast to negatively curved varieties is that the Kobayashi metric on a complex analytic K3 surface "X" is identically zero. The proof uses that an algebraic K3 surface "X" is always covered by a continuous family of images of elliptic curves. (These curves are singular in "X", unless "X" happens to be an elliptic K3 surface.) A stronger question that remains open is whether every complex K3 surface admits a nondegenerate holomorphic map from formula_48 (where "nondegenerate" means that the derivative of the map is an isomorphism at some point). The period map. Define a marking of a complex analytic K3 surface "X" to be an isomorphism of lattices from formula_36 to the K3 lattice formula_49. The space "N" of marked complex K3 surfaces is a non-Hausdorff complex manifold of dimension 20. The set of isomorphism classes of complex analytic K3 surfaces is the quotient of "N" by the orthogonal group formula_50, but this quotient is not a geometrically meaningful moduli space, because the action of formula_50 is far from being properly discontinuous. (For example, the space of smooth quartic surfaces is irreducible of dimension 19, and yet every complex analytic K3 surface in the 20-dimensional family "N" has arbitrarily small deformations which are isomorphic to smooth quartics.) For the same reason, there is not a meaningful moduli space of compact complex tori of dimension at least 2. The period mapping sends a K3 surface to its Hodge structure. When stated carefully, the Torelli theorem holds: a K3 surface is determined by its Hodge structure. 
The period domain is defined as the 20-dimensional complex manifold formula_51 The period mapping formula_52 sends a marked K3 surface "X" to the complex line formula_53. This is surjective, and a local isomorphism, but not an isomorphism (in particular because "D" is Hausdorff and "N" is not). However, the global Torelli theorem for K3 surfaces says that the quotient map of sets formula_54 is bijective. It follows that two complex analytic K3 surfaces "X" and "Y" are isomorphic if and only if there is a Hodge isometry from formula_36 to formula_55, that is, an isomorphism of abelian groups that preserves the intersection form and sends formula_56 to formula_57. Moduli spaces of projective K3 surfaces. A polarized K3 surface "X" of genus "g" is defined to be a projective K3 surface together with an ample line bundle "L" such that "L" is primitive (that is, not 2 or more times another line bundle) and formula_58. This is also called a polarized K3 surface of degree 2"g"−2. Under these assumptions, "L" is basepoint-free. In characteristic zero, Bertini's theorem implies that there is a smooth curve "C" in the linear system |"L"|. All such curves have genus "g", which explains why ("X","L") is said to have genus "g". The vector space of sections of "L" has dimension "g" + 1, and so "L" gives a morphism from "X" to projective space formula_59. In most cases, this morphism is an embedding, so that "X" is isomorphic to a surface of degree 2"g"−2 in formula_59. There is an irreducible coarse moduli space formula_60 of polarized complex K3 surfaces of genus "g" for each formula_61; it can be viewed as a Zariski open subset of a Shimura variety for the group "SO"(2,19). For each "g", formula_60 is a quasi-projective complex variety of dimension 19. Shigeru Mukai showed that this moduli space is unirational if formula_62 or formula_63. In contrast, Valery Gritsenko, Klaus Hulek and Gregory Sankaran showed that formula_60 is of general type if formula_64 or formula_65. A survey of this area was given by . The different 19-dimensional moduli spaces formula_60 overlap in an intricate way. Indeed, there is a countably infinite set of codimension-1 subvarieties of each formula_60 corresponding to K3 surfaces of Picard number at least 2. Those K3 surfaces have polarizations of infinitely many different degrees, not just 2"g"–2. So one can say that infinitely many of the other moduli spaces formula_66 meet formula_60. This is imprecise, since there is not a well-behaved space containing all the moduli spaces formula_60. However, a concrete version of this idea is the fact that any two complex algebraic K3 surfaces are deformation-equivalent through algebraic K3 surfaces. More generally, a quasi-polarized K3 surface of genus "g" means a projective K3 surface with a primitive nef and big line bundle "L" such that formula_58. Such a line bundle still gives a morphism to formula_59, but now it may contract finitely many (−2)-curves, so that the image "Y" of "X" is singular. (A (−2)-curve on a surface means a curve isomorphic to formula_67 with self-intersection −2.) The moduli space of quasi-polarized K3 surfaces of genus "g" is still irreducible of dimension 19 (containing the previous moduli space as an open subset). Formally, it works better to view this as a moduli space of K3 surfaces "Y" with du Val singularities. The ample cone and the cone of curves. 
A remarkable feature of algebraic K3 surfaces is that the Picard lattice determines many geometric properties of the surface, including the convex cone of ample divisors (up to automorphisms of the Picard lattice). The ample cone is determined by the Picard lattice as follows. By the Hodge index theorem, the intersection form on the real vector space formula_68 has signature formula_39. It follows that the set of elements of formula_69 with positive self-intersection has two connected components. Call the positive cone the component that contains any ample divisor on "X". Case 1: There is no element "u" of Pic("X") with formula_70. Then the ample cone is equal to the positive cone. Thus it is the standard round cone. Case 2: Otherwise, let formula_71, the set of roots of the Picard lattice. The orthogonal complements of the roots form a set of hyperplanes which all go through the positive cone. Then the ample cone is a connected component of the complement of these hyperplanes in the positive cone. Any two such components are isomorphic via the orthogonal group of the lattice Pic("X"), since that contains the reflection across each root hyperplane. In this sense, the Picard lattice determines the ample cone up to isomorphism. A related statement, due to Sándor Kovács, is that knowing one ample divisor "A" in Pic("X") determines the whole cone of curves of "X". Namely, suppose that "X" has Picard number formula_72. If the set of roots formula_73 is empty, then the closed cone of curves is the closure of the positive cone. Otherwise, the closed cone of curves is the closed convex cone spanned by all elements formula_74 with formula_75. In the first case, "X" contains no (−2)-curves; in the second case, the closed cone of curves is the closed convex cone spanned by all (−2)-curves. (If formula_76, there is one other possibility: the cone of curves may be spanned by one (−2)-curve and one curve with self-intersection 0.) So the cone of curves is either the standard round cone, or else it has "sharp corners" (because every (−2)-curve spans an "isolated" extremal ray of the cone of curves). Automorphism group. K3 surfaces are somewhat unusual among algebraic varieties in that their automorphism groups may be infinite, discrete, and highly nonabelian. By a version of the Torelli theorem, the Picard lattice of a complex algebraic K3 surface "X" determines the automorphism group of "X" up to commensurability. Namely, let the Weyl group "W" be the subgroup of the orthogonal group "O"(Pic("X")) generated by reflections in the set of roots formula_73. Then "W" is a normal subgroup of "O"(Pic("X")), and the automorphism group of "X" is commensurable with the quotient group "O"(Pic("X"))/"W". A related statement, due to Hans Sterk, is that Aut("X") acts on the nef cone of "X" with a rational polyhedral fundamental domain. Relation to string duality. K3 surfaces appear almost ubiquitously in string duality and provide an important tool for the understanding of it. String compactifications on these surfaces are not trivial, yet they are simple enough to analyze most of their properties in detail. The type IIA string, the type IIB string, the E8×E8 heterotic string, the Spin(32)/Z2 heterotic string, and M-theory are related by compactification on a K3 surface. For example, the Type IIA string compactified on a K3 surface is equivalent to the heterotic string compactified on a 4-torus (). History. 
Quartic surfaces in formula_29 were studied by Ernst Kummer, Arthur Cayley, Friedrich Schur and other 19th-century geometers. More generally, Federigo Enriques observed in 1893 that for various numbers "g", there are surfaces of degree 2"g"−2 in formula_59 with trivial canonical bundle and irregularity zero. In 1909, Enriques showed that such surfaces exist for all formula_77, and Francesco Severi showed that the moduli space of such surfaces has dimension 19 for each "g". André Weil gave K3 surfaces their name (see the quotation above) and made several influential conjectures about their classification. Kunihiko Kodaira completed the basic theory around 1960, in particular making the first systematic study of complex analytic K3 surfaces which are not algebraic. He showed that any two complex analytic K3 surfaces are deformation-equivalent and hence diffeomorphic, which was new even for algebraic K3 surfaces. An important later advance was the proof of the Torelli theorem for complex algebraic K3 surfaces by Ilya Piatetski-Shapiro and Igor Shafarevich (1971), extended to complex analytic K3 surfaces by Daniel Burns and Michael Rapoport (1975). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x^4+y^4+z^4+w^4=0" }, { "math_id": 1, "text": " K_X = \\Omega^2_X" }, { "math_id": 2, "text": "h^1(X,O_X)" }, { "math_id": 3, "text": "H^1(X,O_X)" }, { "math_id": 4, "text": "h^2(X,\\mathcal{O}_X)=h^0(X,K_X)=1." }, { "math_id": 5, "text": "\\chi(X,\\mathcal{O}_X):=\\sum_i (-1)^i h^i(X,\\mathcal{O}_X)=1-0+1=2." }, { "math_id": 6, "text": "\\chi(X,\\mathcal{O}_X) = \\frac{1}{12} \\left(c_1(X)^2+c_2(X)\\right)," }, { "math_id": 7, "text": "c_i(X)" }, { "math_id": 8, "text": "K_X" }, { "math_id": 9, "text": "c_1(K_X)=-c_1(X)" }, { "math_id": 10, "text": "c_2(X)=24" }, { "math_id": 11, "text": "0\\to \\Z_X\\to O_X\\to O_X^*\\to 0" }, { "math_id": 12, "text": "0\\to H^1(X,\\Z) \\to H^1(X,O_X)" }, { "math_id": 13, "text": "H^1(X,\\Z)=0" }, { "math_id": 14, "text": "b_1(X)" }, { "math_id": 15, "text": "b_3(X)" }, { "math_id": 16, "text": "\\chi(X)=\\sum_i (-1)^ib_i(X)." }, { "math_id": 17, "text": "b_0(X)=b_4(X)=1" }, { "math_id": 18, "text": "b_1(X)=b_3(X)=0" }, { "math_id": 19, "text": "b_2(X)=22" }, { "math_id": 20, "text": "H^2(X;\\Z) " }, { "math_id": 21, "text": "H^0(X;\\Omega_X^2)\\cong \\mathbb{C}" }, { "math_id": 22, "text": "H^1(X,\\Omega_X) \\cong \\mathbb{C}^{20}" }, { "math_id": 23, "text": "H^2(X,\\Z)\\cong\\Z^{22}" }, { "math_id": 24, "text": "\\operatorname{II}_{3,19}" }, { "math_id": 25, "text": "E_8(-1)^{\\oplus 2}\\oplus U^{\\oplus 3}" }, { "math_id": 26, "text": "E_8" }, { "math_id": 27, "text": "S^2\\times S^2" }, { "math_id": 28, "text": "\\mathbf{P}^2" }, { "math_id": 29, "text": "\\mathbf{P}^3" }, { "math_id": 30, "text": "a\\mapsto -a" }, { "math_id": 31, "text": "A/(\\pm 1)" }, { "math_id": 32, "text": "\\mathbf{P}^4" }, { "math_id": 33, "text": "\\mathbf{P}^5" }, { "math_id": 34, "text": "\\rho" }, { "math_id": 35, "text": "\\Complex" }, { "math_id": 36, "text": "H^2(X,\\Z)" }, { "math_id": 37, "text": "u^2" }, { "math_id": 38, "text": "u\\in\\operatorname{Pic}(X)" }, { "math_id": 39, "text": "(1,\\rho-1)" }, { "math_id": 40, "text": "u^2>0" }, { "math_id": 41, "text": "20-\\rho" }, { "math_id": 42, "text": "\\rho\\leq 11" }, { "math_id": 43, "text": "X\\to\\mathbf{P}^1" }, { "math_id": 44, "text": "\\chi(X)=24" }, { "math_id": 45, "text": "I_1" }, { "math_id": 46, "text": "u^2=0" }, { "math_id": 47, "text": "X\\to \\mathbf{P}^1" }, { "math_id": 48, "text": "\\C^2" }, { "math_id": 49, "text": "\\Lambda=E_8(-1)^{\\oplus 2}\\oplus U^{\\oplus 3}" }, { "math_id": 50, "text": "O(\\Lambda)" }, { "math_id": 51, "text": "D=\\{u\\in P(\\Lambda\\otimes\\Complex): u^2=0,\\, u\\cdot\\overline{u} > 0\\}." 
}, { "math_id": 52, "text": "N\\to D" }, { "math_id": 53, "text": "H^0(X,\\Omega^2)\\subset H^2(X,\\Complex)\\cong \\Lambda\\otimes\\Complex" }, { "math_id": 54, "text": "N/O(\\Lambda)\\to D/O(\\Lambda)" }, { "math_id": 55, "text": "H^2(Y,\\Z)" }, { "math_id": 56, "text": "H^0(X,\\Omega^2)\\subset H^2(X,\\Complex)" }, { "math_id": 57, "text": "H^0(Y,\\Omega^2)" }, { "math_id": 58, "text": "c_1(L)^2=2g-2" }, { "math_id": 59, "text": "\\mathbf{P}^g" }, { "math_id": 60, "text": "\\mathcal{F}_g" }, { "math_id": 61, "text": "g\\geq 2" }, { "math_id": 62, "text": "g\\leq 13" }, { "math_id": 63, "text": "g=18,20" }, { "math_id": 64, "text": "g\\geq 63" }, { "math_id": 65, "text": "g=47,51,55,58,59,61" }, { "math_id": 66, "text": "\\mathcal{F}_h" }, { "math_id": 67, "text": "\\mathbf{P}^1" }, { "math_id": 68, "text": "N^1(X):=\\operatorname{Pic}(X)\\otimes\\R" }, { "math_id": 69, "text": "N^1(X)" }, { "math_id": 70, "text": "u^2=-2" }, { "math_id": 71, "text": "\\Delta=\\{u\\in\\operatorname{Pic}(X):u^2=-2\\}" }, { "math_id": 72, "text": "\\rho\\geq 3" }, { "math_id": 73, "text": "\\Delta" }, { "math_id": 74, "text": "u\\in\\Delta" }, { "math_id": 75, "text": "A\\cdot u>0" }, { "math_id": 76, "text": "\\rho=2" }, { "math_id": 77, "text": "g\\geq 3" } ]
https://en.wikipedia.org/wiki?curid=646116
646120
Hyperkähler manifold
In differential geometry, a hyperkähler manifold is a Riemannian manifold formula_0 endowed with three integrable almost complex structures formula_1 that are Kähler with respect to the Riemannian metric formula_2 and satisfy the quaternionic relations formula_3. In particular, it is a hypercomplex manifold. All hyperkähler manifolds are Ricci-flat and are thus Calabi–Yau manifolds. Hyperkähler manifolds were defined by Eugenio Calabi in 1979. Early history. Marcel Berger's 1955 paper on the classification of Riemannian holonomy groups first raised the issue of the existence of non-symmetric manifolds with holonomy Sp("n")·Sp(1). Interesting results were proved in the mid-1960s in pioneering work by Edmond Bonan and Kraines, who independently proved that any such manifold admits a parallel 4-form formula_4. The long-awaited analog of the strong Lefschetz theorem was published in 1982: formula_5 Equivalent definition in terms of holonomy. Equivalently, a hyperkähler manifold is a Riemannian manifold formula_0 of dimension formula_6 whose holonomy group is contained in the compact symplectic group Sp("n"). Indeed, if formula_7 is a hyperkähler manifold, then the tangent space "T""x""M" is a quaternionic vector space for each point "x" of "M", i.e. it is isomorphic to formula_8 for some integer formula_9, where formula_10 is the algebra of quaternions. The compact symplectic group Sp("n") can be considered as the group of orthogonal transformations of formula_8 which are linear with respect to "I", "J" and "K". From this, it follows that the holonomy group of the Riemannian manifold formula_0 is contained in Sp("n"). Conversely, if the holonomy group of a Riemannian manifold formula_0 of dimension formula_6 is contained in Sp("n"), choose complex structures "I""x", "Jx" and "K""x" on "T""x""M" which make "T""x""M" into a quaternionic vector space. Parallel transport of these complex structures gives the required complex structures formula_1 on "M" making formula_7 into a hyperkähler manifold. Two-sphere of complex structures. Every hyperkähler manifold formula_7 has a 2-sphere of complex structures with respect to which the metric formula_2 is Kähler. Indeed, for any real numbers formula_11 such that formula_12 the linear combination formula_13 is a complex structure that is Kähler with respect to formula_2. If formula_14 denote the Kähler forms of formula_15, respectively, then the Kähler form of formula_16 is formula_17 Holomorphic symplectic form. A hyperkähler manifold formula_7, considered as a complex manifold formula_18, is holomorphically symplectic (equipped with a holomorphic, non-degenerate, closed 2-form). More precisely, if formula_14 denote the Kähler forms of formula_15, respectively, then formula_19 is holomorphic symplectic with respect to formula_20. Conversely, Shing-Tung Yau's proof of the Calabi conjecture implies that a compact, Kähler, holomorphically symplectic manifold formula_21 is always equipped with a compatible hyperkähler metric. Such a metric is unique in a given Kähler class. Compact hyperkähler manifolds have been extensively studied using techniques from algebraic geometry, sometimes under the name "holomorphically symplectic manifolds". 
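Returning to the two-sphere of complex structures described above, the condition formula_12 is exactly what makes the linear combination square to minus the identity. A short check, using only the stated quaternionic relations (which force IJ = K = −JI, JK = I = −KJ and KI = J = −IK, so all mixed terms cancel), reads:

```latex
\begin{align*}
(aI + bJ + cK)^2
  &= a^2 I^2 + b^2 J^2 + c^2 K^2
     + ab\,(IJ + JI) + bc\,(JK + KJ) + ca\,(KI + IK) \\
  &= -(a^2 + b^2 + c^2)\,\mathrm{Id} \;=\; -\mathrm{Id}.
\end{align*}
```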
The holonomy group of any Calabi–Yau metric on a simply connected compact holomorphically symplectic manifold of complex dimension formula_22 with formula_23 is exactly Sp("n"); and if the simply connected Calabi–Yau manifold instead has formula_24, it is just the Riemannian product of lower-dimensional hyperkähler manifolds. This fact immediately follows from the Bochner formula for holomorphic forms on a Kähler manifold, together with the Berger classification of holonomy groups; ironically, it is often attributed to Bogomolov, who incorrectly went on to claim in the same paper that compact hyperkähler manifolds actually do not exist! Examples. For any integer formula_25, the space formula_8 of formula_9-tuples of quaternions endowed with the flat Euclidean metric is a hyperkähler manifold. The first non-trivial example discovered is the Eguchi–Hanson metric on the cotangent bundle formula_26 of the two-sphere. It was also independently discovered by Eugenio Calabi, who showed the more general statement that the cotangent bundle formula_27 of any complex projective space has a complete hyperkähler metric. More generally, Birte Feix and Dmitry Kaledin showed that the cotangent bundle of any Kähler manifold has a hyperkähler structure on a neighbourhood of its zero section, although it is generally incomplete. Due to Kunihiko Kodaira's classification of complex surfaces, we know that any compact hyperkähler 4-manifold is either a K3 surface or a compact torus formula_28. (Every Calabi–Yau manifold in 4 (real) dimensions is a hyperkähler manifold, because SU(2) is isomorphic to Sp(1).) As was discovered by Beauville, the Hilbert scheme of k points on a compact hyperkähler 4-manifold is a hyperkähler manifold of dimension 4k. This gives rise to two series of compact examples: Hilbert schemes of points on a K3 surface and generalized Kummer varieties. Non-compact, complete, hyperkähler 4-manifolds which are asymptotic to H/"G", where H denotes the quaternions and "G" is a finite subgroup of Sp(1), are known as asymptotically locally Euclidean, or ALE, spaces. These spaces, and various generalizations involving different asymptotic behaviors, are studied in physics under the name gravitational instantons. The Gibbons–Hawking ansatz gives examples invariant under a circle action. Many examples of noncompact hyperkähler manifolds arise as moduli spaces of solutions to certain gauge theory equations which arise from the dimensional reduction of the anti-self-dual Yang–Mills equations: instanton moduli spaces, monopole moduli spaces, spaces of solutions to Nigel Hitchin's self-duality equations on Riemann surfaces, and spaces of solutions to the Nahm equations. Another class of examples are the Nakajima quiver varieties, which are of great importance in representation theory. Cohomology. It has been shown that the cohomology of any compact hyperkähler manifold embeds into the cohomology of a torus, in a way that preserves the Hodge structure. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(M, g)" }, { "math_id": 1, "text": "I, J, K" }, { "math_id": 2, "text": "g" }, { "math_id": 3, "text": "I^2=J^2=K^2=IJK=-1" }, { "math_id": 4, "text": "\\Omega" }, { "math_id": 5, "text": " \n \\Omega^{n-k}\\wedge\\bigwedge^{2k}T^*M=\\bigwedge^{4n-2k}T^*M." }, { "math_id": 6, "text": "4n" }, { "math_id": 7, "text": "(M, g, I, J, K)" }, { "math_id": 8, "text": "\\mathbb{H}^n" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "\\mathbb{H}" }, { "math_id": 11, "text": "a, b, c" }, { "math_id": 12, "text": "a^2 + b^2 + c^2 = 1 \\, " }, { "math_id": 13, "text": "aI + bJ + cK \\, " }, { "math_id": 14, "text": "\\omega_I, \\omega_J, \\omega_K" }, { "math_id": 15, "text": "(g, I), (g, J), (g, K)" }, { "math_id": 16, "text": "aI + bJ + cK" }, { "math_id": 17, "text": "a \\omega_I + b \\omega_J + c \\omega_K." }, { "math_id": 18, "text": "(M, I)" }, { "math_id": 19, "text": "\\Omega := \\omega_J + i\\omega_K" }, { "math_id": 20, "text": "I" }, { "math_id": 21, "text": "(M,I,\\Omega)" }, { "math_id": 22, "text": "2n" }, { "math_id": 23, "text": "H^{2,0}(M)=1" }, { "math_id": 24, "text": "H^{2,0}(M)\\geq 2" }, { "math_id": 25, "text": "n \\ge 1" }, { "math_id": 26, "text": "T^*S^2" }, { "math_id": 27, "text": "T^*\\mathbb{CP}^n" }, { "math_id": 28, "text": "T^4" } ]
https://en.wikipedia.org/wiki?curid=646120
64620077
Closed graph theorem (functional analysis)
Theorems connecting continuity to closure of graphs In mathematics, particularly in functional analysis, the closed graph theorem is a result connecting the continuity of a linear operator to a topological property of its graph. Precisely, the theorem states that a linear operator between two Banach spaces is continuous if and only if the graph of the operator is closed (such an operator is called a closed linear operator; see also closed graph property). One of the important questions in functional analysis is the question of the continuity (or boundedness) of a given linear operator. The closed graph theorem gives one answer to that question. Explanation. Let formula_0 be a linear operator between Banach spaces (or more generally Fréchet spaces). Then the continuity of formula_1 means that formula_2 for each convergent sequence formula_3. On the other hand, the closedness of the graph of formula_1 means that for each convergent sequence formula_3 such that formula_4, we have formula_5. Hence, the closed graph theorem says that in order to check the continuity of formula_1, one can show formula_6 under the additional assumption that formula_7 is convergent. In fact, for the graph of "T" to be closed, it is enough that if formula_8, then formula_9. Indeed, assuming that condition holds, if formula_10, then formula_11 and formula_12. Thus, formula_5; i.e., formula_13 is in the graph of "T". Note that to check the closedness of a graph, it is not even necessary to use the norm topology: if the graph of "T" is closed in some topology coarser than the norm topology, then it is closed in the norm topology. In practice, this works like this: "T" is some operator on some function space. One shows "T" is continuous with respect to the distribution topology; thus, the graph is closed in that topology, which implies closedness in the norm topology, and then "T" is bounded by the closed graph theorem (when the theorem applies). See for an explicit example. Statement. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  If formula_0 is a linear operator between Banach spaces (or more generally Fréchet spaces), then the following are equivalent: (1) formula_1 is continuous; (2) the graph of formula_1 is closed in formula_14 The usual proof of the closed graph theorem employs the open mapping theorem. It simply uses a general recipe of obtaining the closed graph theorem from the open mapping theorem; see (this deduction is formal and does not use linearity; the linearity is needed to appeal to the open mapping theorem which relies on the linearity.) In fact, the open mapping theorem can in turn be deduced from the closed graph theorem as follows. As noted in , it is enough to prove the open mapping theorem for a continuous linear operator that is bijective (not just surjective). Let "T" be such an operator. Then by continuity, the graph formula_15 of "T" is closed. Then formula_16 under the map formula_17. Hence, by the closed graph theorem, formula_18 is continuous; i.e., "T" is an open mapping. Since the closed graph theorem is equivalent to the open mapping theorem, one knows that the theorem fails without the completeness assumption. But more concretely, an operator with closed graph that is not bounded (see unbounded operator) exists and thus serves as a counterexample. Example. The Hausdorff–Young inequality says that the Fourier transformation formula_19 is a well-defined bounded operator with operator norm one when formula_20. This result is usually proved using the Riesz–Thorin interpolation theorem and is highly nontrivial. 
The closed graph theorem can be used to prove a soft version of this result; i.e., the Fourier transformation is a bounded operator with an unknown operator norm. Here is how the argument would go. Let "T" denote the Fourier transformation. First we show formula_21 is a continuous linear operator for "Z" = the space of tempered distributions on formula_22. Second, we note that "T" maps the space of Schwartz functions to itself (in short, because smoothness and rapid decay transform to rapid decay and smoothness, respectively). This implies that the graph of "T" is contained in formula_23 and formula_24 is defined but with unknown bounds. Since formula_21 is continuous, the graph of formula_24 is closed in the distribution topology; thus in the norm topology. Finally, by the closed graph theorem, formula_24 is a bounded operator. Generalization. Complete metrizable codomain. The closed graph theorem can be generalized from Banach spaces to more abstract topological vector spaces in the following ways. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  A linear operator from a barrelled space formula_25 to a Fréchet space formula_26 is continuous if and only if its graph is closed. Between F-spaces. There are versions that do not require formula_26 to be locally convex. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  A linear map between two F-spaces is continuous if and only if its graph is closed. This theorem can be restated and extended with some conditions that can be used to determine whether a graph is closed: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  If formula_0 is a linear map between two F-spaces, then the following are equivalent: (1) formula_1 is continuous; (2) formula_1 has a closed graph; (3) if formula_27 in formula_25 and if formula_28 converges in formula_26 to some formula_29 then formula_30 (4) if formula_31 in formula_25 and if formula_32 converges in formula_26 to some formula_29 then formula_33 Complete pseudometrizable codomain. Every metrizable topological space is pseudometrizable. A pseudometrizable space is metrizable if and only if it is Hausdorff. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Closed Graph Theorem —  A closed linear map from a locally convex ultrabarrelled space into a complete pseudometrizable TVS is continuous. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Closed Graph Theorem —  A closed and bounded linear map from a locally convex infrabarreled space into a complete pseudometrizable locally convex space is continuous. Codomain not complete or (pseudo) metrizable. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  Suppose that formula_0 is a linear map whose graph is closed. If formula_25 is an inductive limit of Baire TVSs and formula_26 is a webbed space then formula_1 is continuous. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Closed Graph Theorem —  A closed surjective linear map from a complete pseudometrizable TVS onto a locally convex ultrabarrelled space is continuous. An even more general version of the closed graph theorem is &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Suppose that formula_25 and formula_26 are two topological vector spaces (they need not be Hausdorff or locally convex) with the following property: If formula_34 is any closed subspace of formula_35 and formula_36 is any continuous map of formula_34 onto formula_37 then formula_36 is an open mapping. Under this condition, if formula_0 is a linear map whose graph is closed then formula_1 is continuous. Borel graph theorem. The Borel graph theorem, proved by L. Schwartz, shows that the closed graph theorem is valid for linear maps defined on and valued in most spaces encountered in analysis. 
Recall that a topological space is called a Polish space if it is a separable complete metrizable space and that a Souslin space is the continuous image of a Polish space. The weak dual of a separable Fréchet space and the strong dual of a separable Fréchet–Montel space are Souslin spaces. Also, the space of distributions and all Lp-spaces over open subsets of Euclidean space as well as many other spaces that occur in analysis are Souslin spaces. The Borel graph theorem states: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Borel Graph Theorem —  Let formula_38 be a linear map between two locally convex Hausdorff spaces formula_25 and formula_39 If formula_25 is the inductive limit of an arbitrary family of Banach spaces, if formula_26 is a Souslin space, and if the graph of formula_36 is a Borel set in formula_40 then formula_36 is continuous. An improvement upon this theorem, proved by A. Martineau, uses K-analytic spaces. A topological space formula_25 is called a formula_41 if it is the countable intersection of countable unions of compact sets. A Hausdorff topological space formula_26 is called K-analytic if it is the continuous image of a formula_41 space (that is, if there is a formula_41 space formula_25 and a continuous map of formula_25 onto formula_26). Every compact set is K-analytic so that there are non-separable K-analytic spaces. Also, every Polish, Souslin, and reflexive Fréchet space is K-analytic, as is the weak dual of a Fréchet space. The generalized Borel graph theorem states: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Generalized Borel Graph Theorem — Let formula_38 be a linear map between two locally convex Hausdorff spaces formula_25 and formula_39 If formula_25 is the inductive limit of an arbitrary family of Banach spaces, if formula_26 is a K-analytic space, and if the graph of formula_36 is closed in formula_40 then formula_36 is continuous. Related results. If formula_42 is a closed linear operator from a Hausdorff locally convex TVS formula_25 into a Hausdorff finite-dimensional TVS formula_26 then formula_43 is continuous. References. Notes &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T : X \\to Y" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "Tx_i \\to Tx" }, { "math_id": 3, "text": "x_i \\to x" }, { "math_id": 4, "text": "Tx_i \\to y" }, { "math_id": 5, "text": "y = Tx" }, { "math_id": 6, "text": "T x_i \\to Tx" }, { "math_id": 7, "text": "Tx_i" }, { "math_id": 8, "text": "x_i \\to 0, \\, Tx_i \\to y" }, { "math_id": 9, "text": "y = 0" }, { "math_id": 10, "text": "(x_i, Tx_i) \\to (x, y)" }, { "math_id": 11, "text": "x_i - x \\to 0" }, { "math_id": 12, "text": "T(x_i - x) \\to y - Tx" }, { "math_id": 13, "text": "(x, y)" }, { "math_id": 14, "text": "X \\times Y." }, { "math_id": 15, "text": "\\Gamma_T" }, { "math_id": 16, "text": "\\Gamma_T \\simeq \\Gamma_{T^{-1}}" }, { "math_id": 17, "text": "(x, y) \\mapsto (y, x)" }, { "math_id": 18, "text": "T^{-1}" }, { "math_id": 19, "text": "\\widehat{\\cdot} : L^p(\\mathbb{R}^n) \\to L^{p'}(\\mathbb{R}^n)" }, { "math_id": 20, "text": "1/p + 1/p' = 1" }, { "math_id": 21, "text": "T : L^p \\to Z" }, { "math_id": 22, "text": " \\mathbb{R}^n" }, { "math_id": 23, "text": "L^p \\times L^{p'}" }, { "math_id": 24, "text": "T : L^p \\to L^{p'}" }, { "math_id": 25, "text": "X" }, { "math_id": 26, "text": "Y" }, { "math_id": 27, "text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^{\\infty} \\to x" }, { "math_id": 28, "text": "T\\left(x_{\\bull}\\right) := \\left(T\\left(x_i\\right)\\right)_{i=1}^{\\infty}" }, { "math_id": 29, "text": "y \\in Y," }, { "math_id": 30, "text": "y = T(x)." }, { "math_id": 31, "text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^{\\infty} \\to 0" }, { "math_id": 32, "text": "T\\left(x_{\\bull}\\right)" }, { "math_id": 33, "text": "y = 0." }, { "math_id": 34, "text": "G" }, { "math_id": 35, "text": "X \\times Y" }, { "math_id": 36, "text": "u" }, { "math_id": 37, "text": "X," }, { "math_id": 38, "text": "u : X \\to Y" }, { "math_id": 39, "text": "Y." }, { "math_id": 40, "text": "X \\times Y," }, { "math_id": 41, "text": "K_{\\sigma\\delta}" }, { "math_id": 42, "text": "F : X \\to Y" }, { "math_id": 43, "text": "F" } ]
https://en.wikipedia.org/wiki?curid=64620077
6462018
Brake force
Brake force, also known as brake power, is a measure of the force applied by the brakes of a vehicle in order to decelerate it. It is one of the main components in determining a vehicle's stopping distance. Formula for brake force. Brakes convert the kinetic energy of a vehicle into heat over the distance traveled by said vehicle. Thus, we can find the brake force of a vehicle through the formula: formula_0 where formula_1 is the brake force, formula_2 is the mass of the vehicle, formula_3 is the initial velocity, and formula_4 is the stopping distance. Railways. In the case of railways, it is important that staff are aware of the brake force of a train so that sufficient brake power will be available to bring the train to a halt within the required distance from a given speed. In simple terms, the brake force of a train is the sum of the brake forces that can be exerted by all the vehicles in the train, considered relative to the weight of the train and excluding problems that may occur such as wheels locking and sliding under braking. Modern freight wagons typically have brakes that can be operated from the locomotive; these are sometimes referred to as "fitted" freights. Older wagons typically were not fitted with brakes that could be operated from the locomotive; these are sometimes referred to as "unfitted" freights. These unfitted freights would typically have brake vans attached to provide additional braking force, and would operate at reduced speed. While very early passenger trains might have had brakes applied by a brakesman riding in, say, every second carriage, modern passenger vehicles have brakes that are applied to all vehicles. There are certain cases in modern practice where, when operating at higher speeds, the available brake force would be insufficient to stop the train within the required distance. Some cases arise because the brake force to weight ratio of a locomotive of typically 80 to 120 tonnes in weight is often less than that of a passenger vehicle in the 40-tonne range; it may be that locomotives running by themselves require several coaches to be attached if the train is to run at its maximum permitted speed with a suitable brake force to weight ratio. Another issue that can arise is when a locomotive is hauling a set of coaches, typically a multiple unit, whose brakes it is unable to work. In this case additional braking vehicles may be attached or the train may need to run at reduced speed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
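As a numerical illustration of the formula above, the following sketch computes the average brake force for an assumed vehicle mass, initial speed and stopping distance; the figures are illustrative, not taken from any source.

```python
def brake_force(mass_kg, speed_ms, stop_distance_m):
    """Average brake force F_b = m * v_i**2 / (2 * d).

    This is the mean force needed to dissipate the kinetic energy
    0.5 * m * v_i**2 over the stopping distance d.
    """
    return mass_kg * speed_ms ** 2 / (2 * stop_distance_m)

# Illustrative numbers only: a 1500 kg car braking from 100 km/h
# (about 27.8 m/s) to rest over 40 m.
f = brake_force(1500, 27.8, 40)
print(f"{f / 1000:.1f} kN")   # roughly 14.5 kN
```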
[ { "math_id": 0, "text": "F_b={mv_i^2\\over 2d}" }, { "math_id": 1, "text": "F_b" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "v_i" }, { "math_id": 4, "text": "d" } ]
https://en.wikipedia.org/wiki?curid=6462018
646233
Secure multi-party computation
Subfield of cryptography Secure multi-party computation (also known as secure computation, multi-party computation (MPC) or privacy-preserving computation) is a subfield of cryptography with the goal of creating methods for parties to jointly compute a function over their inputs while keeping those inputs private. Unlike traditional cryptographic tasks, where cryptography assures security and integrity of communication or storage and the adversary is outside the system of participants (an eavesdropper on the sender and receiver), the cryptography in this model protects participants' privacy from each other. The foundation for secure multi-party computation started in the late 1970s with the work on mental poker, cryptographic work that simulates game playing/computational tasks over distances without requiring a trusted third party. Traditionally, cryptography was about concealing content, while this new type of computation and protocol is about concealing partial information about data while computing with the data from many sources, and correctly producing outputs. By the late 1980s, Michael Ben-Or, Shafi Goldwasser and Avi Wigderson, and independently David Chaum, Claude Crépeau, and Ivan Damgård, had published papers showing "how to securely compute any function in the secure channels setting". History. Special purpose protocols for specific tasks started in the late 1970s. Later, secure computation was formally introduced as secure two-party computation (2PC) in 1982 (for the so-called Millionaires' Problem, a specific problem which is a Boolean predicate), and in generality (for any feasible computation) in 1986 by Andrew Yao. The area is also referred to as Secure Function Evaluation (SFE). The two-party case was followed by a generalization to the multi-party case by Oded Goldreich, Silvio Micali, and Avi Wigderson. The computation is based on secret sharing of all the inputs and zero-knowledge proofs for the potentially malicious case, in which a majority of honest players ensures that bad behavior is detected and the computation continues with the dishonest person eliminated or his input revealed. This work suggested the very basic general scheme to be followed by essentially all future multi-party protocols for secure computing. This work introduced an approach, known as the GMW paradigm, for compiling a multi-party computation protocol which is secure against semi-honest adversaries into a protocol that is secure against malicious adversaries. This work was followed by the first robust secure protocol which tolerates faulty behavior gracefully without revealing anyone's output, via a work which invented for this purpose the often-used "share of shares" idea, and a protocol that allows one of the parties to hide its input unconditionally. The GMW paradigm was considered to be inefficient for years because of the huge overheads that it brings to the base protocol. However, it was later shown that it is possible to achieve efficient protocols, which makes this line of research even more interesting from a practical perspective. The above results are in a model where the adversary is limited to polynomial-time computations and observes all communications, and therefore the model is called the "computational model". Further, the protocol of oblivious transfer was shown to be complete for these tasks. The above results established that it is possible under the above variations to achieve secure computation when the majority of users are honest. 
The next question to solve was the case of secure communication channels where the point-to-point communication is not available to the adversary; in this case it was shown that solutions can be achieved with up to 1/3 of the parties misbehaving maliciously, and the solutions require no cryptographic tools (since secure communication is available). Adding a broadcast channel allows the system to tolerate a misbehaving minority of up to 1/2, whereas connectivity constraints on the communication graph were investigated in the work Perfectly Secure Message Transmission. Over the years, the notion of general purpose multi-party protocols became a fertile area in which to investigate basic and general protocol properties, such as universal composability or mobile adversaries as in proactive secret sharing. Since the late 2000s, and certainly since 2010 and on, the domain of general purpose protocols has moved to deal with efficiency improvements of the protocols with practical applications in mind. Increasingly efficient protocols for MPC have been proposed, and MPC can now be considered as a practical solution to various real-life problems (especially ones that only require linear sharing of the secrets and mainly local operations on the shares with little interaction among the parties), such as distributed voting, private bidding and auctions, sharing of signature or decryption functions and private information retrieval. The first large-scale and practical application of multi-party computation was the execution of an electronic double auction in the Danish Sugar Beet Auction, which took place in January 2008. Both theoretical notions and investigations and applied constructions are needed (e.g., conditions for making MPC part of day-to-day business have been advocated and presented). In 2020, a number of companies working with secure multi-party computation founded the MPC Alliance with the stated goal to "accelerate awareness, acceptance, and adoption of MPC technology." Definition and overview. In an MPC, a given number of participants, p1, p2, ..., pN, each have private data, respectively d1, d2, ..., dN. Participants want to compute the value of a public function on that private data: F(d1, d2, ..., dN) while keeping their own inputs secret. For example, suppose we have three parties Alice, Bob and Charlie, with respective inputs x, y and z denoting their salaries. They want to find out the highest of the three salaries, without revealing to each other how much each of them makes. Mathematically, this translates to them computing: F(x, y, z) = max(x, y, z). If there were some trusted outside party (say, they had a mutual friend Tony who they knew could keep a secret), they could each tell their salary to Tony, who could compute the maximum and tell that number to all of them. The goal of MPC is to design a protocol where, by exchanging messages only with each other, Alice, Bob, and Charlie can still learn F(x, y, z) without revealing who makes what and without having to rely on Tony. They should learn no more by engaging in their protocol than they would learn by interacting with an incorruptible, perfectly trustworthy Tony. In particular, all that the parties can learn is what they can learn from the output and their own input. So in the above example, if the output is z, then Charlie learns that his z is the maximum value, whereas Alice and Bob learn (if x, y and z are distinct) that their input is not equal to the maximum, and that the maximum held is equal to z. 
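To make the flavour of such a protocol concrete, here is a minimal sketch of additive secret sharing over a prime modulus. It computes the sum of the three salaries rather than the maximum of the running example, because addition is the simplest function to evaluate on additive shares (a maximum requires comparison sub-protocols); the modulus, salary figures and three-party setting are illustrative assumptions.

```python
import secrets

Q = 2 ** 61 - 1          # a public prime modulus; all shares live in Z_Q

def share(value, n):
    """Split `value` into n additive shares that sum to value mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Each party secret-shares its salary and hands one share to every party.
salaries = {"alice": 45_000, "bob": 52_000, "charlie": 61_000}
all_shares = {name: share(v, 3) for name, v in salaries.items()}

# Party i locally adds up the i-th share of every input ...
partial_sums = [sum(all_shares[name][i] for name in salaries) % Q for i in range(3)]

# ... and only these partial sums are published and combined.
print(reconstruct(partial_sums))          # 158000, with no individual salary revealed
```

Each individual share is uniformly random, so in the semi-honest model no proper subset of the parties learns anything about the other inputs beyond what the published total itself reveals.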
The basic scenario can be easily generalised to one where the parties have several inputs and outputs, and the function outputs different values to different parties. Informally speaking, the most basic properties that a multi-party computation protocol aims to ensure are input privacy (no party should learn anything about the other parties' private inputs beyond what can be inferred from the output of the function) and correctness (misbehaving parties should not be able to force honest parties to accept an incorrect output). There is a wide range of practical applications, varying from simple tasks such as coin tossing to more complex ones like electronic auctions (e.g. computing the market clearing price), electronic voting, or privacy-preserving data mining. A classical example is the Millionaires' Problem: two millionaires want to know who is richer, in such a way that neither of them learns the net worth of the other. A solution to this situation is essentially to securely evaluate the comparison function. Security definitions. A multi-party computation protocol must be secure to be effective. In modern cryptography, the security of a protocol is related to a security proof. The security proof is a mathematical proof in which the security of a protocol is reduced to the security of its underlying primitives. Nevertheless, it is not always possible to formalize the security of a protocol directly in terms of what the parties know and whether the output is correct. For MPC protocols, the environment in which the protocol operates is associated with the Real World/Ideal World Paradigm. The parties can't be said to learn nothing, since they need to learn the output of the operation, and the output depends on the inputs. In addition, the output correctness is not guaranteed, since the correctness of the output depends on the parties’ inputs, and the inputs have to be assumed to be correct. The Real World/Ideal World Paradigm considers two worlds: (i) In the ideal-world model, there exists an incorruptible trusted party to whom each protocol participant sends its input. This trusted party computes the function on its own and sends back the appropriate output to each party. (ii) In contrast, in the real-world model, there is no trusted party and all the parties can do is exchange messages with each other. A protocol is said to be secure if one can learn no more about each party's private inputs in the real world than one could learn in the ideal world. In the ideal world, no messages are exchanged between parties, so real-world exchanged messages cannot reveal any secret information. The Real World/Ideal World Paradigm provides a simple abstraction of the complexities of MPC to allow the construction of an application under the pretense that the MPC protocol at its core is actually an ideal execution. If the application is secure in the ideal case, then it is also secure when a real protocol is run instead. The security requirements on an MPC protocol are stringent. Nonetheless, in 1987 it was demonstrated that any function can be securely computed, with security even against malicious adversaries, in the initial works mentioned before. Despite these publications, the MPC protocols of that time were not efficient enough to be used in practice. Unconditionally or information-theoretically secure MPC is closely related to, and builds on, the problem of secret sharing, and more specifically verifiable secret sharing (VSS), which many secure MPC protocols use against active adversaries. Unlike traditional cryptographic applications, such as encryption or signatures, one must assume that the adversary in an MPC protocol is one of the players engaged in the system (or controlling internal parties). 
That corrupted party or parties may collude in order to breach the security of the protocol. Let formula_0 be the number of parties in the protocol and formula_1 the number of parties who can be adversarial. The protocols and solutions for the case of formula_2 (i.e., when an honest majority is assumed) are different from those where no such assumption is made. This latter case includes the important case of two-party computation, where one of the participants may be corrupted, and the general case where an unlimited number of participants are corrupted and collude to attack the honest participants. Adversaries faced by the different protocols can be categorized according to how willing they are to deviate from the protocol. There are essentially two types of adversaries, each giving rise to a different form of security (and each fitting a different real-world scenario): passive (semi-honest) adversaries, which follow the protocol but try to learn additional information from the messages they see, and active (malicious) adversaries, which may deviate arbitrarily from the protocol. Security against active adversaries typically leads to a reduction in efficiency. Covert security is an alternative that aims to allow greater efficiency in exchange for weakening the security definition; it is applicable to situations where active adversaries are willing to cheat but only if they are not caught. For example, their reputation could be damaged, preventing future collaboration with other honest parties. Thus, protocols that are covertly secure provide mechanisms to ensure that, if some of the parties do not follow the instructions, then it will be noticed with high probability, say 75% or 90%. In a way, covert adversaries are active ones forced to act passively due to external non-cryptographic (e.g. business) concerns. This mechanism sets a bridge between both models in the hope of finding protocols which are efficient and secure enough in practice. Like many cryptographic protocols, the security of an MPC protocol can rely on different assumptions: it can be computational (i.e., based on the hardness of some mathematical problem, such as factoring) or unconditional (information-theoretic), and further assumptions may be made about the communication model, such as the availability of broadcast or of private point-to-point channels. The set of honest parties that can execute a computational task is related to the concept of access structure. Adversary structures can be static, where the adversary chooses its victims before the start of the multi-party computation, or dynamic, where it chooses its victims during the course of execution of the multi-party computation, making the defense harder. An adversary structure can be defined as a threshold structure or as a more complex structure. In a threshold structure the adversary can corrupt or read the memory of a number of participants up to some threshold. Meanwhile, in a complex structure it can affect certain predefined subsets of participants, modeling different possible collusions. Protocols. There are major differences between the protocols proposed for two-party computation (2PC) and multi-party computation (MPC). Also, for important special-purpose tasks (voting, auctions, payments, etc.), a specialized protocol that deviates from the generic ones often has to be designed. Two-party computation. The two-party setting is particularly interesting, not only from an applications perspective but also because special techniques can be applied in the two-party setting which do not apply in the multi-party case. Indeed, secure multi-party computation (in fact the restricted case of secure function evaluation, where only a single function is evaluated) was first presented in the two-party setting. The original work is often cited as being from one of the two papers of Yao, although the papers do not actually contain what is now known as Yao's garbled circuit protocol.
Yao's basic protocol is secure against semi-honest adversaries and is extremely efficient in terms of the number of rounds, which is constant and independent of the target function being evaluated. The function is viewed as a Boolean circuit, with inputs in binary of fixed length. A Boolean circuit is a collection of gates connected with three different types of wires: circuit-input wires, circuit-output wires and intermediate wires. Each gate receives two input wires and has a single output wire, which might be fanned out (i.e., passed to multiple gates at the next level). Plain evaluation of the circuit is done by evaluating each gate in turn, assuming the gates have been topologically ordered. The gate is represented as a truth table such that for each possible pair of bits (those coming from the gate's input wires) the table assigns a unique output bit, which is the value of the output wire of the gate. The results of the evaluation are the bits obtained in the circuit-output wires. Yao explained how to garble a circuit (hide its structure) so that two parties, sender and receiver, can learn the output of the circuit and nothing else. At a high level, the sender prepares the garbled circuit and sends it to the receiver, who obliviously evaluates the circuit, learning the encodings corresponding to both his and the sender's output. He then just sends back the sender's encodings, allowing the sender to compute his part of the output. The sender sends the mapping from the receiver's output encodings to bits to the receiver, allowing the receiver to obtain their output. In more detail, the garbled circuit is computed as follows. The main ingredient is a double-keyed symmetric encryption scheme. Given a gate of the circuit, each possible value of its input wires (either 0 or 1) is encoded with a random number (label). The values resulting from the evaluation of the gate at each of the four possible pairs of input bits are also replaced with random labels. The garbled truth table of the gate consists of encryptions of each output label using its input labels as keys. The position of these four encryptions in the truth table is randomized so no information on the gate is leaked. To correctly evaluate each garbled gate the encryption scheme has the following two properties. Firstly, the ranges of the encryption function under any two distinct keys are disjoint (with overwhelming probability). Secondly, it can be checked efficiently whether a given ciphertext has been encrypted under a given key. With these two properties the receiver, after obtaining the labels for all circuit-input wires, can evaluate each gate by first finding out which of the four ciphertexts has been encrypted with his label keys, and then decrypting to obtain the label of the output wire. This is done obliviously, as all the receiver learns during the evaluation are encodings of the bits. The sender's (i.e. the circuit creator's) input bits can just be sent as encodings to the evaluator, whereas the receiver's (i.e. the circuit evaluator's) encodings corresponding to his input bits are obtained via a 1-out-of-2 oblivious transfer (OT) protocol. A 1-out-of-2 OT protocol enables the sender, possessing two values C1 and C2, to send the one requested by the receiver (b, a value in {1, 2}) in such a way that the sender does not know which value has been transferred, and the receiver only learns the queried value.
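As an illustration of the garbling idea described above, the following is a minimal, hypothetical sketch of garbling and evaluating a single AND gate. It uses a toy double-keyed encryption built from SHA-256, with a block of appended zero bytes serving as the validity check, which is one simple way to realize the two properties mentioned above; it is not the encryption scheme of any particular implementation, and a real protocol would additionally use oblivious transfer to deliver the evaluator's input labels.

```python
# Toy garbled AND gate (illustrative sketch only).
import hashlib
import secrets
import random

LABEL = 16   # bytes per wire label
CHECK = 16   # zero bytes used as a validity check

def keystream(k1: bytes, k2: bytes, n: int) -> bytes:
    """Pseudorandom stream derived from the two wire labels used as keys."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k1 + k2 + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(k1, k2, plaintext):
    pt = plaintext + b"\x00" * CHECK
    return bytes(a ^ b for a, b in zip(pt, keystream(k1, k2, len(pt))))

def try_decrypt(k1, k2, ct):
    pt = bytes(a ^ b for a, b in zip(ct, keystream(k1, k2, len(ct))))
    return pt[:LABEL] if pt[LABEL:] == b"\x00" * CHECK else None

def garble_and_gate():
    # One random label per possible value (0 or 1) of each wire a, b, c.
    a = [secrets.token_bytes(LABEL) for _ in range(2)]
    b = [secrets.token_bytes(LABEL) for _ in range(2)]
    c = [secrets.token_bytes(LABEL) for _ in range(2)]
    table = [encrypt(a[x], b[y], c[x & y]) for x in range(2) for y in range(2)]
    random.shuffle(table)          # hide which row corresponds to which inputs
    return a, b, c, table

def evaluate(table, label_a, label_b):
    # The evaluator holds one label per input wire and finds the single
    # ciphertext that decrypts correctly, learning only the output label.
    for ct in table:
        out = try_decrypt(label_a, label_b, ct)
        if out is not None:
            return out
    raise ValueError("malformed garbled gate")

if __name__ == "__main__":
    a, b, c, table = garble_and_gate()
    out = evaluate(table, a[1], b[0])     # evaluate on inputs 1 AND 0
    print(out == c[0])                    # True: output label encodes bit 0
```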
If one is considering malicious adversaries, further mechanisms to ensure correct behavior of both parties need to be provided. By construction it is easy to show security for the sender if the OT protocol is already secure against a malicious adversary, as all the receiver can do is evaluate a garbled circuit that would fail to reach the circuit-output wires if he deviated from the instructions. The situation is very different on the sender's side. For example, he may send an incorrect garbled circuit that computes a function revealing the receiver's input. This would mean that privacy no longer holds, but since the circuit is garbled the receiver would not be able to detect this. However, it is possible to efficiently apply zero-knowledge proofs to make this protocol secure against malicious adversaries with a small overhead compared to the semi-honest protocol. Multi-party protocols. Most MPC protocols, as opposed to 2PC protocols and especially under the unconditional setting of private channels, make use of secret sharing. In the secret sharing based methods, the parties do not play special roles (as in Yao, of creator and evaluator). Instead, the data associated with each wire is shared amongst the parties, and a protocol is then used to evaluate each gate. The function is now defined as a "circuit" over a finite field, as opposed to the binary circuits used for Yao. Such a circuit is called an arithmetic circuit in the literature, and it consists of addition and multiplication "gates" where the values operated on are defined over a finite field. Secret sharing allows one to distribute a secret among a number of parties by distributing shares to each party. Two types of secret sharing schemes are commonly used: Shamir secret sharing and additive secret sharing. In additive secret sharing the shares are random elements of a finite field that add up to the secret in the field, while in Shamir secret sharing the shares are evaluations of a random polynomial whose constant term is the secret; intuitively, security is achieved because any non-qualifying set of shares looks randomly distributed. Secret sharing schemes can tolerate an adversary controlling up to "t" parties out of "n" total parties, where "t" varies based on the scheme; the adversary can be passive or active, and different assumptions are made on the power of the adversary. The Shamir secret sharing scheme is secure against a passive adversary when formula_3, and against an active adversary when formula_4, while achieving information-theoretic security, meaning that even if the adversary has unbounded computational power, they cannot learn any information about the secret underlying a share. The BGW protocol, which defines how to compute addition and multiplication on secret shares, is often used to compute functions with Shamir secret shares. Additive secret sharing schemes can tolerate the adversary controlling all but one party, that is formula_5, while maintaining security against both passive and active adversaries with unbounded computational power. Some protocols require a setup phase, which may only be secure against a computationally bounded adversary. A number of systems have implemented various forms of MPC with secret sharing schemes. The most popular is SPDZ, which implements MPC with additive secret shares and is secure against active adversaries. Other protocols. In 2014 a "model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty" was described for the Bitcoin network and for fair lotteries, and has been successfully implemented in Ethereum.
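The following is a minimal sketch of the Shamir secret sharing scheme discussed above, using a small illustrative prime field and omitting the networking, verifiability (VSS) and multiplication machinery of protocols such as BGW or SPDZ; it shows only sharing, reconstruction by Lagrange interpolation, and the local addition of shares on which linear operations rely.

```python
# Toy Shamir secret sharing over a prime field (illustrative sketch only).
import random

P = 2**61 - 1  # a Mersenne prime, chosen purely for illustration

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the shared polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    # Addition of secrets can be done locally by adding shares point-wise,
    # as in the linear-operation protocols mentioned above.
    s1, s2 = 41, 1_000_000
    sh1, sh2 = share(s1, t=2, n=5), share(s2, t=2, n=5)
    summed = [(x, (y1 + y2) % P) for (x, y1), (_, y2) in zip(sh1, sh2)]
    print(reconstruct(summed[:3]) == (s1 + s2) % P)   # True with any 3 shares
```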
Practical MPC systems. Many advances have been made on 2PC and MPC systems in recent years. Yao-based protocols. One of the main issues when working with Yao-based protocols is that the function to be securely evaluated (which could be an arbitrary program) must be represented as a circuit, usually consisting of XOR and AND gates. Since most real-world programs contain loops and complex data structures, this is a highly non-trivial task. The Fairplay system was the first tool designed to tackle this problem. Fairplay comprises two main components. The first of these is a compiler enabling users to write programs in a simple high-level language, and to output these programs in a Boolean circuit representation. The second component can then garble the circuit and execute a protocol to securely evaluate the garbled circuit. As well as two-party computation based on Yao's protocol, Fairplay can also carry out multi-party protocols. This is done using the BMR protocol, which extends Yao's passively secure protocol to the active case. In the years following the introduction of Fairplay, many improvements to Yao's basic protocol have been created, in the form of both efficiency improvements and techniques for active security. These include techniques such as the free XOR method, which allows for much simpler evaluation of XOR gates, and garbled row reduction, reducing the size of garbled tables with two inputs by 25%. The approach that so far seems to be the most fruitful in obtaining active security comes from a combination of the garbling technique and the "cut-and-choose" paradigm. This combination seems to render more efficient constructions. To avoid the aforementioned problems with respect to dishonest behaviour, many garblings of the same circuit are sent from the constructor to the evaluator. Then around half of them (depending on the specific protocol) are opened to check consistency, and if these are consistent, then a vast majority of the unopened ones are correct with high probability. The output is the majority vote of all the evaluations. The majority output is needed because, if there is disagreement on the outputs, the receiver knows the sender is cheating, but he cannot complain, as otherwise this would leak information on his input. This approach for active security was initiated by Lindell and Pinkas. The technique was implemented by Pinkas et al. in 2009, providing the first actively secure two-party evaluation of the Advanced Encryption Standard (AES) circuit, regarded as a highly complex (consisting of around 30,000 AND and XOR gates), non-trivial function (also with some potential applications), taking around 20 minutes to compute and requiring 160 circuits to obtain a formula_6 cheating probability. As many circuits are evaluated, the parties (including the receiver) need to commit to their inputs to ensure that in all the iterations the same values are used. The experiments reported by Pinkas et al. show that the bottleneck of the protocol lies in the consistency checks. They had to send over the net about 6,553,600 commitments to various values to evaluate the AES circuit. In recent results the efficiency of actively secure Yao-based implementations was improved even further, requiring only 40 circuits, and a much smaller number of commitments, to obtain a formula_6 cheating probability. The improvements come from new methodologies for performing cut-and-choose on the transmitted circuits.
More recently, there has been a focus on highly parallel implementations based on garbled circuits, designed to be run on CPUs with many cores. Kreuter et al. describe an implementation running on 512 cores of a powerful cluster computer. Using these resources they could evaluate the 4095-bit edit distance function, whose circuit comprises almost 6 billion gates. To accomplish this they developed a custom circuit compiler, better optimized than Fairplay's, and several new optimizations such as pipelining, whereby transmission of the garbled circuit across the network begins while the rest of the circuit is still being generated. The time to compute AES was reduced to 1.4 seconds per block in the active case, using a 512-node cluster machine, and 115 seconds using one node. Shelat and Shen improve this, using commodity hardware, to 0.52 seconds per block. The same paper reports on a throughput of 21 blocks per second, but with a latency of 48 seconds per block. Meanwhile, another group of researchers has investigated using consumer-grade GPUs to achieve similar levels of parallelism. They utilize oblivious transfer extensions and some other novel techniques to design their GPU-specific protocol. This approach seems to achieve comparable efficiency to the cluster computing implementation, using a similar number of cores. However, the authors only report on an implementation of the AES circuit, which has around 50,000 gates. On the other hand, the hardware required here is far more accessible, as similar devices may already be found in many people's desktop computers or games consoles. The authors obtain a timing of 2.7 seconds per AES block on a standard desktop, with a standard GPU. If they allow security to decrease to something akin to covert security, they obtain a run time of 0.30 seconds per AES block. In the passive security case there are reports of processing circuits with 250 million gates at a rate of 75 million gates per second. Implementations of secure multi-party computation data analyses. One of the primary applications of secure multi-party computation is allowing analysis of data that is held by multiple parties, or blind analysis of data by third parties without allowing the data custodian to understand the kind of data analysis being performed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "t < n/2" }, { "math_id": 3, "text": "t < \\frac{n}{2}" }, { "math_id": 4, "text": "t < \\frac{n}{3}" }, { "math_id": 5, "text": "t < n" }, { "math_id": 6, "text": "2^{-40}" } ]
https://en.wikipedia.org/wiki?curid=646233
64643407
Section formula
Geometric formula for finding the ratio in which a line segment is divided by a point In coordinate geometry, the section formula is a formula used to find the ratio in which a line segment is divided by a point internally or externally. It is used to find the centroid, incenter and excenters of a triangle. In physics, it is used to find the center of mass of systems, equilibrium points, etc. Internal Divisions. If point P (lying on AB) divides the line segment AB joining the points formula_0 and formula_1 in the ratio m:n, then formula_2 The ratio m:n can also be written as formula_3, or formula_4, where formula_5. So, the coordinates of point formula_6 dividing the line segment joining the points formula_0 and formula_1 are: formula_7 formula_8 formula_9 Similarly, the ratio can also be written as formula_10, and the coordinates of P are formula_11. Proof. Triangles formula_12. formula_13 External Divisions. If a point P (lying on the extension of AB) divides AB in the ratio m:n then formula_14 Proof. Triangles formula_15 (Let C and D be two points where A &amp; P and B &amp; P intersect respectively). Therefore ∠ACP = ∠BDP formula_16 Midpoint formula. The midpoint of a line segment divides it internally in the ratio formula_17. Applying the section formula for internal division: formula_18 Derivation. formula_19 formula_20 formula_21 Centroid. The centroid of a triangle is the intersection of the medians and divides each median in the ratio formula_22. Let the vertices of the triangle be formula_23, formula_24 and formula_25. So, a median from point A will intersect BC at formula_26. Using the section formula, the centroid becomes: formula_27 In 3-Dimensions. Let A and B be two points with Cartesian coordinates "(x1, y1, z1)" and "(x2, y2, z2)", and let P be a point on the line through A and B. If formula_28, then the section formula gives the coordinates of P as formula_29 If, instead, P is a point on the line such that formula_30, its coordinates are formula_31. In vectors. The position vector of a point P dividing the line segment joining the points A and B, whose position vectors are formula_32 and formula_33, in the ratio formula_34 is given by formula_35 for internal division and by formula_36 for external division. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
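A short numeric illustration of the section formulas above, using hypothetical points, is sketched below; it checks internal and external division, the midpoint, and the centroid obtained from the 2:1 division of a median.

```python
# Worked example of the section formula (illustrative values only).
def internal(a, b, m, n):
    return ((m * b[0] + n * a[0]) / (m + n), (m * b[1] + n * a[1]) / (m + n))

def external(a, b, m, n):
    return ((m * b[0] - n * a[0]) / (m - n), (m * b[1] - n * a[1]) / (m - n))

A, B = (1.0, 2.0), (7.0, 8.0)
print(internal(A, B, 2, 1))   # (5.0, 6.0): two thirds of the way from A to B
print(external(A, B, 3, 1))   # (10.0, 11.0): beyond B on the line AB
print(internal(A, B, 1, 1))   # (4.0, 5.0): the midpoint

# Centroid of the triangle A, B, C via the 2:1 division of the median from A.
C = (4.0, -1.0)
mid_BC = internal(B, C, 1, 1)
print(internal(A, mid_BC, 2, 1))   # (4.0, 3.0) == ((1+7+4)/3, (2+8-1)/3)
```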
[ { "math_id": 0, "text": "\\mathrm{A}(x_1,y_1)" }, { "math_id": 1, "text": "\\mathrm{B}(x_2,y_2)" }, { "math_id": 2, "text": " P = \\left(\\frac{mx_2 + nx_1}{m + n},\\frac{my_2 + ny_1}{m + n}\\right)" }, { "math_id": 3, "text": "m/n:1" }, { "math_id": 4, "text": "k:1" }, { "math_id": 5, "text": "k=m/n" }, { "math_id": 6, "text": "P" }, { "math_id": 7, "text": "\\left(\\frac{mx_2 + nx_1}{m + n}, \\frac{my_2 + ny_1}{m + n}\\right) " }, { "math_id": 8, "text": "=\\left(\\frac{\\frac{m}{n}x_2 +x_1}{\\frac{m}{n}+1},\\frac{\\frac{m}{n}y_2 +y_1}{\\frac{m}{n}+1} \\right )" }, { "math_id": 9, "text": "=\\left ( \\frac{kx_2 +x_1}{k +1},\\frac{ky_2 + y_1}{k +1} \\right )" }, { "math_id": 10, "text": "k:(1-k)" }, { "math_id": 11, "text": "((1-k)x_1 + kx_2, (1-k)y_1 + ky_2) " }, { "math_id": 12, "text": "PAQ\\sim BPC " }, { "math_id": 13, "text": "\\begin{align}\n\\frac{AP}{BP}=\\frac{AQ}{CP}=\\frac{PQ}{BC}\\\\\n\\frac{m}{n}=\\frac{x-x_1}{x_2-x}=\\frac{y-y_1}{y_2-y}\\\\\nmx_2-mx=nx-nx_1,my_2-my=ny-ny_1\\\\\nmx+nx=mx_2+nx_1, my+ny=my_2+ny_1\\\\\n(m+n)x=mx_2+nx_1, (m+n)y=my_2+ny_1\\\\\nx=\\frac{mx_2 + nx_1}{m + n}, y=\\frac{my_2 + ny_1}{m + n}\\\\\n\\end{align}\n" }, { "math_id": 14, "text": "P = \\left(\\dfrac{mx_2 - nx_1}{m - n}, \\dfrac{my_2 - ny_1}{m - n}\\right)" }, { "math_id": 15, "text": "PAC\\sim PBD " }, { "math_id": 16, "text": "\\begin{align}\n\\frac{AB}{BP}=\\frac{AC}{BD}=\\frac{PC}{PD}\\\\\n\\frac{m}{n}=\\frac{x-x_1}{x-x_2}=\\frac{y-y_1}{y-y_2}\\\\\nmx-mx_2=nx-nx_1,my-my_2=ny-ny_1\\\\\nmx-nx=mx_2-nx_1, my-ny=my_2-ny_1\\\\\n(m-n)x=mx_2-nx_1, (m-n)y=my_2-ny_1\\\\\nx=\\frac{mx_2 - nx_1}{m - n}, y=\\frac{my_2 - ny_1}{m - n}\\\\\n\\end{align}\n" }, { "math_id": 17, "text": "1:1" }, { "math_id": 18, "text": "P = \\left(\\dfrac{x_1 + x_2}{2}, \\dfrac{y_1 + y_2}{2} \\right)" }, { "math_id": 19, "text": "P = \\left(\\dfrac{mx_2 + nx_1}{m + n}, \\dfrac{my_2 + ny_1}{m + n}\\right) " }, { "math_id": 20, "text": "= \\left ( \\frac{1\\cdot x_1 + 1\\cdot x_2}{1+1},\\frac{1 \\cdot y_1 + 1\\cdot y_2}{1+1} \\right )" }, { "math_id": 21, "text": "=\\left(\\dfrac{x_1 + x_2}{2}, \\dfrac{y_1 + y_2}{2} \\right)" }, { "math_id": 22, "text": "2:1" }, { "math_id": 23, "text": "A(x_1, y_1)" }, { "math_id": 24, "text": "B(x_2, y_2)" }, { "math_id": 25, "text": "C(x_3, y_3)" }, { "math_id": 26, "text": "\\left(\\frac{x_2 + x_3}{2}, \\frac{y_2 + y_3}{2}\\right)" }, { "math_id": 27, "text": "\n\\left(\\frac{x_1 + x_2 + x_3}{3},\\frac{y_1 + y_2 + y_3}{3} \\right)" }, { "math_id": 28, "text": "AP:PB =m:n" }, { "math_id": 29, "text": "\\left ( \\frac{mx_2 + nx_1}{m +n} ,\\frac{my_2 + ny_1}{m+n}, \\frac{mz_2 + nz_1}{m+n} \\right )" }, { "math_id": 30, "text": "AP:PB = k:1-k" }, { "math_id": 31, "text": "((1-k)x_1 + kx_2, (1-k)y_1 + ky_2, (1-k)z_1 + kz_2)" }, { "math_id": 32, "text": "\\vec{a}" }, { "math_id": 33, "text": "\\vec{b}" }, { "math_id": 34, "text": "m:n" }, { "math_id": 35, "text": "\\frac{n\\vec{a} + m\\vec{b}}{m+n}" }, { "math_id": 36, "text": "\\frac{m\\vec{b} - n\\vec{a}}{m-n}" } ]
https://en.wikipedia.org/wiki?curid=64643407
6464661
Peptide library
A peptide library is a tool for studying proteins. Peptide libraries typically contain a large number of peptides that have a systematic combination of amino acids. Usually, solid phase synthesis, e.g. resin as a flat surface or beads, is used for peptide library generation. Peptide libraries are a popular tool for experiments in drug design, protein–protein interactions, and other biochemical and pharmaceutical applications. Synthetic peptide libraries are synthesized without utilizing biological systems such as phage or in vitro translation. There are at least five subtypes of synthetic peptide libraries that differ from each other by the design of the library and/or the method used for the synthesis of the library. Solid phase peptide synthesis is limited to a peptide chain length of approximately 70 amino acids and is generally unsuitable for the study of larger proteins. Many libraries utilize peptide chains much shorter than 70 amino acids. For 20 encoded amino acids at maximally 70 positions, this results in an upper limit of 20^70, or more than 10 quindecillion (about 1×10^91), possible combinations, not accounting for the potential use of amino acids with post-translational modifications or amino acids not encoded in the genetic code, such as selenocysteine and pyrrolysine. Peptide libraries generally encompass only a fraction of this diversity, selected depending on the needs of the experiment, for instance by keeping some amino acids constant at certain positions. Large random peptide libraries are often used for the synthesis of certain peptide molecules, such as ultra-large chemical libraries for the discovery of high-affinity peptide binders. Any increase in the library size severely affects parameters such as the synthesis scale, the number of library members, the sequence deconvolution and peptide structure elucidation. To mitigate these technical challenges, an algorithm-supported approach to peptide library design may use molecular mass and amino acid diversity to simplify the laborious permutation identification in complex mixtures when using mass spectrometry. This approach is used to avoid mass redundancy. Biological reagent companies, such as Pepscan, ProteoGenix, Mimotopes, GenScript and many others, manufacture customized peptide libraries. Example. A peptide chain of 10 residues in length is used in native chemical ligation with a larger recombinantly expressed protein. With 7 possibilities at Residue 2 and 20 possibilities at Residue 3, the total would be formula_0, or 140 different polypeptides in the library. This peptide library would be useful for analyzing the effect of the post-translational modification acetylation on lysine, which neutralizes the positive charge. Having a library of different peptides at residues 2 and 3 would let the investigator see whether some change in chemical properties in the N-terminal tail of the ligated protein makes the protein more useful or useful in a different way. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
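The combinatorics quoted above can be checked with a couple of lines of code; the snippet below simply reproduces the 20^70 upper bound and the 7 × 20 = 140 example library size.

```python
# Quick check of the library-size arithmetic discussed above.
full_space = 20 ** 70
print(f"{full_space:.2e}")    # about 1.18e+91, i.e. more than 10 quindecillion
print(7 * 20)                 # 140 peptides in the worked example
```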
[ { "math_id": 0, "text": "20\\times7" } ]
https://en.wikipedia.org/wiki?curid=6464661
64649083
Convex drawing
Planar graph with convex polygon faces In graph drawing, a convex drawing of a planar graph is a drawing that represents the vertices of the graph as points in the Euclidean plane and the edges as straight line segments, in such a way that all of the faces of the drawing (including the outer face) have a convex boundary. The boundary of a face may pass straight through one of the vertices of the graph without turning; a strictly convex drawing asks in addition that the face boundary turns at each vertex. That is, in a strictly convex drawing, each vertex of the graph is also a vertex of each convex polygon describing the shape of each incident face. Every polyhedral graph has a strictly convex drawing, for instance obtained as the Schlegel diagram of a convex polyhedron representing the graph. For these graphs, a convex (but not necessarily strictly convex) drawing can be found within a grid whose length on each side is linear in the number of vertices of the graph, in linear time. However, strictly convex drawings may require larger grids; for instance, for any polyhedron such as a pyramid in which one face has a linear number of vertices, a strictly convex drawing of its graph requires a grid of cubic area. A linear-time algorithm can find strictly convex drawings of polyhedral graphs in a grid whose length on each side is quadratic. Other graphs that are not polyhedral can also have convex drawings, or strictly convex drawings. Some graphs, such as the complete bipartite graph formula_0, have convex drawings but not strictly convex drawings. A combinatorial characterization for the graphs with convex drawings is known, and they can be recognized in linear time, but the grid dimensions needed for their drawings and an efficient algorithm for constructing small convex grid drawings of these graphs are not known in all cases. Convex drawings should be distinguished from convex embeddings, in which each vertex is required to lie within the convex hull of its neighboring vertices. Convex embeddings can exist in dimensions other than two, do not require their graph to be planar, and even for planar embeddings of planar graphs do not necessarily force the outer face to be convex. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K_{2,3}" } ]
https://en.wikipedia.org/wiki?curid=64649083
64651350
Nautyca
Jeffery Kofi Gordor, better known as Nautyca, is a Ghanaian singer-songwriter. Early life and career. Nautyca was born on 6 December 1992 in Anloga, Volta Region of Ghana, and grew up in Tema. He started as a rapper and later developed into highlife singing. His debut single "Social Media", which featured Sarkodie, became a hit a few days after release. Discography. Singles formula_0 Social Media (2019) formula_1 Dane (2020) Awards and nominations. Nautyca was crowned the Rising Artist of the Year 2019 at the Youth Excellence Awards (YEA). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\bull" }, { "math_id": 1, "text": "\\bullet" } ]
https://en.wikipedia.org/wiki?curid=64651350
64656
Hue
Property of a color In color theory, hue is one of the main properties (called color appearance parameters) of a color, defined technically in the CIECAM02 model as "the degree to which a stimulus can be described as similar to or different from stimuli that are described as red, orange, yellow, green, blue, violet," within certain theories of color vision. Hue can typically be represented quantitatively by a single number, often corresponding to an angular position around a central or neutral point or axis on a color space coordinate diagram (such as a chromaticity diagram) or color wheel, or by its dominant wavelength or by that of its complementary color. The other color appearance parameters are colorfulness, saturation (also known as intensity or chroma), lightness, and brightness. Usually, colors with the same hue are distinguished with adjectives referring to their lightness or colorfulness - for example: "light blue", "pastel blue", "vivid blue", and "cobalt blue". Exceptions include brown, which is a dark orange. In painting, a hue is a "pure" pigment—one without tint or shade (added white or black pigment, respectively). The human brain first processes hues in areas in the extended V4 called globs. Deriving a hue. The concept of a color system with a hue was explored as early as 1830 with Philipp Otto Runge's color sphere. The Munsell color system from the 1930s was a great step forward, as it was realized that perceptual uniformity means the color space can no longer be a sphere. As a convention, the hue for red is set to 0° for most color spaces with a hue. Opponent color spaces. In opponent color spaces in which two of the axes are perceptually orthogonal to lightness, such as the CIE 1976 ("L"*, "a"*, "b"*) (CIELAB) and 1976 ("L"*, "u"*, "v"*) (CIELUV) color spaces, hue may be computed together with chroma by converting these coordinates from rectangular form to polar form. Hue is the angular component of the polar representation, while chroma is the radial component. Specifically, in CIELAB formula_0 while, analogously, in CIELUV formula_1 where, atan2 is a two-argument inverse tangent. Defining hue in terms of RGB. Preucil describes a color hexagon, similar to a trilinear plot described by Evans, Hanson, and Brewer, which may be used to compute hue from RGB. To place red at 0°, green at 120°, and blue at 240°, formula_2 Equivalently, one may solve formula_3 Preucil used a polar plot, which he termed a color circle. Using R, G, and B, one may compute hue angle using the following scheme: determine which of the six possible orderings of R, G, and B prevail, then apply the formula given in the table below. In each case the formula contains the fraction formula_4, where "H" is the highest of R, G, and B; "L" is the lowest, and "M" is the mid one between the other two. This is referred to as the "Preucil hue error" and was used in the computation of mask strength in photomechanical color reproduction. Hue angles computed for the Preucil circle agree with the hue angle computed for the Preucil hexagon at integer multiples of 30° (red, yellow, green, cyan, blue, magenta, and the colors midway between contiguous pairs) and differ by approximately 1.2° at odd integer multiples of 15° (based on the circle formula), the maximal divergence between the two. The process of converting an RGB color into an HSL color space or HSV color space is usually based on a 6-piece piecewise mapping, treating the HSV cone as a hexacone, or the HSL double cone as a double hexacone. 
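As an illustration, the following minimal sketch computes hue from RGB values in [0, 1] in two ways: the color-hexagon atan2 formula given above, and a standard six-piece max/min formulation of the kind used for HSL/HSV, built around the fraction (M − L)/(H − L). The piecewise function is a common modern formulation assumed here for illustration, not necessarily Preucil's exact table.

```python
# Hue from RGB: hexagon (atan2) formula vs. a standard piecewise mapping.
import math

def hue_hexagon(r, g, b):
    """Hue angle in degrees from the color-hexagon (atan2) formula."""
    h = math.degrees(math.atan2(math.sqrt(3) * (g - b), 2 * r - g - b))
    return h % 360

def hue_piecewise(r, g, b):
    """Hue angle in degrees from the six-piece max/min formulation."""
    hi, lo = max(r, g, b), min(r, g, b)
    if hi == lo:
        return None                  # achromatic: hue is undefined
    d = hi - lo
    if hi == r:
        h = (g - b) / d % 6
    elif hi == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return 60 * h

for rgb in [(1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]:
    print(rgb, hue_hexagon(*rgb), hue_piecewise(*rgb))
# Both agree at these primaries and secondaries: 0, 60, 120, 180, 240 degrees.
```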
The formulae used are the piecewise ones described above. One might notice that the colors around the HSL/HSV hue "circle" do not all appear to be of the same brightness. This is a known issue of this RGB-based derivation of hue. Usage in art. Manufacturers of pigments use the word hue, for example, "cadmium yellow (hue)", to indicate that the original pigmentation ingredient, often toxic, has been replaced by safer (or cheaper) alternatives whilst retaining the hue of the original. Replacements are often used for chromium, cadmium and alizarin. Hue vs. dominant wavelength. Dominant wavelength (or sometimes equivalent wavelength) is a physical analog to the perceptual attribute hue. On a chromaticity diagram, a line is drawn from a white point through the coordinates of the color in question, until it intersects the spectral locus. The wavelength at which the line intersects the spectral locus is identified as the color's dominant wavelength if the point is on the same side of the white point as the spectral locus, and as the color's complementary wavelength if the point is on the opposite side. Hue difference notation. There are two main ways in which hue difference is quantified. The first is the simple difference between the two hue angles. The symbol for this expression of hue difference is formula_5 in CIELAB and formula_6 in CIELUV. The other is computed as the residual total color difference after lightness and chroma differences have been accounted for; its symbol is formula_7 in CIELAB and formula_8 in CIELUV. Names and other notations. There exists some correspondence, more or less precise, between hue values and color terms (names). One approach in color science is to use traditional color terms but try to give them more precise definitions. See spectral color#Spectral color terms for names of highly saturated colors with hues from ≈ 0° (red) up to ≈ 275° (violet), and line of purples#Table of highly-saturated purple colors for color terms of the remaining part of the color wheel. An alternative approach is to use a systematic notation. It can be a standard angle notation for a certain color model, such as HSL/HSV mentioned above, CIELUV, or CIECAM02. Alphanumeric notations such as those of the Munsell color system, NCS, and the Pantone Matching System are also used. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "h_{ab} = \\mathrm{atan2}(b^*, a^*)," }, { "math_id": 1, "text": "h_{uv} = \\mathrm{atan2}(v^*, u^*) = \\mathrm{atan2}(v', u')," }, { "math_id": 2, "text": " h_{rgb} = \\mathrm{atan2}\\left( \\sqrt{3} \\cdot (G - B), 2 \\cdot R - G - B \\right). " }, { "math_id": 3, "text": " \\tan( h_{rgb}) = \\frac{\\sqrt{3}\\cdot (G - B)}{2\\cdot R - G - B}. " }, { "math_id": 4, "text": "\\frac{M - L}{H - L}" }, { "math_id": 5, "text": "\\Delta h_{ab}" }, { "math_id": 6, "text": "\\Delta h_{uv}" }, { "math_id": 7, "text": "\\Delta H^*_{ab}" }, { "math_id": 8, "text": "\\Delta H^*_{uv}" } ]
https://en.wikipedia.org/wiki?curid=64656
64656081
Convex embedding
In geometric graph theory, a convex embedding of a graph is an embedding of the graph into a Euclidean space, with its vertices represented as points and its edges as line segments, so that all of the vertices outside a specified subset belong to the convex hull of their neighbors. More precisely, if formula_0 is a subset of the vertices of the graph, then a convex formula_0-embedding embeds the graph in such a way that every vertex either belongs to formula_0 or is placed within the convex hull of its neighbors. A convex embedding into formula_1-dimensional Euclidean space is said to be in general position if every subset formula_2 of its vertices spans a subspace of dimension formula_3. Convex embeddings were introduced by W. T. Tutte in 1963. Tutte showed that if the outer face formula_4 of a planar graph is fixed to the shape of a given convex polygon in the plane, and the remaining vertices are placed by solving a system of linear equations describing the behavior of ideal springs on the edges of the graph, then the result will be a convex formula_4-embedding. More strongly, every face of an embedding constructed in this way will be a convex polygon, resulting in a convex drawing of the graph. Beyond planarity, convex embeddings gained interest from a 1988 result of Nati Linial, László Lovász, and Avi Wigderson that a graph is k-vertex-connected if and only if it has a formula_5-dimensional convex formula_2-embedding in general position, for some formula_2 of formula_6 of its vertices, and that if it is k-vertex-connected then such an embedding can be constructed in polynomial time by choosing formula_2 to be any subset of formula_6 vertices, and solving Tutte's system of linear equations. One-dimensional convex embeddings (in general position), for a specified set of two vertices, are equivalent to bipolar orientations of the given graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
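A minimal sketch of Tutte's spring method mentioned above is given below, assuming NumPy is available: the outer-face vertices are pinned to a convex polygon and every remaining vertex is placed at the average of its neighbours by solving one linear system, which is the equilibrium condition for ideal unit-strength springs on the edges. The octahedron used as the example graph is an assumption chosen only for illustration.

```python
# Tutte-style spring embedding (illustrative sketch).
import math
import numpy as np

def tutte_embedding(adj, outer):
    n = len(adj)
    pos = np.zeros((n, 2))
    fixed = set(outer)
    # Pin the chosen outer face to a regular convex polygon.
    for k, v in enumerate(outer):
        ang = 2 * math.pi * k / len(outer)
        pos[v] = (math.cos(ang), math.sin(ang))
    free = [v for v in range(n) if v not in fixed]
    idx = {v: i for i, v in enumerate(free)}
    A = np.zeros((len(free), len(free)))
    b = np.zeros((len(free), 2))
    for v in free:
        A[idx[v], idx[v]] = len(adj[v])
        for w in adj[v]:
            if w in fixed:
                b[idx[v]] += pos[w]          # known neighbour positions
            else:
                A[idx[v], idx[w]] -= 1.0     # unknown neighbour positions
    pos[free] = np.linalg.solve(A, b)        # each free vertex = neighbour mean
    return pos

# Example: the octahedron graph (planar and 3-connected), outer face (0, 1, 2).
adj = {0: [1, 2, 3, 4], 1: [0, 2, 4, 5], 2: [0, 1, 3, 5],
       3: [0, 2, 4, 5], 4: [0, 1, 3, 5], 5: [1, 2, 3, 4]}
print(tutte_embedding(adj, outer=[0, 1, 2]))
```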
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "\\min(d,|S|-1)" }, { "math_id": 4, "text": "F" }, { "math_id": 5, "text": "(k-1)" }, { "math_id": 6, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=64656081
64657822
Descartes on Polyhedra
Book by Pasquale Joseph Federico Descartes on Polyhedra: A Study of the "De solidorum elementis" is a book in the history of mathematics, concerning the work of René Descartes on polyhedra. Central to the book is the disputed priority for Euler's polyhedral formula between Leonhard Euler, who published an explicit version of the formula, and Descartes, whose "De solidorum elementis" includes a result from which the formula is easily derived. "Descartes on Polyhedra" was written by Pasquale Joseph Federico (1902–1982), and published posthumously by Springer-Verlag in 1982, with the assistance of Federico's widow Bianca M. Federico, as volume 4 of their book series Sources in the History of Mathematics and Physical Sciences. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Topics. The original Latin manuscript of "De solidorum elementis" was written circa 1630 by Descartes; reviewer Marjorie Senechal calls it "the first general treatment of polyhedra", Descartes' only work in this area, and unfinished, with its statements disordered and some incorrect. It turned up in Stockholm in Descartes' estate after his death in 1650, was soaked for three days in the Seine when the ship carrying it back to Paris was wrecked, and survived long enough for Gottfried Wilhelm Leibniz to copy it in 1676 before disappearing for good. Leibniz's copy, also lost, was rediscovered in Hannover around 1860. The first part of "Descartes on Polyhedra" relates this history, sketches the biography of Descartes, provides an eleven-page facsimile reproduction of Leibniz's copy, and gives a transcription, English translation, and commentary on this text, including explanations of some of its notation. In "De solidorum elementis", Descartes states (without proof) Descartes' theorem on total angular defect, a discrete version of the Gauss–Bonnet theorem according to which the angular defects of the vertices of a convex polyhedron (the amount by which the angles at each vertex fall short of the formula_0 angle surrounding any point on a flat plane) always sum to exactly formula_1. Descartes used this theorem to prove that the five Platonic solids are the only possible regular polyhedra. It is also possible to derive Euler's formula formula_2, relating the numbers of vertices, edges, and faces of a convex polyhedron, from Descartes' theorem, and "De solidorum elementis" also includes a formula more closely resembling Euler's, relating the number of vertices, faces, and plane angles of a polyhedron. Since the rediscovery of Descartes' manuscript, many scholars have argued that the credit for Euler's formula should go to Descartes rather than to Leonhard Euler, who published the formula (with an incorrect proof) in 1752. The second part of "Descartes on Polyhedra" reviews this debate, and compares the reasoning of Descartes and Euler on these topics. Ultimately, the book concludes that Descartes probably did not discover Euler's formula, and reviewers Senechal and H. S. M. Coxeter agree, writing that Descartes did not have a concept for the edges of a polyhedron, and without that could not have formulated Euler's formula itself. Subsequent to this work, it was discovered that Francesco Maurolico had provided a more direct and much earlier predecessor to the work of Euler: an observation in 1537 (without proof of its more general applicability) that Euler's formula itself holds for the five Platonic solids.
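Both statements are easy to check numerically for the Platonic solids; the short sketch below verifies that the total angular defect is formula_1 (that is, 4π) and that formula_2 holds in each case, using the standard vertex, edge and face counts.

```python
# Numeric check of Descartes' total angular defect and Euler's formula.
import math

# name: (V, E, F, faces meeting at each vertex, sides of each face)
solids = {
    "tetrahedron":  (4, 6, 4, 3, 3),
    "cube":         (8, 12, 6, 3, 4),
    "octahedron":   (6, 12, 8, 4, 3),
    "dodecahedron": (20, 30, 12, 3, 5),
    "icosahedron":  (12, 30, 20, 5, 3),
}

for name, (V, E, F, q, p) in solids.items():
    face_angle = math.pi * (p - 2) / p      # interior angle of a regular p-gon
    defect = 2 * math.pi - q * face_angle   # angular defect at a single vertex
    print(name, V - E + F == 2, math.isclose(V * defect, 4 * math.pi))
```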
The second part of Descartes' book, and the third part of "Descartes on Polyhedra", connects the theory of polyhedra to number theory. It concerns figurate numbers defined by Descartes from polyhedra, generalizing the classical Greek definitions of figurate numbers such as the square numbers and triangular numbers from two-dimensional polygons. In this part Descartes uses both the Platonic solids and some of the semiregular polyhedra, but not the snub polyhedra. Audience and reception. Reviewer F. A. Sherk, after noting the obvious relevance of "Descartes on Polyhedra" to historians of mathematics, recommends it as well to geometers and to amateur mathematicians. He writes that it provides a good introduction to some important topics in the mathematics of polyhedra, makes an interesting connection to number theory, and is easily readable without much background knowledge. Marjorie Senechal points out that, beyond the question of priority between Descartes and Euler, the book is also useful for illuminating what was known of geometry more generally at the time of Descartes. More briefly, reviewer L. Führer calls the book beautiful, readable, and lively, but expensive. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2\\pi" }, { "math_id": 1, "text": "4\\pi" }, { "math_id": 2, "text": "V-E+F=2" } ]
https://en.wikipedia.org/wiki?curid=64657822
64659061
Beryl May Dent
English mathematical physicist (1900–1977) Beryl May Dent (10 May 1900 – 9 August 1977) was an English mathematical physicist, technical librarian, and a programmer of early analogue and digital computers to solve electrical engineering problems. She was born in Chippenham, Wiltshire, the eldest daughter of schoolteachers. The family left Chippenham in 1901, after her father became head teacher of the then recently established Warminster County School. In 1923, she graduated from the University of Bristol with First Class Honours in applied mathematics. She was awarded the Ashworth Hallett scholarship by the university and was accepted as a postgraduate student at Newnham College, Cambridge. She returned to Bristol in 1925, after being appointed a researcher in the Physics Department at the University of Bristol, with her salary being paid by the Department of Scientific and Industrial Research. In 1927, John Lennard-Jones was appointed Professor of Theoretical physics, a chair being created for him, with Dent becoming his research assistant in theoretical physics. Lennard‑Jones pioneered the theory of interatomic and intermolecular forces at Bristol and she became one of his first collaborators. They published six papers together from 1926 to 1928, dealing with the forces between atoms and ions, that were to become the foundation of her master's thesis. Later work has shown that the results they obtained had direct application to atomic force microscopy by predicting that non-contact imaging is possible only at small tip-sample separations. In 1930, she joined Metropolitan-Vickers Electrical Company Ltd, Manchester, as a technical librarian for the scientific and technical staff of the research department. She became active in the Association of Special Libraries and Information Bureaux (ASLIB) and was honorary secretary to the founding committee for the Lancashire and Cheshire branch of the association. She served on various ASLIB committees and made conference presentations detailing different aspects of the company's library and information service. She continued to publish scientific papers, contributing numerical methods for solving differential equations by the use of the differential analyser that was built for the University of Manchester and Douglas Hartree. She was the first to develop a detailed reduced major axis method for the best fit of a series of data points. Later in her career she became leader of the computation section at Metropolitan-Vickers, and then a supervisor in the research department for the section that was investigating semiconducting materials. She joined the Women's Engineering Society and published papers on the application of digital computers to electrical design. She retired in 1960, with Isabel Hardwich, later a fellow and president of the Women's Engineering Society, replacing her as section leader for the women in the research department. In 1962, she moved with her mother and sister to Sompting, West Sussex, and died there in 1977. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Early life. Beryl May was born on (1900--)10 1900, at Penley Villa, Park Lane, Chippenham, Wiltshire, the eldest daughter of Agnes Dent (1869–), née Thornley, and Eustace Edward (1868–). She was baptised at StPaul's, Chippenham, on 8 June 1900.1 They had married at St Mary's Church, Goosnargh, near Preston, Lancashire, on 27July 1898. Her mother was educated at the Harris Institute, Preston, passing examinations in science and art. 
She was a teacher at Attercliffe School, in northeast Sheffield, before moving to Goosnargh School, near her hometown of Preston, where her elder brother and sister, John William and Mary Ann Thornley, were the head teachers. In March 1894, she had applied for the headship at Fairfield School, Cockermouth, making the shortlist, but the board decided to appoint a local candidate. On 18March 1889, Dent's father was appointed to a teaching assistant position at Portland Road School, in Halifax, West Yorkshire, after completing a teaching apprenticeship with the school board. In the same year, Florence Emily Dent, his elder sister, was appointed head teacher at West Vale girls' school, Stainland Road, Greetland, moving from the Higher Board School at Halifax. In August 1889, he obtained a first class pass in mathematics from the Halifax Mechanics' Institute. He enrolled on a degree course at University College, Aberystwyth, in the Education Day Training College. In January 1894, he was awarded a first by Aberystwyth, and a first in the external University of London examinations. His first teaching post was at Coopers' Company Grammar School, Bow, London, before moving to Chippenham, where he was a senior assistant teacher at the Chippenham County School. In October 1901, Dent's father left Chippenham to become head teacher of the then recently established Warminster County School, that adjoined the Athenaeum Theatre in Warminster. The family moved to Boreham Road, Warminster, where houses were built in the early 19th century. In April 1907, they moved to 22Portway, Warminster, situated a short distance from the County School and the Athenaeum.264 He was elected chair of the Warminster Urban District Council from 1920 to 1922, and remained as head teacher of the County School until his retirement in August 1929. Dent's father was also a regular cast member of the Warminster Operatic Society at the Athenaeum and other venues. Dent and her younger sister, Florence Mary, would often appear with him on stage in such operettas as "Snow White and the seven dwarfs" and the "Princess Ju‑Ju" ("The Golden Amulet"), a Japanese operetta in three acts by Clementine Ward. In "Princess Ju‑Ju", she played "La La", one of the three maidens attendant on the princess, and sang the first act solo, "She must be demure". In act two of the same musical, she performed in the fan dance, "Spirits of the Night". She also acted in a scene from Tennyson's "Princess" at the County School's prize giving day on 16December 1913. Education. Warminster County School (1909–1917). From 1909, Dent was educated at Warminster County School, where her father was head teacher. At school, she was close friends with her neighbour at Portway, Evelyn Mary Day, the eldest daughter of Henry George Day, a former butler to Colonel Charles Gathorne Gathorne‑Hardy, son of Gathorne Gathorne-Hardy, 1st Earl of Cranbrook. In August 1914, she passed the University of Oxford Junior Local Examination with First Class Honours, and on the strength of her examination result, she was awarded a scholarship by the school. In 1915, she passed the senior examination with second class honours and a distinction in French, and subsequently, her scholarship was renewed. She then joined the sixth form and won the school prize for French in December 1916. In March 1918, she applied for a scholarship in mathematics from Somerville College, one of the first two women's colleges in the University of Oxford. 
She was highly commended but was not awarded a scholarship nor an exhibition. University of Bristol (1919–1923). In 1918, Dent joined the Royal Aircraft Establishment (RAE) at Farnborough, Hampshire. The First World War opened new employment opportunities for women, and RAE was one of the first military establishments to recruit women into engineering, and mathematical and computational research.11610 In the same period that Dent was at RAE, Lorna Swain, then mathematics tutor at Newnham College in the University of Cambridge, worked at the establishment on the problem of aircraft propeller vibration.84 The Treasury reduced RAE's funding after the end of the war, and consequently, the number of resources and staff available to support research fell significantly.58 In 1919, she left RAE after being accepted on to the general Bachelor of Arts (BA) degree course at the University of Bristol. In June 1920, she passed her intermediate examination in French with supplementary courses in Latin, history, and mathematics.1 In the following academic year, Dent joined the honours course in mathematics and took an intermediate examination in physics.1 After spending the summer of 1921 at her parents' home in Warminster, she returned for the start of the 1921 to 1922 academic year to find that Paul Dirac had joined the mathematics course. The course of mathematics at Bristol University normally lasted three years, but because of Dirac's previous training, the Department of Mathematics had allowed him to join in the second year. They were taught applied mathematics by Henry Ronald Hassé, the then head of the Mathematics Department, and pure mathematics by Peter Fraser. Both of them had come from Cambridge; Fraser had been appointed in 1906 to the staff of the Bristol University College, soon to become the University of Bristol, and Hassé joined him in 1919 as professor of mathematics.111 Fraser introduced them to mathematical rigour, projective geometry, and rigorous proofs in differential and integral calculus. Dirac would later say that Peter Fraser was "the best teacher he had ever had." Dent studied four courses in pure mathematics: There was a choice of specialisation in the final year; applied or pure mathematics. As the only official, registered fee-paying student, Dent had the right to choose, and she settled on applied mathematics for the final year. The department could offer only one set of lectures so Dirac also had to follow the same course. Dent studied four courses in applied mathematics: Newnham College, University of Cambridge (1923–1924). In June 1923, Dent graduated with Dirac, gaining a Bachelor of Science (BSc) degree in applied mathematics with First Class Honours. On 7July 1923, she was awarded the Ashworth Hallett scholarship by the University of Bristol and was accepted as a postgraduate student at Newnham College in the University of Cambridge. On her death in 1922, Lilias Sophia Ashworth Hallett left one thousand pounds each to the University of Bristol and Girton College, University of Cambridge, to found scholarships for women. The University of Bristol scholarship was open to women graduates of a recognised college or university, and worth £45 at the time (). She spent a year at Cambridge, leaving in 1924 without further academic qualification. Before 1948, the University of Cambridge denied women graduates a degree, although in the same year as she left Cambridge, Katharine Margaret Wilson was the first woman to be awarded a PhD by the university. Career. 
High School for Girls, Barnsley (1924–1925). Dent spent the summer of 1924 at her parents' home in Warminster, playing mixed doubles tennis in a tournament organised by the local Women's Unionist Association. In September of the same year, she was appointed an assistant teacher in mathematics at the High School for Girls, in Barnsley, Huddersfield Road, on an annual salary of £250 (). Annie Rose Nuttall, the school's head teacher, was a former student at Newnham College. In the early 1920s, women who had studied university level mathematics faced limited employment prospects, as mathematics and engineering professions, other than perhaps school teaching, were dominated by men. Dent resigned her position on 31August 1925 after being appointed a demonstrator (research assistant) in the Department of Physics at the University of Bristol, with her salary being paid by the Department of Scientific and Industrial Research, the forerunner of the Science and Engineering Research Council (SERC).107 Department of Physics, University of Bristol (1925–1929). In 1924, the University of Bristol Council had set aside a portion of a bequest from Henry Herbert Wills for the Department of Physics where Arthur Mannering Tyndall was building up a staff for teaching and research in the H. H. Wills Physics Laboratory, Royal Fort House Gardens. From August 1925, John Lennard-Jones, of Trinity College, University of Cambridge, was elected reader in mathematical physics. In March 1927, Lennard‑Jones was appointed Professor of Theoretical physics, a chair being created for him, with Dent becoming his research assistant in theoretical physics.24 Lennard‑Jones pioneered the theory of interatomic and intermolecular forces at Bristol and Dent became one of his first collaborators. Lennard‑Jones and Dent published six papers together from 1926 to 1928, dealing with the forces between atoms and ions, with the objective of calculating theoretically the properties of carbonate and nitrate crystals. Dent's thesis for her master's degree, "Some theoretical determinations of crystal structure" (1927), was the basis of the three papers that followed in 1927; with Lennard‑Jones, "Some theoretical determinations of crystal parameters. CXVI", and with Lennard‑Jones and Sydney Chapman, "Structure of carbonate crystals" and "Part II. Structure of carbonate crystals". On 28June 1927, she was awarded a MSc degree for her thesis and research work. In 1927, the physics laboratory at Bristol had a surplus of funds, and so it was decided that the funds would be used to provide more technical help. Consequently, Dent was asked to combine her research duties with the post of part-time departmental librarian, the first appointment of librarian in the Department of Physics.26 In 1928, Lennard‑Jones and Dent published two papers, "Cohesion at a crystal surface", and with Sydney Chapman, "The change in lattice spacing at a crystal boundary", that studied the force fields on a thin crystal cleavage. Around this time, quantum mechanics was developed to become the standard formulation for atomic physics. Lennard‑Jones left Bristol in 1929 to study the subject for a year as a Rockefeller Fellow at the University of Göttingen. She wrote one last paper before leaving the physics department at Bristol: "The effect of boundary distortion on the surface energy of a crystal" (1929) examined the effect of the polarisation of surface ions in decreasing the surface energy of alkali halides. 
In November 1929, she was appointed to the position of technical librarian for the scientific and technical staff in the research department at Metropolitan-Vickers, Trafford Park, Manchester.14–15 In December 1929, Dent resigned her position at Bristol and it was accepted with regret by the university council. Marjorie Josephine Littleton, the daughter of a local Bristol councillor and a graduate of Girton College, University of Cambridge, was appointed as her replacement on the 1February 1930. Littleton was later Sir Neville Mott's co-author and research assistant in the physics department.517 In 1930, Lennard‑Jones returned to Bristol, as Dean of the Faculty of Science, and introduced the new quantum theories to the Bristol group. Metropolitan-Vickers, Trafford Park (1930–1960). Metropolitan-Vickers was a British heavy industrial firm, well-known for industrial electrical equipment and generators, street lighting, electronics, steam turbines, and diesel locomotives. They built the Metrovick 950, the first commercial transistorised computer. In 1917, a Research and Education Department was established at the Trafford Park site, when the care of the library came within the remit of James George Pearce. He made the library the centre of a new "technical intelligence" section.193 In the 1920s, the post of librarian was held by Lucy Stubbs, a former assistant librarian at the University of Birmingham, and past member of the first standing committee of ASLIB.228 Stubbs did not possess scientific qualifications, maintaining that a librarian, if assisted by other technical staff, did not need to understand science or engineering.193 In 1929, James Steele Park Paton reorganised and expanded the section with Dent succeeding Stubbs as technical librarian on 6January 1930.15 She joined the scientific and technical staff as was one of only two senior women in the research department,314 and in contrast to Stubbs, was employed principally for her technical skills.193 Dent was honorary secretary to the founding committee for the ASLIB Lancashire and Cheshire branch from 1931 to 1936. In 1932, the branch had twenty-six members and had organised four meetings, including one addressed by Sir Henry Tizard, the then President of ASLIB. After the war, it formed the basis for the Northern Branch of the association.412 Technical librarianship emerged as a new scientific career in interwar Britain and rapidly became one of the few types of professional industrial employment that was routinely open to both women and men.301 By 1933, Dent reported that the Metropolitan-Vickers library had three thousand engineering volumes and around the same number in pamphlets and patent specifications. Besides covering electrical subjects, the library covered accountancy, employment questions, and subjects of interest to the sales department. It also issued a weekly bulletin, scrutinised patents, handled patents taken out by research staff, and exchanged information with associated companies. Dent continued to publish papers in applied mathematics and contribute to papers on emerging computational technologies. In "On observations of points connected by a linear relation" (1935), she developed a detailed reduced major axis method for line fitting that built on the work of Robert Adcock and Charles Kummell. 
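For context, a minimal sketch of reduced major axis (geometric mean) line fitting in its modern textbook form is shown below; it illustrates the general method rather than transcribing the 1935 paper: the slope is the ratio of the standard deviations, carrying the sign of the correlation, and the fitted line passes through the means.

```python
# Reduced major axis (geometric mean) regression, illustrative sketch.
import math

def rma_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = math.copysign(math.sqrt(syy / sxx), sxy)
    return slope, my - slope * mx            # slope and intercept

print(rma_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]))   # roughly (1.94, 0.14)
```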
In 1937, David Myers, then at the Engineering Laboratory at the University of Oxford, asked Douglas Hartree and Arthur Porter to calculate the space charge limitation of secondary current in a triode.9196 The calculations relied on some initial numerical integrations that were carried out by Dent on a differential analyser. The results corresponded closely to those obtained experimentally by Myers at Oxford.9197 Her knowledge of higher mathematics meant that she was asked to check the mathematics in papers for publication by engineers at Metropolitan-Vickers. For example, Cyril Frederick Gradwell, a graduate of Trinity College, Cambridge, asked her to scrutinise the algebraic part of his work in "The Solution of a problem in disk bending occurring in connexion with gas turbines" (1950). She would later analyse the problem of stress distribution in a thick disk based on a method devised by Philip Pollock, for Richard William Bailey, the former director of the mechanical, metallurgical, and chemical sections of the research department at Metropolitan-Vickers.16 Dent was a delegate at the fourteenth International Conference on Documentation and was invited to the Government's conference dinner on 22September 1938 at the Great Dining Hall of Christ Church, Oxford. In 1939, she was elected to the editing committee of the ASLIB book list. In 1944, she was put in charge of the women working in the research department laboratory at Metropolitan-Vickers, and in 1946, she was promoted to section leader of the new computation section. Her role would bring her into contact with Audrey Stuckes, a materials science researcher in the department, and a graduate of Newnham College, who would later head the physics department at the University of Salford. In 1953, they collaborated on an investigation into the heating effects that occur when a current is passed through a semiconductor that has no barrier layer. Dent suggested methods to solve the equations and computed the numerical integrations. In the following year, she developed the Fourier analysis in "Regenerative Deflection as a Parametrically Excited Resonance Phenomenon" (1954), that calculated the optimal radial oscillations to maintain cyclotron resonance in a synchrocyclotron. The causes of axial spreading of the charged particle beam during extraction were also analysed. Dent joined the Women's Engineering Society and published papers on the application of digital computers to electrical design. With Brian Birtwistle, she wrote programs for the Ferranti Mark 1 (Mark 1) computer at the University of Manchester, that demonstrated that high-speed digital computers could provide considerable assistance to the electrical design engineer. Birtwistle would later have an extensive career in the computer industry, working at, amongst others, Honeywell Information Systems and ADP Network Services. In 1958, she carried out computer calculations for the mechanical engineering team at the Nuclear Power Group, Radbroke Hall. Their paper outlined a procedure for calculating the theoretical deflection (bending) of a circular grid of support girders for a graphite neutron moderator in a gas-cooled reactor. A general expression was derived from the central deflection of the grid and the maximum bending moment on the central cross-beam for a range of grid diameters. In 1959, and a year from retirement, Dent modelled a proposed Zeta circuit on the Mark 1 computer, for Eric Hartill's paper on constructing a high-power pulse transformer and circuit. 
The cost of the computation was about two thousand pounds, corresponding to around eighty hours of machine time. She retired from Metropolitan-Vickers in May 1960, with Isabel Hardwich, later a fellow and president of the Women's Engineering Society, replacing her as section leader for the women in the research department. Personal life. In the 1920s, Dent was living at Clifton Hill House, the university hall of residence for women in Clifton. May Christophera Staveley was her warden and tutor at Clifton Hill House, and Dent returned to Bristol on 22 December 1934 for Staveley's funeral. Dent was a member of the Clifton Hill House Old Students Association, and secretary and treasurer of the group of former Clifton Hill House students. She would later write "I was very sorry indeed to leave Bristol and have many happy memories of my time there. I shall miss living at the [Clifton Hill House] Hall very much." In 1926, Dent was elected treasurer of the University of Bristol's Convocation, the university's alumni association. In 1927, she was one of eleven people elected to the standing committee of the Convocation. She later represented the Manchester branch of the association. Around 1926, Dent was appointed honorary secretary of the Bristol Cheeloo Association. The association's aim was to raise sufficient funds to support a chair of chemistry at Cheeloo University. In an effort to publicise the cause and raise money, she spoke to the local branch of the Women's International League in October 1928. In July 1929, in Dent's last year at Bristol, she went on holiday to North Devon with friends that included Gertrude Roxbee, known as "Rox", who had graduated with Dent in 1923 with a BSc in botany. After moving to Manchester in January 1930, Dent found shared lodgings at 10 Montrose Avenue, West Didsbury, in the same house as Roxbee, who, at that time, was a teacher at Whalley Range High School. At weekends, she would ramble to Hebden Bridge, and with Roxbee, learnt to figure skate at the Ice Palace, a former ice rink on Derby Street in Cheetham Hill. In September 1930, she returned to Bristol for the ninety-eighth conference of the British Association for the Advancement of Science (British Association), meeting her friends at an alumnae association lunch. In the afternoon of 4 September 1930, she toured Avonmouth Docks as a conference member, and in the evening was invited to a reception held by Walter Bryant, the then lord mayor of Bristol, at the Bristol Museum &amp; Art Gallery. On the following day, she visited an aircraft manufacturer at Whitchurch Airport and attended a garden party at Wills Hall. On the Monday of the conference, Dent was in the audience to see Paul Dirac present his paper on the proton and the structure of matter. She would later comment: "I heard a striking paper by Dirac, who was a student with me, who is now a very famous person, as I always knew he would be... I now go about boasting that I was in the same class!" Dent's father died in 1954 at their shared home, 529 King's Road, Stretford, with the funeral service taking place at St Matthew's Church, Stretford. She had close links to St Matthew's; from 1956 to 1962, she served as a school manager for St Matthew's Church of England Primary School at Poplar Road, Stretford. Later life and death. 
In 1962, Dent and her mother moved from Stretford to 1 Cokeham Road, Sompting, a village in the coastal Adur District of West Sussex, between Lancing and Worthing. Her mother died in 1967 and was cremated at the Downs Crematorium on 10 April 1967. Dent's sister, Florence Mary, also lived in the house until her death. After a brief period as a teacher at a prep school in Malmesbury, Wiltshire, Florence worked as a secretary for a marine insurance firm attached to Lloyd's of London at 12 Leadenhall Street, commuting into London from Harrow each day. Beryl May Dent died at Worthing Hospital in 1977 after a long period of disablement. The funeral service was held on 12 August 1977 at St Mary's Church, Sompting, followed by cremation. Her ashes were interred at Worthing Crematorium, in the Gardens of Rest, towards the Spring Glades, and her entry in the book of remembrance at the crematorium states: "Beryl May Dent 1900 – A real Christian loved by all – 1977". There is also a memorial to her at the Church of St Mary the Blessed Virgin, Sompting. The bishop's chair, situated close to the altar, bears a brass plaque with the inscription: "In loving memory of BERYL DENT 1900 – 1977". Her Christian faith is perhaps not unexpected, given her father's work for the church in Warminster and the era she grew up in, when religion pervaded social and political life. However, it is notable that she remained a committed Christian while pursuing a scientific career. Legacy. An archive of Dent's papers relating to her life and work in the 1920s in the physics department at the University of Bristol is held in the Special Collections of the University of Bristol Arts and Social Sciences Library, in Tyndall Avenue, Bristol. Included in that archive is a series of letterbooks, written in the 1930s by members of the Clifton Hill House Old Students' Association, that includes news and photographs of Dent, her family, and friends. Atomic force microscopy. In 1928, Lennard‑Jones and Dent published two papers, "Cohesion at a crystal surface" and "The change in lattice spacing at a crystal boundary", that for the first time outlined a calculation of the potential of the electric field in a vacuum produced by a thin sodium chloride crystal surface. They gave an expression for the electric potential produced by a system of point charges in vacuum (although not a real cubic sodium chloride ionic lattice). The expression for the potential in vacuum, formula_0, at the point r = {x, y, z}, "near" the cubic lattice of point ions with different signs, the charge formula_1, and the period "a" (a crystalline solid is distinguished by the fact that the atoms making up the crystal are arranged in a periodic fashion), can be represented as a sum over the reciprocal lattice. In this expression, formula_2 is the lateral vector that fixes the observation point coordinates in the sample plane, formula_3 is the reciprocal lattice vector, and "s" is the number of planes to be calculated inside the crystal; "s" set to zero would calculate the surface plane. The expression sums the set of potential static charges for the surface and lower planes of the crystal lattice. Lennard‑Jones and Dent showed that this expression forms a rapidly convergent Fourier series. Harold Eugene Buckley, a crystallographic researcher at the University of Manchester until his death in 1959, had suggested that their results should be treated with caution. 
For example, the contraction a crystal plane would suffer under the conditions prescribed would not be the same as that of a similar plane with a solid mass of crystal behind it. Another difficulty arises because calculation of crystal surface field force fields are so great that simplifying assumptions have to be made to render them capable of a solution. Michael Jaycock and Geoffrey Parfitt, then respectively senior lecturer in surface and colloid chemistry at Loughborough University of Technology and professor of chemical engineering at Carnegie Mellon University, concurred with Buckley, noting that "an ideal crystal, in which the ionic positions at the surface were identical to those achieved in the bulk crystal... is obviously extremely improbable." However, they acknowledged that the Lennard‑Jones and Dent model was singularly elegant, and like most researchers working before the advent of modern computers, they were limited in what could be attempted computationally. Nonetheless, Lennard‑Jones and Dent demonstrated that the force exerted on a single ion, by a surface with evenly distributed positive and negative ions, decreases very rapidly with increasing distance.14 Later work by Jason Cleveland, Manfred Radmacher, and Paul Hansma, has shown that this result has direct application to atomic force microscopy by predicting that non-contact imaging is possible only at small tip-sample separations.543 Reduced major axis regression. The theoretical underpinnings of standard least squares regression analysis are based on the assumption that the independent variable (often labelled as "x") is measured without error as a design variable. The dependent variable (labeled "y") is modeled as having uncertainty or error. Both independent and dependent measurements may have multiple sources of error. Therefore, the underlying least squares regression assumptions can be violated. Reduced major axis (RMA) regression is specifically formulated to handle errors in both the "x" and "y" variables.1 If the estimate of the ratio of the error variance of "y" to the error variance of "x" is denoted by "𝜆", then the reduced major axis method assumes that "𝜆" can be approximated by the ratio of the total variances of "x" and "y". RMA minimizes both vertical and horizontal distances of the data points from the predicted line (by summing areas) rather than the least squares sum of squared vertical ("y"-axis) distances.2 In Dent's 1935 paper on linear regression, entitled "On observations of points connected by a linear relation", she admitted that when the variances in the "x" and "y" variables are unknown, "we cannot hope to find the true positions of the observed points, but only their most probable positions." However, by treating the probability of the errors in terms of Gaussian error functions, she contended that this expression may be regarded as "a function of the unknown quantities", or the likelihood function of the data distribution.106 Furthermore, she argued that maximising this function to obtain the maximum likelihood estimation,5 subject to the condition that the points are collinear, will give the parameters for the line of best fit. 
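The estimator Dent arrived at is easy to state computationally. The following sketch is illustrative only: the data are invented, NumPy is assumed, and the function name is arbitrary. The reduced major axis slope is the geometric mean of the two ordinary least-squares slopes, which reduces to the sign of the covariance times the ratio of the sample standard deviations, and the fitted line passes through the centroid of the data.
import numpy as np

def rma_fit(x, y):
    # Reduced major axis (geometric mean) line fit: the slope is
    # sign(cov(x, y)) * std(y) / std(x), i.e. the geometric mean of the
    # OLS slope of y on x and the reciprocal of the OLS slope of x on y,
    # and the line passes through the centroid (mean(x), mean(y)).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope = np.sign(np.cov(x, y)[0, 1]) * np.std(y, ddof=1) / np.std(x, ddof=1)
    return slope, y.mean() - slope * x.mean()

# Invented data with error in both coordinates (true line y = 2t + 1).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
x = t + rng.normal(scale=0.5, size=t.size)
y = 2.0 * t + 1.0 + rng.normal(scale=0.5, size=t.size)
print(rma_fit(x, y))  # slope near 2, intercept near 1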
She then deduced formulae for the errors in estimating the centroid and the line inclination when the data consists of a single (unrepeated) observation.106 Maurice Kendall and Alan Stuart showed that the maximum likelihood estimator of a likelihood function, depending on a parameter formula_4, satisfies the following quadratic equation: where formula_5 and formula_6 are the formula_7 and formula_8 vectors in a covariance matrix giving the covariance between each pair of "x" and "y" variables. The superscript formula_9 indicates the transpose of the matrix. Using the quadratic formula to solve for the positive root (or zero) of (2):183 Inspection of (3) shows that as "𝜆" tends to plus infinity, the positive root tends to:183 Correspondingly, as "𝜆" tends to zero, the root tends to:183 Dent had solved the maximum likelihood estimator in the case where the covariance matrix is not known.1049 Dent's maximum likelihood estimator is the geometric mean of formula_10 and formula_11, equivalent to:184 Dennis Lindley repeated Dent's analysis and stated that Dent's geometric mean estimator is not a consistent estimator for the likelihood function, and that the gradient of the estimate will have a bias, and this remains true even if the number of observations tends to infinity.15 Subsequently, Theodore Anderson pointed out that the likelihood function has no maximum in this case, and therefore, there is no maximum likelihood estimator.3 Kenneth Alva Norton, a former consulting engineer with the then National Bureau of Standards, responded to Lindley, stating Lindley's own methods and assumptions lead to a biased prediction. Furthermore, Albert Madansky, late H. G. B. Alexander professor of business administration at University of Chicago Booth School of Business, noted that Lindley took the wrong root for the quadratic in (2) for the case where formula_12 is negative. Richard J. Smith has stated that Dent was the first to develop a RMA regression method for line fitting that built on the work of Robert Adcock in "A Problem in Least Squares" (1878) and Charles Kummell in "Reduction of observation equations which contain more than one observed quantity" (1879). It is now believed that she was the first to propose what is often called the geometric mean functional relationship estimator of slope, and that her essential arguments can be generalised to any number of variables.106 Moreover, although her solution has its theoretical limitations, it is of practical importance, as it likely represents the best a priori estimate if nothing is known about the true error distribution in the model. It is generally much less reasonable to assume that all the error, or residual scatter, is attributable to one of the variables.3 Electrical design using digital computers. In the 1950s, British electrical engineers would rarely use a digital computer, and if they did, it would be to solve some complicated equation outside the scope of analogue computers. To a certain extent, engineers were deterred by the difficulty and the time taken to program a particular problem. Furthermore, the varied and often unique problems that arise in electrical design practice, together with the degree of uncertainty of the numerical data of many problems, accentuated this tendency. On 10 April 1956, Dent and Brian Birtwistle presented their paper, "The digital computer as an aid to the electrical design engineer", to the Convention on Digital Computer Techniques at the Institution of Electrical Engineers. 
The paper was intended to show, by describing three relatively simple applications, that the digital computer could be a useful aid to the electrical design engineer. The three example problems were: The Ferranti Mark 1 computer at the University of Manchester was used for the calculations in the three problems. Dent was allowed to use the university's library of subroutines, from which the following were taken and incorporated into the programs: &lt;templatestyles src="Col-begin/styles.css"/&gt; The first problem of calculating the impulse voltage distribution on transformer windings took about five hours of machine time. Conversely, a hand calculation, using a method described by Thomas John Lewis in "The Transient Behaviour of Ladder Networks of the Type Representing Transformer and Machine Windings" (1954), took around three months.486 The use of a computer in the second problem allowed for a more accurate solution as it was possible to include nonlinear magnetic characteristics in the calculation. In the last problem, the torque and speed curves for the synchronous motors were calculated in around fifteen minutes.486 Their paper was one of the first to recognise that high-speed digital computers could provide considerable assistance to the electrical design engineer by carrying out automatically the optimum design of products. Significant research had been devoted to determining a transformer's internal transient voltage distribution. Early attempts were hampered by computational limitations encountered when solving large numbers of coupled differential equations with analogue computers. It was not until Dent, with Hartill and Miles, in "A method of analysis of transformer impulse voltage distribution using a digital computer" (1958), recognised the limitations of the analogue models and developed a digital computer model, and associated program, where non-uniformity in the transformer windings could be introduced and any input voltage applied. Publications. Publications detail. &lt;templatestyles src="Refbegin/styles.css" /&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt; Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\varphi_{0}\\left(r\\right)" }, { "math_id": 1, "text": "e_{k}" }, { "math_id": 2, "text": "r_{\\parallel}=\\left\\{x,y\\right\\}" }, { "math_id": 3, "text": "k_{l,m}" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "y" }, { "math_id": 7, "text": "\\mathbf{X}" }, { "math_id": 8, "text": "\\mathbf{Y}" }, { "math_id": 9, "text": "T" }, { "math_id": 10, "text": "\\theta_{x}" }, { "math_id": 11, "text": "\\theta_{y}" }, { "math_id": 12, "text": "x^{T}y" } ]
https://en.wikipedia.org/wiki?curid=64659061
64659151
Independence complex
The independence complex of a graph is a mathematical object describing the independent sets of the graph. Formally, the independence complex of an undirected graph "G", denoted by I("G"), is an abstract simplicial complex (that is, a family of finite sets closed under the operation of taking subsets), formed by the sets of vertices in the independent sets of "G". Any subset of an independent set is itself an independent set, so I("G") is indeed closed under taking subsets. Every independent set in a graph is a clique in its complement graph, and vice versa. Therefore, the independence complex of a graph equals the clique complex of its complement graph, and vice versa. Homology groups. Several authors studied the relations between the properties of a graph "G" = ("V", "E"), and the homology groups of its independence complex I("G"). In particular, several properties related to the dominating sets in "G" guarantee that some reduced homology groups of I("G") are trivial. 1. The "total" "domination number" of G, denoted formula_0, is the minimum cardinality of a total dominating set of "G -" a set "S" such that every vertex of V is adjacent to a vertex of "S". If formula_1 then formula_2. 2. The "total domination number" of a subset "A" of "V" in G, denoted formula_3, is the minimum cardinality of a set "S" such that every vertex of "A" is adjacent to a vertex of "S". The "independence domination number" of G, denoted formula_4, is the maximum, over all independent sets "A" in "G", of formula_3. If formula_5, then formula_2. 3. The "domination number" of "G", denoted formula_6, is the minimum cardinality of a dominating set of G - a set "S" such that every vertex of V \ S is adjacent to a vertex of "S". Note that formula_7. If "G" is a chordal graph and formula_8 then formula_2. 4. The "induced matching number" of "G", denoted formula_9, is the largest cardinality of an induced matching in "G" - a matching that includes every edge connecting any two vertices in the subset. If there exists a subset "A" of "V" such that formula_10 then formula_2. This is a generalization of both properties 1 and 2 above. 5. The "non-dominating independence complex" of G, denoted I'("G"), is the abstract simplicial complex of the independent sets that are not dominating sets of "G". Obviously I'("G") is contained in I("G"); denote the inclusion map by formula_11. If "G" is a chordal graph then the induced map formula_12 is zero for all formula_13. This is a generalization of property 3 above. 6. The "fractional star-domination number" of G, denoted formula_14, is the minimum size of a fractional star-dominating set in "G". If formula_15 then formula_2. Related concepts. Meshulam's game is a game played on a graph "G", that can be used to calculate a lower bound on the homological connectivity of the independence complex of "G". The matching complex of a graph "G", denoted M("G"), is an abstract simplicial complex of the matchings in "G". It is the independence complex of the line graph of "G". The ("m","n")-chessboard complex is the matching complex on the complete bipartite graph "Km,n". It is the abstract simplicial complex of all sets of positions on an "m"-by-"n" chessboard, on which it is possible to put rooks without each of them threatening the other. The clique complex of G is the independence complex of the complement graph of "G". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
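The definition can be illustrated by brute force for a very small graph. The sketch below is only an illustration (the function name is arbitrary and the enumeration is exponential in the number of vertices): a set of vertices is a face of I("G") exactly when it contains no edge of "G", and the result is automatically closed under taking subsets.
from itertools import combinations

def independence_complex(vertices, edges):
    # Return every face of I(G) for a small graph G = (vertices, edges),
    # i.e. every independent set, including the empty set.
    edge_set = {frozenset(e) for e in edges}
    faces = []
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                faces.append(set(subset))
    return faces

# Example: the 4-cycle a-b-c-d-a; the maximal faces of its independence
# complex are the two "diagonal" pairs {a, c} and {b, d}.
print(independence_complex("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))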
[ { "math_id": 0, "text": "\\gamma_0(G)" }, { "math_id": 1, "text": "\\gamma_0(G)>k" }, { "math_id": 2, "text": "\\tilde{H}_{k-1}(I(G))=0" }, { "math_id": 3, "text": "\\gamma_0(G,A)" }, { "math_id": 4, "text": "i \\gamma(G)" }, { "math_id": 5, "text": "i \\gamma(G) > k" }, { "math_id": 6, "text": "\\gamma(G)" }, { "math_id": 7, "text": " \\gamma_0(G)\\geq \\gamma(G) " }, { "math_id": 8, "text": "\\gamma(G)>k" }, { "math_id": 9, "text": "\\mu(G)" }, { "math_id": 10, "text": "\\gamma_0(G,A)>k+\\min[k, \\mu(G[A])]" }, { "math_id": 11, "text": "i: I'(G)\\to I(G)" }, { "math_id": 12, "text": "i_*: \\tilde{H}_k(I'(G))\\to \\tilde{H}_k(I(G))" }, { "math_id": 13, "text": "k\\geq -1" }, { "math_id": 14, "text": "\\gamma^*_s(G)" }, { "math_id": 15, "text": "\\gamma^*_s(G)>k" } ]
https://en.wikipedia.org/wiki?curid=64659151
64660183
High entropy oxide
Complex oxide molecules that contain five or more metal ions High-entropy oxides (HEOs) are complex oxides that contain five or more principal metal cations and have a single-phase crystal structure. The first HEO, (MgNiCuCoZn)0.2O in a rock salt structure, was reported in 2015 by Rost "et al". HEOs have been successfully synthesized in many structures, including fluorites, perovskites, and spinels. HEOs are currently being investigated for applications as functional materials. History. In the realm of high-entropy materials, HEOs are preceded by high-entropy alloys (HEAs), which were first reported by Yeh "et al." in 2004. HEAs are alloys of five or more principal metallic elements. Some HEAs have been shown to possess desirable mechanical properties, such as retaining strength/hardness at high temperatures. HEA research substantially accelerated in the 2010s. The first HEO, (MgNiCuCoZn)0.2O in a rock salt structure, was reported in 2015 by Rost "et al". Similar to HEAs, (MgNiCuCoZn)0.2O is a multicomponent single-phase material. The cation site in (MgNiCuCoZn)0.2O material is compositionally disordered, similar to HEAs. However, unlike HEAs, (MgNiCuCoZn)0.2O contains an ordered anion sublattice. Following the discovery of HEOs in 2015, the field rapidly expanded. Since the discovery of HEOs, the field of high-entropy materials has expanded to include high-entropy metal diborides, high-entropy carbides, high-entropy sulfides, and high-entropy alumino-silicides. Predicting HEO Formation. Principle of Entropy Stabilization. The formation of HEOs is based on the principle of entropy stabilization. Thermodynamics predicts that the structure which minimizes Gibbs free energy for a given temperature and pressure will form. The formula for Gibbs free energy is given by: formula_0 where "G" is Gibbs free energy, "H" is enthalpy, "T" is absolute temperature, and "S" is entropy. It can clearly be seen from this formula that a large entropy reduces Gibbs free energy and thus favors phase stability. It can also be seen that entropy becomes increasingly important in determining phase stability at higher temperatures. In a multi-component system, one component of entropy is the entropy of mixing (formula_1). For an ideal mixture, formula_1 takes the form: formula_2 where "R" is the ideal gas constant, "n" is the number of components, and "ci" is the atomic fraction of component "i". The value of formula_1 increases as the number of components increases. For a given number of components, formula_1 is maximized when the atomic fractions of the components approach equimolar amounts. Evidence for entropy stabilization is given by the original rock salt HEO (MgNiCuCoZn)0.2O. Single-phase (MgNiCuCoZn)0.2O may be prepared by solid-state reaction of CuO, CoO, NiO, MgO, and ZnO. Rost "et al." reported that under solid state reaction conditions that produce single-phase (MgNiCuCoZn)0.2O, the absence of any one of the five oxide precursors will result in a multi-phase sample, suggesting that configurational entropy stabilizes the material. Other Considerations. It can clearly be seen from the formula for Gibbs free energy that enthalpy reduction is another important indicator of phase stability. For an HEO to form, the enthalpy of formation must be sufficiently small to be overcome by configurational entropy. Furthermore, the discussion above assumes that the reaction kinetics allow for the thermodynamically favored phase to form. Synthesis Methods. Solid-State Reaction. 
Bulk samples of HEOs may be prepared by the solid-state reaction method. In this technique, oxide precursors are ball milled and pressed into a green body, which is sintered at a high temperature. The thermal energy provided accelerates diffusion within the green body, allowing new phases to form within the sample. Solid-state reactions are often carried out in the presence of air to allow oxygen-rich and oxygen-deficient mixtures to release and absorb oxygen from the atmosphere, respectively. Oxide precursors are not required to have the same crystal structure as the desired HEO for the solid-state reaction method to be effective. For example, CuO and ZnO may be used as precursors to synthesize (MgNiCuCoZn)0.2O. At room temperature, CuO has the tenorite structure and ZnO has the wurtzite structure. Polymeric Steric Entrapment. Polymeric steric entrapment is a wet chemistry technique for synthesizing oxides. It is based on similar principles as the sol–gel process, which has also been used to synthesize HEOs. Polymeric steric entrapment requires water-soluble compounds containing the desired metal cation (e.g., metal acetates, metal chlorides) to be placed in a solution with water and a water-soluble polymer (e.g., PVA, PEG). In solution, the cations are thoroughly mixed and held close together by the polymer chains. The water is driven off to produce a foam whose organic components are burned off with a calcining step, producing a fine and pure mixed oxide powder, which may be pressed into a green body and sintered. This method was first reported by Nguyen "et al." in 2011. In 2017, Kriven and Tseng reported the first polymeric steric entrapment HEO synthesis. Polymeric steric entrapment can be used to synthesize bulk HEO samples that are difficult to successfully synthesize the solid-state method. For example, Musico "et al." synthesized the high entropy cuprate (LaNdGdTbDy)0.4CuO4 using solid-state reaction and polymeric steric entrapment. X-ray diffraction of the sample prepared with solid-state reaction showed small inclusions of a second phase, and energy-dispersive X-ray spectroscopy showed inhomogeneous distributions of some cations. Neither impurity peaks nor evidence of inhomogeneous cation distribution was found in the sample of this material prepared with polymeric steric entrapment. Other Techniques. Other techniques that have been used to synthesize HEOs include: HEO Materials. The first HEOs synthesized had the rock-salt structure. Since then, the family of HEOs has expanded to include perovskite, spinel, fluorite, and other structures. Some of these structures, such as the perovskite structure, are notable in that they have two cation sites, each of which may independently possess compositional disorder. For example, high entropy perovskites (GdLaNdSmY)0.2MnO3 (A-site configurational entropy), Gd(CoCrFeMnNi)0.2O3 (B-site configurational entropy), and (GdLaNdSmY)0.2(CoCrFeMnNi)0.2O3 (A-site and B-site configurational entropy) have been synthesized. Properties and Applications. In contrast to HEAs, which are typically investigated for their mechanical properties, HEOs are often studied as functional materials. The original HEO, (MgNiCuCoZn)0.2O, has been investigated as a promising material for applications in energy production and storage, "e.g." as anode material in Li-ion batteries, or as large "k" dielectric material, or in catalysis. Low Thermal Conductivity. 
It has been shown that increasing the configurational entropy of a material reduces its lattice thermal conductivity. Correspondingly, HEOs typically have lower thermal conductivities than materials with the same crystal structure and only one cation per lattice site. The thermal conductivity of HEOs is usually greater than or comparable to the thermal conductivity of amorphous materials containing the same components. However, crystalline materials typically have higher elastic moduli than amorphous materials of the same components. The combination of these factors leads to HEOs occupying a unique region of the property space by having the highest elastic modulus to thermal conductivity ratios of all materials. Property Tunability Through Cation Selection. HEOs enhance functional property tunability through cation selection. Magnetic, catalytic, and thermophysical properties may be tuned by modifying the cation composition of a given HEO. Many material applications demand a highly specific set of properties. For example, thermal barrier coatings require thermal expansion coefficient matching with a metal surface, high-temperature phase stability, low thermal conductivity, and chemical inertness, among other properties. Due to their innate tunability, HEOs have been proposed as candidates for advanced material applications such as thermal barrier coatings. Terminology. The definition of high-entropy oxide is debated. In oxide literature, the term is commonly used to refer to any oxide with at least five principal cations. However, it has been suggested that this is a misnomer, as most reports neglect to calculate configurational entropy. Additionally, a survey of 10 HEOs found that only 3 were entropy-stabilized. It has been suggested that the term HEO be replaced with three terms: compositionally complex oxide, high-entropy oxide, and entropy-stabilized oxide. In this scheme, compositionally complex refers to materials with multiple elements occupying the same sublattice, high-entropy refers to materials where configurational entropy plays a role in stabilization, and entropy-stabilized refers to materials where entropy dominates the enthalpy term and is necessary for the formation of a crystalline phase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
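The configurational-entropy argument above can be made concrete with a short calculation. The sketch below is illustrative only (the sintering temperature is an arbitrary example value): it evaluates the ideal entropy of mixing for one mole of cation sites in an equimolar five-cation oxide such as (MgNiCuCoZn)0.2O, and the corresponding contribution to the Gibbs free energy.
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def mixing_entropy(fractions):
    # Ideal configurational entropy of mixing: -R * sum(c_i * ln c_i).
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

# Five cations in equimolar amounts on one sublattice: c_i = 0.2 each.
s_mix = mixing_entropy([0.2] * 5)
print(f"dS_mix = {s_mix:.2f} J/(mol*K) = {s_mix / R:.3f} R")  # ln(5) = 1.609, so about 1.609 R

# Entropy contribution -T*dS_mix to the Gibbs free energy at an
# illustrative sintering temperature of 1000 °C:
T = 1273.15  # K
print(f"-T*dS_mix = {-T * s_mix / 1000:.1f} kJ/mol")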
[ { "math_id": 0, "text": "\\Delta G=\\Delta H - T\\Delta S" }, { "math_id": 1, "text": "\\Delta S_{mix}" }, { "math_id": 2, "text": "\\Delta S_{mix} = -R\\sum_{i=1}^n c_i\\ln c_i" } ]
https://en.wikipedia.org/wiki?curid=64660183
646651
Bisection method
Algorithm for finding a zero of a function In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method. For polynomials, more elaborate methods exist for testing the existence of a root in an interval (Descartes' rule of signs, Sturm's theorem, Budan's theorem). They allow extending the bisection method into efficient algorithms for finding all real roots of a polynomial; see Real-root isolation. The method. The method is applicable for numerically solving the equation "f"("x") = 0 for the real variable "x", where "f" is a continuous function defined on an interval ["a", "b"] and where "f"("a") and "f"("b") have opposite signs. In this case "a" and "b" are said to bracket a root since, by the intermediate value theorem, the continuous function "f" must have at least one root in the interval ("a", "b"). At each step the method divides the interval in two parts/halves by computing the midpoint "c" = ("a"+"b") / 2 of the interval and the value of the function "f"("c") at that point. If "c" itself is a root then the process has succeeded and stops. Otherwise, there are now only two possibilities: either "f"("a") and "f"("c") have opposite signs and bracket a root, or "f"("c") and "f"("b") have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of "f" is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small. Explicitly, if "f"("c")=0 then "c" may be taken as the solution and the process stops. Otherwise, if "f"("a") and "f"("c") have opposite signs, then the method sets "c" as the new value for "b", and if "f"("b") and "f"("c") have opposite signs then the method sets "c" as the new "a". In both cases, the new "f"("a") and "f"("b") have opposite signs, so the method is applicable to this smaller interval. Iteration tasks. The input for the method is a continuous function "f", an interval ["a", "b"], and the function values "f"("a") and "f"("b"). The function values are of opposite sign (there is at least one zero crossing within the interval). Each iteration performs these steps: When implementing the method on a computer, there can be problems with finite precision, so there are often additional convergence tests or limits to the number of iterations. Although "f" is continuous, finite precision may preclude a function value ever being zero. For example, consider "f"("x") = cos "x"; there is no floating-point value approximating "x" = π/2 that gives exactly zero. Additionally, the difference between "a" and "b" is limited by the floating point precision; i.e., as the difference between "a" and "b" decreases, at some point the midpoint of ["a", "b"] will be numerically identical to (within floating point precision of) either "a" or "b". Algorithm. 
The method may be written in pseudocode as follows: 
input: Function "f", endpoint values "a", "b", tolerance "TOL", maximum iterations "NMAX" 
conditions: "a" &lt; "b", either "f"("a") &lt; 0 and "f"("b") &gt; 0 or "f"("a") &gt; 0 and "f"("b") &lt; 0 
output: value which differs from a root of "f"("x") = 0 by less than "TOL" 
"N" ← 1 
while "N" ≤ "NMAX" do "// limit iterations to prevent infinite loop" 
    "c" ← ("a" + "b")/2 "// new midpoint" 
    if "f"("c") = 0 or ("b" – "a")/2 &lt; "TOL" then "// solution found" 
        Output("c") 
        Stop 
    end if 
    "N" ← "N" + 1 "// increment step counter" 
    if sign("f"("c")) = sign("f"("a")) then "a" ← "c" else "b" ← "c" "// new interval" 
end while 
Output("Method failed.") "// max number of steps exceeded" 
Example: Finding the root of a polynomial. Suppose that the bisection method is used to find a root of the polynomial formula_0 First, two numbers formula_1 and formula_2 have to be found such that formula_3 and formula_4 have opposite signs. For the above function, formula_5 and formula_6 satisfy this criterion, as formula_7 and formula_8 Because the function is continuous, there must be a root within the interval [1, 2]. In the first iteration, the end points of the interval which brackets the root are formula_9 and formula_10, so the midpoint is formula_11 The function value at the midpoint is formula_12. Because formula_13 is negative, formula_5 is replaced with formula_14 for the next iteration to ensure that formula_15 and formula_16 have opposite signs. As this continues, the interval between formula_1 and formula_2 will become increasingly smaller, converging on the root of the function. See this happen in the table below. After 13 iterations, it becomes apparent that there is a convergence to about 1.521: a root for the polynomial. Analysis. The method is guaranteed to converge to a root of "f" if "f" is a continuous function on the interval ["a", "b"] and "f"("a") and "f"("b") have opposite signs. The absolute error is halved at each step so the method converges linearly. Specifically, if "c"1 = ("a" + "b")/2 is the midpoint of the initial interval, and "c""n" is the midpoint of the interval in the "n"th step, then the difference between "c""n" and a solution "c" is bounded by formula_17 This formula can be used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root to within a certain tolerance. The number "n" of iterations needed to achieve a required tolerance ε (that is, an error guaranteed to be at most ε), is bounded by formula_18 where the initial bracket size formula_19 and the required bracket size formula_20 The main motivation to use the bisection method is that over the set of continuous functions, no other method can guarantee to produce an estimate "c""n" to the solution "c" that in the "worst case" has an formula_21 absolute error with less than "n"1/2 iterations. This is also true under several common assumptions on function "f" and the behaviour of the function in the neighbourhood of the root. However, despite the bisection method being optimal with respect to worst case performance under absolute error criteria it is sub-optimal with respect to "average performance" under standard assumptions as well as "asymptotic performance". Popular alternatives to the bisection method, such as the secant method, Ridders' method or Brent's method (amongst others), typically perform better since they trade-off worst case performance to achieve higher orders of convergence to the root. 
And, a strict improvement to the bisection method can be achieved with a higher order of convergence without trading-off worst case performance with the ITP Method. Generalization to higher dimensions. The bisection method has been generalized to multi-dimensional functions. Such methods are called generalized bisection methods. Methods based on degree computation. Some of these methods are based on computing the topological degree. Characteristic bisection method. The characteristic bisection method uses only the signs of a function at different points. Let "f" be a function from Rd to Rd, for some integer "d" ≥ 2. A characteristic polyhedron (also called an admissible polygon) of "f" is a polyhedron in R"d", having 2d vertices, such that in each vertex v, the combination of signs of "f"(v) is unique. For example, for "d"=2, a characteristic polyhedron of "f" is a quadrilateral with vertices (say) A,B,C,D, such that: A proper edge of a characteristic polygon is an edge between a pair of vertices, such that the sign vector differs by only a single sign. In the above example, the proper edges of the characteristic quadrilateral are AB, AC, BD and CD. A diagonal is a pair of vertices, such that the sign vector differs by all "d" signs. In the above example, the diagonals are AD and BC. At each iteration, the algorithm picks a proper edge of the polyhedron (say, A—B), and computes the signs of "f" at its mid-point (say, M). Then it proceeds as follows: Suppose the diameter (= length of longest proper edge) of the original characteristic polyhedron is D. Then, at least formula_22 bisections of edges are required so that the diameter of the remaining polygon will be at most ε. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
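A minimal executable version of the pseudocode above is sketched below, applied to the example polynomial from the article. It is illustrative only; the tolerance and iteration cap are arbitrary choices, and production code would also guard against the finite-precision issues discussed earlier.
def bisection(f, a, b, tol=1e-6, nmax=100):
    # Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(nmax):
        c = (a + b) / 2                    # midpoint of the current bracket
        if f(c) == 0 or (b - a) / 2 < tol:
            return c                       # solution found to within tol
        if f(c) * f(a) > 0:
            a = c                          # root lies in [c, b]
        else:
            b = c                          # root lies in [a, c]
    raise RuntimeError("maximum number of iterations exceeded")

# Root of x**3 - x - 2 on [1, 2]; converges to about 1.5214.
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))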
[ { "math_id": 0, "text": " f(x) = x^3 - x - 2 \\,." }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " b " }, { "math_id": 3, "text": "f(a)" }, { "math_id": 4, "text": "f(b)" }, { "math_id": 5, "text": " a = 1 " }, { "math_id": 6, "text": " b = 2 " }, { "math_id": 7, "text": " f(1) = (1)^3 - (1) - 2 = -2 " }, { "math_id": 8, "text": " f(2) = (2)^3 - (2) - 2 = +4 \\,." }, { "math_id": 9, "text": " a_1 = 1 " }, { "math_id": 10, "text": " b_1 = 2 " }, { "math_id": 11, "text": " c_1 = \\frac{2+1}{2} = 1.5 " }, { "math_id": 12, "text": " f(c_1) = (1.5)^3 - (1.5) - 2 = -0.125 " }, { "math_id": 13, "text": " f(c_1) " }, { "math_id": 14, "text": " a = 1.5 " }, { "math_id": 15, "text": " f(a) " }, { "math_id": 16, "text": " f(b) " }, { "math_id": 17, "text": "|c_n-c|\\le\\frac{|b-a|}{2^n}." }, { "math_id": 18, "text": "n \\le n_{1/2} \\equiv \\left\\lceil\\log_2\\left(\\frac{\\epsilon_0}{\\epsilon}\\right)\\right\\rceil, " }, { "math_id": 19, "text": "\\epsilon_0 = |b-a|" }, { "math_id": 20, "text": "\\epsilon \\le \\epsilon_0." }, { "math_id": 21, "text": "\\epsilon" }, { "math_id": 22, "text": "\\log_2(D/\\varepsilon)" } ]
https://en.wikipedia.org/wiki?curid=646651
6466838
Immanant
Mathematical function generalizing the determinant and permanent In mathematics, the immanant of a matrix was defined by Dudley E. Littlewood and Archibald Read Richardson as a generalisation of the concepts of determinant and permanent. Let formula_0 be a partition of an integer formula_1 and let formula_2 be the corresponding irreducible representation-theoretic character of the symmetric group formula_3. The "immanant" of an formula_4 matrix formula_5 associated with the character formula_2 is defined as the expression formula_6 Examples. The determinant is a special case of the immanant, where formula_2 is the alternating character formula_7, of "S""n", defined by the parity of a permutation. The permanent is the case where formula_2 is the trivial character, which is identically equal to 1. For example, for formula_8 matrices, there are three irreducible representations of formula_9, as shown in the character table: As stated above, formula_10 produces the permanent and formula_11 produces the determinant, but formula_12 produces the operation that maps as follows: formula_13 Properties. The immanant shares several properties with determinant and permanent. In particular, the immanant is multilinear in the rows and columns of the matrix; and the immanant is invariant under "simultaneous" permutations of the rows or columns by the same element of the symmetric group. Littlewood and Richardson studied the relation of the immanant to Schur functions in the representation theory of the symmetric group. The necessary and sufficient conditions for the immanant of a Gram matrix to be formula_14 are given by Gamas's Theorem.
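The definition can be evaluated directly for small matrices by summing over all "n"! permutations. The sketch below is illustrative only (the helper names are arbitrary, and the approach is impractical beyond small "n"); for "n" = 3 the three characters of "S"3 reproduce the permanent, the determinant, and the third map quoted above.
from itertools import permutations
from math import prod

def sign(sigma):
    # Parity of a permutation given as a tuple of images of 0..n-1:
    # each cycle of length L contributes a factor (-1)**(L - 1).
    result, seen = 1, set()
    for start in range(len(sigma)):
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        if length:
            result *= (-1) ** (length - 1)
    return result

def immanant(A, chi):
    # Immanant of the square matrix A for the class function chi on S_n.
    n = len(A)
    return sum(chi(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def chi_standard(sigma):
    # Character of the 2-dimensional irreducible representation of S_3,
    # determined here by the number of fixed points of the permutation.
    fixed = sum(1 for i, j in enumerate(sigma) if i == j)
    return {3: 2, 1: 0, 0: -1}[fixed]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(A, lambda s: 1))   # trivial character: the permanent
print(immanant(A, sign))          # alternating character: the determinant
print(immanant(A, chi_standard))  # 2*a11*a22*a33 - a12*a23*a31 - a13*a21*a32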
[ { "math_id": 0, "text": "\\lambda=(\\lambda_1,\\lambda_2,\\ldots)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\chi_\\lambda" }, { "math_id": 3, "text": "S_n" }, { "math_id": 4, "text": "n\\times n" }, { "math_id": 5, "text": "A=(a_{ij})" }, { "math_id": 6, "text": "\\operatorname{Imm}_\\lambda(A)=\\sum_{\\sigma\\in S_n} \\chi_\\lambda(\\sigma) a_{1\\sigma(1)} a_{2\\sigma(2)} \\cdots a_{n\\sigma(n)} = \\sum_{\\sigma\\in S_n} \\chi_\\lambda(\\sigma) \\prod_{i=1}^{n} a_{i\\sigma(i)}." }, { "math_id": 7, "text": "\\sgn" }, { "math_id": 8, "text": "3 \\times 3" }, { "math_id": 9, "text": "S_3" }, { "math_id": 10, "text": "\\chi_1" }, { "math_id": 11, "text": "\\chi_2" }, { "math_id": 12, "text": "\\chi_3" }, { "math_id": 13, "text": "\\begin{pmatrix} a_{11} & a_{12} & a_{13} \\\\ a_{21} & a_{22} & a_{23} \\\\ a_{31} & a_{32} & a_{33} \\end{pmatrix} \\rightsquigarrow 2 a_{11} a_{22} a_{33} - a_{12} a_{23} a_{31} - a_{13} a_{21} a_{32}" }, { "math_id": 14, "text": "0" } ]
https://en.wikipedia.org/wiki?curid=6466838
64668822
Rainbow-independent set
Independent set in a graph In graph theory, a rainbow-independent set (ISR) is an independent set in a graph, in which each vertex has a different color. Formally, let "G" = ("V", "E") be a graph, and suppose vertex set V is partitioned into m subsets "V"1, …, "Vm", called "colors". A set U of vertices is called a rainbow-independent set if it satisfies both the following conditions: Other terms used in the literature are independent set of representatives, independent transversal, and independent system of representatives. As an example application, consider a faculty with m departments, where some faculty members dislike each other. The dean wants to construct a committee with m members, one member per department, but without any pair of members who dislike each other. This problem can be presented as finding an ISR in a graph in which the nodes are the faculty members, the edges describe the "dislike" relations, and the subsets "V"1, …, "Vm" are the departments. Variants. It is assumed for convenience that the sets "V"1, …, "Vm" are pairwise-disjoint. In general the sets may intersect, but this case can be easily reduced to the case of disjoint sets: for every vertex x, form a copy of x for each i such that Vi contains x. In the resulting graph, connect all copies of x to each other. In the new graph, the Vi are disjoint, and each ISR corresponds to an ISR in the original graph. ISR generalizes the concept of a "system of distinct representatives" (SDR, also known as transversal). Every transversal is an ISR where in the underlying graph, all and only copies of the same vertex from different sets are connected. Existence of rainbow-independent sets. There are various sufficient conditions for the existence of an ISR. Condition based on vertex degree. Intuitively, when the departments Vi are larger, and there is less conflict between faculty members, an ISR should be more likely to exist. The "less conflict" condition is represented by the vertex degree of the graph. This is formalized by the following theorem: If the degree of every vertex in G is at most d, and the size of each color-set is at least 2"d", then G has an ISR. The 2"d" is best possible: there are graph with vertex degree k and colors of size 2"d" – 1 without an ISR. But there is a more precise version in which the bound depends both on d and on m. Condition based on dominating sets. Below, given a subset S of colors (a subset of {"V"1, ..., "V""m"}), we denote by U"S" the union of all subsets in S (all vertices whose color is one of the colors in S), and by GS the subgraph of G induced by U"S". The following theorem describes the structure of graphs that have no ISR but are "edge-minimal", in the sense that whenever any edge is removed from them, the remaining graph has an ISR. If G has no ISR, but for every edge e in E, G-e has an ISR, then for every edge "e" = ("x", "y") in E, there exists a subset S of the colors {"V"1, …, "Vm"}, and a set Z of edges of GS, such that: Hall-type condition. Below, given a subset S of colors (a subset of {"V"1, …, "Vm"}), an independent set IS of GS is called special for S if for every independent subset J of vertices of GS of size at most , there exists some v in IS such that "J" ∪ {"v"} is also independent. Figuratively, IS is a team of "neutral members" for the set S of departments, that can augment any sufficiently small set of non-conflicting members, to create a larger such set. 
The following theorem is analogous to Hall's marriage theorem:If, for every subset S of colors, the graph GS contains an independent set IS that is special for S, then G has an ISR.&lt;br&gt;&lt;br&gt;"Proof idea". The theorem is proved using Sperner's lemma. The standard simplex with m endpoints is assigned a triangulation with some special properties. Each endpoint i of the simplex is associated with the color-set Vi, each face {"i"1, …, "ik"} of the simplex is associated with a set "S" = {"V""i"1, …, "Vik"} of colors. Each point x of the triangulation is labeled with a vertex "g"("x") of G such that: (a) For each point x on a face S, "g"("x") is an element of IS – the special independent set of S. (b) If points x and y are adjacent in the 1-skeleton of the triangulation, then "g"("x") and "g"("y") are not adjacent in G. By Sperner's lemma, there exists a sub-simplex in which, for each point x, "g"("x") belongs to a different color-set; the set of these "g"("x") is an ISR. The above theorem implies Hall's marriage condition. To see this, it is useful to state the theorem for the special case in which G is the line graph of some other graph H; this means that every vertex of G is an edge of H, and every independent set of G is a matching in H. The vertex-coloring of G corresponds to an edge-coloring of H, and a rainbow-independent-set in G corresponds to a rainbow-matching in H. A matching IS in HS is special for S, if for every matching J in HS of size at most , there is an edge e in IS such that "J" ∪ {"e"} is still a matching in HS. Let H be a graph with an edge-coloring. If, for every subset S of colors, the graph HS contains a matching MS that is special for S, then H has a rainbow-matching. Let "H" = ("X" + "Y", "E") be a bipartite graph satisfying Hall's condition. For each vertex i of X, assign a unique color Vi to all edges of H adjacent to i. For every subset S of colors, Hall's condition implies that S has at least neighbors in Y, and therefore there are at least edges of H adjacent to distinct vertices of Y. Let IS be a set of such edges. For any matching J of size at most in H, some element e of IS has a different endpoint in Y than all elements of J, and thus "J" ∪ {"e"} is also a matching, so IS is special for S. The above theorem implies that H has a rainbow matching MR. By definition of the colors, MR is a perfect matching in H. Another corollary of the above theorem is the following condition, which involves both vertex degree and cycle length:If the degree of every vertex in G is at most 2, and the length of each cycle of G is divisible by 3, and the size of each color-set is at least 3, then G has an ISR.&lt;br&gt;&lt;br&gt;"Proof." For every subset S of colors, the graph GS contains at least vertices, and it is a union of cycles of length divisible by 3 and paths. Let IS be an independent set in GS containing every third vertex in each cycle and each path. So contains at least &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄3 = |"S"| vertices. Let J be an independent set in GS of size at most . Since the distance between each two vertices of IS is at least 3, every vertex of J is adjacent to at most one vertex of IS. Therefore, there is at least one vertex of IS which is not adjacent to any vertex of J. Therefore IS is special for S. By the previous theorem, G has an ISR. Condition based on homological connectivity. One family of conditions is based on the homological connectivity of the independence complex of subgraphs. 
To state the conditions, the following notation is used: The following condition is implicit in and proved explicitly in. If, for all subsets J of ["m"]: formula_0 then the partition "V"1, …, "Vm" admits an ISR.As an example, suppose G is a bipartite graph, and its parts are exactly "V"1 and "V"2. In this case ["m"] = {1,2} so there are four options for J: Other conditions. Every properly coloured triangle-free graph of chromatic number x contains a rainbow-independent set of size at least . Several authors have studied conditions for existence of large rainbow-independent sets in various classes of graphs. Computation. The "ISR decision problem" is the problem of deciding whether a given graph "G" = ("V", "E") and a given partition of V into m colors admits a rainbow-independent set. This problem is NP-complete. The proof is by reduction from the 3-dimensional matching problem (3DM). The input to 3DM is a tripartite hypergraph ("X" + "Y" + "Z", "F"), where X, Y, Z are vertex-sets of size m, and F is a set of triplets, each of which contains a single vertex of each of X, Y, Z. An input to 3DM can be converted into an input to ISR as follows: In the resulting graph "G" = ("V", "E"), an ISR corresponds to a set of triplets ("x","y","z") such that: Therefore, the resulting graph admits an ISR if and only if the original hypergraph admits a 3DM. An alternative proof is by reduction from SAT. Related concepts. If G is the line graph of some other graph H, then the independent sets in G are the matchings in H. Hence, a rainbow-independent set in G is a "rainbow matching" in H. See also matching in hypergraphs. Another related concept is a "rainbow cycle", which is a cycle in which each vertex has a different color. When an ISR exists, a natural question is whether there exist other ISRs, such that the entire set of vertices is partitioned into disjoint ISRs (assuming the number of vertices in each color is the same). Such a partition is called "strong coloring". Using the faculty metaphor: A "rainbow clique" or a "colorful clique" is a clique in which every vertex has a different color. Every clique in a graph corresponds to an independent set in its complement graph. Therefore, every rainbow clique in a graph corresponds to a rainbow-independent set in its complement graph. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
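Because the decision problem is NP-complete, small instances are usually handled by brute force. The sketch below is illustrative only (the vertex names and the "dislike" edges are invented): it tries one vertex from each colour class and checks independence, so its running time is exponential in the number of colours.
from itertools import product, combinations

def find_isr(colors, edges):
    # Return a rainbow-independent set with one vertex per colour class,
    # or None if the given colouring admits no ISR.
    edge_set = {frozenset(e) for e in edges}
    for choice in product(*colors):  # one candidate vertex per colour
        if len(set(choice)) == len(choice) and all(
                frozenset(pair) not in edge_set
                for pair in combinations(choice, 2)):
            return set(choice)
    return None

# Faculty example: three departments, edges are "dislike" relations.
departments = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
dislikes = [("a1", "b1"), ("a1", "c1"), ("b2", "c2")]
print(find_isr(departments, dislikes))  # for example {"a2", "b1", "c1"}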
[ { "math_id": 0, "text": "\\eta_H(\\text{Ind}(G[V_J])) \\geq |J|" }, { "math_id": 1, "text": "\\tilde{H_0}(\\text{Ind}(G))" } ]
https://en.wikipedia.org/wiki?curid=64668822
64669
De Morgan's laws
Pair of logical equivalences In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation. The rules can be expressed in English as: the negation of "A and B" is the same as "not A or not B", and the negation of "A or B" is the same as "not A and not B", where "A or B" is an "inclusive or" meaning "at least" one of A or B rather than an "exclusive or" that means "exactly" one of A or B. In set theory and Boolean algebra, these are written formally as formula_0 where In formal language, the rules are written as formula_6 and formula_7 where Another form of De Morgan's laws is the following: formula_12 formula_13 Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality. Formal notation. The "negation of conjunction" rule may be written in sequent notation: formula_14 The "negation of disjunction" rule may be written as: formula_15 In rule form: "negation of conjunction" formula_16 and "negation of disjunction" formula_17 and expressed as truth-functional tautologies or theorems of propositional logic: formula_18 where formula_19 and formula_20 are propositions expressed in some formal system. The generalized De Morgan's laws provide an equivalence for negating a conjunction or disjunction involving multiple terms. For a set of propositions formula_21, the generalized De Morgan's laws are as follows: formula_22 These laws generalize De Morgan's original laws for negating conjunctions and disjunctions. Substitution form. De Morgan's laws are normally shown in the compact form above, with the negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as: formula_23 This emphasizes the need to invert both the inputs and the output, as well as change the operator when doing a substitution. Set theory and Boolean algebra. In set theory and Boolean algebra, it is often stated as "union and intersection interchange under complementation", which can be formally expressed as: formula_0 where: Unions and intersections of any number of sets. The generalized form is formula_24 where "I" is some, possibly countably or uncountably infinite, indexing set. In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign". Engineering. In electrical and computer engineering, De Morgan's laws are commonly written as: formula_25 and formula_26 where: Text searching. De Morgan's laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of documents containing the words "cats" and "dogs". De Morgan's laws hold that these two searches will return the same set of documents: Search A: NOT (cats OR dogs) Search B: (NOT cats) AND (NOT dogs) The corpus of documents containing "cats" or "dogs" can be represented by four documents: Document 1: Contains only the word "cats". Document 2: Contains only "dogs". Document 3: Contains both "cats" and "dogs". Document 4: Contains neither "cats" nor "dogs". To evaluate Search A, clearly the search "(cats OR dogs)" will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4. 
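The document example can also be checked mechanically. In the sketch below (illustrative only), Python sets stand in for the search index, complementation relative to the whole corpus plays the role of NOT, and the assertions verify both pairs of searches discussed in this section.
# The four-document corpus from the example above.
docs = {1: {"cats"}, 2: {"dogs"}, 3: {"cats", "dogs"}, 4: set()}
everything = set(docs)

def search(word):
    # IDs of the documents containing the given word.
    return {d for d, words in docs.items() if word in words}

cats, dogs = search("cats"), search("dogs")

search_a = everything - (cats | dogs)                  # NOT (cats OR dogs)
search_b = (everything - cats) & (everything - dogs)   # (NOT cats) AND (NOT dogs)
assert search_a == search_b == {4}

search_c = everything - (cats & dogs)                  # NOT (cats AND dogs)
search_d = (everything - cats) | (everything - dogs)   # (NOT cats) OR (NOT dogs)
assert search_c == search_d == {1, 2, 4}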
Evaluating Search B, the search "(NOT cats)" will hit on documents that do not contain "cats", which is Documents 2 and 4. Similarly the search "(NOT dogs)" will hit on Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4. A similar evaluation can be applied to show that the following two searches will both return Documents 1, 2, and 4: Search C: NOT (cats AND dogs), Search D: (NOT cats) OR (NOT dogs). History. The laws are named after Augustus De Morgan (1806–1871), who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the find. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians. For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out. Jean Buridan, in his , also describes rules of conversion that follow the lines of De Morgan's laws. Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial. Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments. Informal proof. De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula. Negation of a disjunction. In the case of its application to a disjunction, consider the following claim: "it is false that either of A or B is true", which is written as: formula_29 In that it has been established that "neither" A nor B is true, then it must follow that both A is not true and B is not true, which may be written directly as: formula_30 If either A or B "were" true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that "since two things are both false, it is also false that either of them is true". Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that "not A" and "not B" are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim. Negation of a conjunction. The application of De Morgan's theorem to conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as: formula_31 In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or equivalently, one or more of "not A" and "not B" must be true). This may be written directly as, formula_32 Presented in English, this follows the logic that "since it is false that two things are both true, at least one of them must be false". Working in the opposite direction again, the second expression asserts that at least one of "not A" and "not B" must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. 
Negating said conjunction thus results in a true expression, and this expression is identical to the first claim. Formal proof. Here we use formula_3 to denote the complement of A, as above in . The proof that formula_33 is completed in 2 steps by proving both formula_34 and formula_35. Part 1. Let formula_36. Then, formula_37. Because formula_38, it must be the case that formula_39 or formula_40. If formula_39, then formula_41, so formula_42. Similarly, if formula_40, then formula_43, so formula_44. Thus, formula_45; that is, formula_34. Part 2. To prove the reverse direction, let formula_42, and for contradiction assume formula_46. Under that assumption, it must be the case that formula_47, so it follows that formula_48 and formula_49, and thus formula_50 and formula_51. However, that means formula_52, in contradiction to the hypothesis that formula_42, therefore, the assumption formula_46 must not be the case, meaning that formula_53. Hence, formula_54, that is, formula_35. Conclusion. If formula_35 "and" formula_55, then formula_33; this concludes the proof of De Morgan's law. The other De Morgan's law, formula_56, is proven similarly. Generalising De Morgan duality. In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory. Let one define the dual of any propositional operator P("p", "q", ...) depending on elementary propositions "p", "q", ... to be the operator formula_57 defined by formula_58 Extension to predicate and modal logic. This duality can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals: formula_59 formula_60 To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its domain "D", such as "D" = {"a", "b", "c"}. Then formula_61 and formula_62 But, using De Morgan's laws, formula_63 and formula_64 verifying the quantifier dualities in the model. Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond ("possibly") operators: formula_65 formula_66 In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up models using Kripke semantics. In intuitionistic logic. Three out of the four implications of de Morgan's laws hold in intuitionistic logic. Specifically, we have formula_67 and formula_68 The converse of the last implication does not hold in pure intuitionistic logic. 
That is, the failure of the joint proposition formula_69 cannot necessarily be resolved to the failure of either of the two conjuncts. For example, from knowing it not to be the case that both Alice and Bob showed up to their date, it does not follow who did not show up. The latter principle is equivalent to the principle of the weak excluded middle formula_70, formula_71 This weak form can be used as a foundation for an intermediate logic. For a refined version of the failing law concerning existential statements, see the lesser limited principle of omniscience formula_72, which however is different from formula_73. The validity of the other three De Morgan's laws remains true if negation formula_74 is replaced by implication formula_75 for some arbitrary constant predicate C, meaning that the above laws are still true in minimal logic. Similarly to the above, the quantifier laws: formula_76 and formula_77 are tautologies even in minimal logic with negation replaced with implying a fixed formula_20, while the converse of the last law does not have to be true in general. Further, one still has formula_78 formula_79 formula_80 formula_81 but their inversion implies excluded middle, formula_82. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{align}\n \\overline{A \\cup B} &= \\overline{A} \\cap \\overline{B}, \\\\\n \\overline{A \\cap B} &= \\overline{A} \\cup \\overline{B},\n\\end{align}" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "\\overline{A}" }, { "math_id": 4, "text": "\\cap" }, { "math_id": 5, "text": "\\cup" }, { "math_id": 6, "text": "\\neg(P\\lor Q)\\iff(\\neg P)\\land(\\neg Q)," }, { "math_id": 7, "text": "\\neg(P\\land Q)\\iff(\\neg P)\\lor(\\neg Q)" }, { "math_id": 8, "text": "\\neg" }, { "math_id": 9, "text": "\\land" }, { "math_id": 10, "text": "\\lor" }, { "math_id": 11, "text": "\\iff" }, { "math_id": 12, "text": "A -(B \\cup C) = (A - B) \\cap (A - C)," }, { "math_id": 13, "text": "A -(B \\cap C) = (A - B) \\cup (A - C)." }, { "math_id": 14, "text": "\\begin{align}\n\\neg(P \\land Q) &\\vdash (\\neg P \\lor \\neg Q), \\text{and} \\\\\n(\\neg P \\lor \\neg Q) &\\vdash \\neg(P \\land Q).\n\\end{align}" }, { "math_id": 15, "text": "\\begin{align}\n\\neg(P \\lor Q) &\\vdash (\\neg P \\land \\neg Q), \\text{and} \\\\\n(\\neg P \\land \\neg Q) &\\vdash \\neg(P \\lor Q).\n\\end{align}" }, { "math_id": 16, "text": "\n\\frac{\\neg (P \\land Q)}{\\therefore \\neg P \\lor \\neg Q}\n\\qquad\n\\frac{\\neg P \\lor \\neg Q}{\\therefore \\neg (P \\land Q)}\n" }, { "math_id": 17, "text": "\n\\frac{\\neg (P \\lor Q)}{\\therefore \\neg P \\land \\neg Q}\n\\qquad\n\\frac{\\neg P \\land \\neg Q}{\\therefore \\neg (P \\lor Q)}\n" }, { "math_id": 18, "text": "\\begin{align}\n \\neg (P \\land Q) &\\leftrightarrow (\\neg P \\lor \\neg Q), \\\\\n \\neg (P \\lor Q) &\\leftrightarrow (\\neg P \\land \\neg Q). \\\\\n\\end{align}" }, { "math_id": 19, "text": "P" }, { "math_id": 20, "text": "Q" }, { "math_id": 21, "text": "P_1, P_2, \\dots,P_n" }, { "math_id": 22, "text": "\\begin{align}\n\\lnot(P_1 \\land P_2 \\land \\dots \\land P_n) \\leftrightarrow \\lnot P_1 \\lor \\lnot P_2 \\lor \\ldots \\lor \\lnot P_n \\\\\n\\lnot(P_1 \\lor P_2 \\lor \\dots \\lor P_n) \\leftrightarrow \\lnot P_1 \\land \\lnot P_2 \\land \\ldots \\land \\lnot P_n\n\\end{align}" }, { "math_id": 23, "text": "\\begin{align}\n(P \\land Q) &\\Longleftrightarrow \\neg (\\neg P \\lor \\neg Q), \\\\\n(P \\lor Q) &\\Longleftrightarrow \\neg (\\neg P \\land \\neg Q).\n\\end{align}" }, { "math_id": 24, "text": "\\begin{align}\n \\overline{\\bigcap_{i \\in I} A_{i}} &\\equiv \\bigcup_{i \\in I} \\overline{A_{i}}, \\\\\n \\overline{\\bigcup_{i \\in I} A_{i}} &\\equiv \\bigcap_{i \\in I} \\overline{A_{i}},\n\\end{align}" }, { "math_id": 25, "text": "\\overline{(A \\cdot B)} \\equiv (\\overline {A} + \\overline {B})" }, { "math_id": 26, "text": "\\overline{A + B} \\equiv \\overline {A} \\cdot \\overline {B}," }, { "math_id": 27, "text": " \\cdot " }, { "math_id": 28, "text": "+" }, { "math_id": 29, "text": "\\neg(A\\lor B)." }, { "math_id": 30, "text": "(\\neg A)\\wedge(\\neg B)." }, { "math_id": 31, "text": "\\neg(A\\land B)." }, { "math_id": 32, "text": "(\\neg A)\\lor(\\neg B)." 
}, { "math_id": 33, "text": "\\overline{A\\cap B} = \\overline{A} \\cup \\overline{B}" }, { "math_id": 34, "text": "\\overline{A\\cap B} \\subseteq \\overline{A} \\cup \\overline{B}" }, { "math_id": 35, "text": "\\overline{A} \\cup \\overline{B} \\subseteq \\overline{A\\cap B}" }, { "math_id": 36, "text": "x \\in \\overline{A \\cap B}" }, { "math_id": 37, "text": "x \\not\\in A \\cap B" }, { "math_id": 38, "text": "A \\cap B = \\{\\,y\\ |\\ y\\in A \\wedge y \\in B\\,\\}" }, { "math_id": 39, "text": "x \\not\\in A" }, { "math_id": 40, "text": "x \\not\\in B" }, { "math_id": 41, "text": "x \\in \\overline{A}" }, { "math_id": 42, "text": "x \\in \\overline{A} \\cup \\overline{B}" }, { "math_id": 43, "text": "x \\in \\overline{B}" }, { "math_id": 44, "text": "x \\in \\overline{A}\\cup \\overline{B}" }, { "math_id": 45, "text": "\\forall x\\Big( x \\in \\overline{A\\cap B} \\implies x \\in \\overline{A} \\cup \\overline{B}\\Big)" }, { "math_id": 46, "text": "x \\not\\in \\overline{A\\cap B}" }, { "math_id": 47, "text": "x \\in A\\cap B" }, { "math_id": 48, "text": "x \\in A" }, { "math_id": 49, "text": "x \\in B" }, { "math_id": 50, "text": "x \\not\\in \\overline{A}" }, { "math_id": 51, "text": "x \\not\\in \\overline{B}" }, { "math_id": 52, "text": "x \\not\\in \\overline{A} \\cup \\overline{B}" }, { "math_id": 53, "text": "x \\in \\overline{A\\cap B}" }, { "math_id": 54, "text": "\\forall x\\Big( x \\in \\overline{A} \\cup \\overline{B} \\implies x \\in \\overline{A\\cap B}\\Big)" }, { "math_id": 55, "text": "\\overline{A \\cap B} \\subseteq \\overline{A} \\cup \\overline{B}" }, { "math_id": 56, "text": "\\overline{A \\cup B} = \\overline{A} \\cap \\overline{B}" }, { "math_id": 57, "text": "\\mbox{P}^d" }, { "math_id": 58, "text": "\\mbox{P}^d(p, q, ...) = \\neg P(\\neg p, \\neg q, \\dots)." }, { "math_id": 59, "text": " \\forall x \\, P(x) \\equiv \\neg [ \\exists x \\, \\neg P(x)] " }, { "math_id": 60, "text": " \\exists x \\, P(x) \\equiv \\neg [ \\forall x \\, \\neg P(x)] " }, { "math_id": 61, "text": " \\forall x \\, P(x) \\equiv P(a) \\land P(b) \\land P(c) " }, { "math_id": 62, "text": " \\exists x \\, P(x) \\equiv P(a) \\lor P(b) \\lor P(c)." }, { "math_id": 63, "text": " P(a) \\land P(b) \\land P(c) \\equiv \\neg (\\neg P(a) \\lor \\neg P(b) \\lor \\neg P(c)) " }, { "math_id": 64, "text": " P(a) \\lor P(b) \\lor P(c) \\equiv \\neg (\\neg P(a) \\land \\neg P(b) \\land \\neg P(c)), " }, { "math_id": 65, "text": " \\Box p \\equiv \\neg \\Diamond \\neg p, " }, { "math_id": 66, "text": " \\Diamond p \\equiv \\neg \\Box \\neg p." }, { "math_id": 67, "text": "\\neg(P\\lor Q)\\,\\leftrightarrow\\,\\big((\\neg P)\\land(\\neg Q)\\big)," }, { "math_id": 68, "text": "\\big((\\neg P)\\lor(\\neg Q)\\big)\\,\\to\\,\\neg(P\\land Q)." }, { "math_id": 69, "text": "P\\land Q" }, { "math_id": 70, "text": "{\\mathrm {WPEM}}" }, { "math_id": 71, "text": "(\\neg P)\\lor\\neg(\\neg P)." }, { "math_id": 72, "text": "{\\mathrm {LLPO}}" }, { "math_id": 73, "text": "{\\mathrm {WLPO}}" }, { "math_id": 74, "text": "\\neg P" }, { "math_id": 75, "text": "P\\to C" }, { "math_id": 76, "text": "\\forall x\\,\\neg P(x)\\,\\leftrightarrow\\,\\neg\\exists x\\,P(x)" }, { "math_id": 77, "text": "\\exists x\\,\\neg P(x)\\,\\to\\,\\neg\\forall x\\,P(x)." 
}, { "math_id": 78, "text": "(P\\lor Q)\\,\\to\\,\\neg\\big((\\neg P)\\land(\\neg Q)\\big)," }, { "math_id": 79, "text": "(P\\land Q)\\,\\to\\,\\neg\\big((\\neg P)\\lor(\\neg Q)\\big)," }, { "math_id": 80, "text": "\\forall x\\,P(x)\\,\\to\\,\\neg\\exists x\\,\\neg P(x)," }, { "math_id": 81, "text": "\\exists x\\,P(x)\\,\\to\\,\\neg\\forall x\\,\\neg P(x)," }, { "math_id": 82, "text": "{\\mathrm {PEM}}" } ]
https://en.wikipedia.org/wiki?curid=64669
64671582
Collective classification
In network theory, collective classification is the simultaneous prediction of the labels for multiple objects, where each label is predicted using information about the object's observed features, the observed features and labels of its neighbors, and the unobserved labels of its neighbors. Collective classification problems are defined in terms of networks of random variables, where the network structure determines the relationship between the random variables. Inference is performed on multiple random variables simultaneously, typically by propagating information between nodes in the network to perform approximate inference. Approaches that use collective classification can make use of relational information when performing inference. Examples of collective classification include predicting attributes (ex. gender, age, political affiliation) of individuals in a social network, classifying webpages in the World Wide Web, and inferring the research area of a paper in a scientific publication dataset. Motivation and background. Traditionally, a major focus of machine learning is to solve classification problems. (For example, given a collection of e-mails, we wish to determine which are spam, and which are not.) Many machine learning models for performing this task will try to categorize each item independently, and focus on predicting the class labels separately. However, the prediction accuracy for the labels whose values must be inferred can be improved with knowledge of the correct class labels for related items. For example, it is easier to predict the topic of a webpage if we know the topics of the webpages that link to it. Similarly, the chance of a particular word being a verb increases if we know that the previous word in the sentence is a noun; knowing the first few characters in a word can make it much easier to identify the remaining characters. Many researchers have proposed techniques that attempt to classify samples in a joint or collective manner, instead of treating each sample in isolation; these techniques have enabled significant gains in classification accuracy. Example. Consider the task of inferring the political affiliation of users in a social network, where some portion of these affiliations are observed, and the remainder are unobserved. Each user has local features, such as their profile information, and links exist between users who are friends in this social network. An approach that does not collectively classify users will consider each user in the network independently and use their local features to infer party affiliations. An approach which performs collective classification might assume that users who are friends tend to have similar political views, and could then jointly infer all unobserved party affiliations while making use of the rich relational structure of the social network. Definition. Consider the semi supervised learning problem of assigning labels to nodes in a network by using knowledge of a subset of the nodes' labels. Specifically, we are given a network represented by a graph formula_0 with a set of nodes formula_1 and an edge set formula_2 representing relationships among nodes. Each node formula_3 is described by its attributes: a feature vector formula_4 and its label (or class) formula_5. formula_1 can further be divided into two sets of nodes: formula_6, the set of nodes for which we know the correct label values (observed variables), and formula_7, the nodes whose labels must be inferred. 
The collective classification task is to label the nodes in formula_7 with a label from a label set formula_8. In such settings, traditional classification algorithms assume that the data is drawn independently and identically from some distribution (iid). This means that the labels inferred for nodes whose label is unobserved are independent of each other. One does not make this assumption when performing collective classification. Instead, there are three distinct types of correlations that can be utilized to determine the classification or label of formula_9: correlations between the label of formula_9 and its own observed attributes; correlations between the label of formula_9 and the observed attributes and labels of nodes in its neighborhood; and correlations between the label of formula_9 and the unobserved labels of nodes in its neighborhood. Collective classification refers to the combined classification of a set of interlinked objects using the three above types of information. Methods. There are several existing approaches to collective classification. The two major methods are iterative methods and methods based on probabilistic graphical models. Iterative methods. The general idea for iterative methods is to iteratively combine and revise individual node predictions so as to reach an equilibrium. When updating predictions for individual nodes is a fast operation, the complexity of these iterative methods will be the number of iterations needed for convergence. Though convergence and optimality are not always mathematically guaranteed, in practice, these approaches will typically converge quickly to a good solution, depending on the graph structure and problem complexity. The methods presented in this section are representative of this iterative approach. Label propagation. A natural assumption in network classification is that adjacent nodes are likely to have the same label (i.e., contagion or homophily). The predictor for node formula_10 using the label propagation method is a weighted average of its neighboring labels formula_11. Iterative Classification Algorithms (ICA). While label propagation is surprisingly effective, it may sometimes fail to capture complex relational dynamics. More sophisticated approaches can use richer predictors. Suppose we have a classifier formula_12 that has been trained to classify a node formula_13 given its features formula_14 and the features formula_15 and labels formula_11 of its neighbors formula_16. Iterative classification applies a local classifier to each node, which uses information about current predictions and ground truth information about the node's neighbors, and iterates until the local predictions converge to a global solution. Iterative classification is an “algorithmic framework,” in that it is agnostic to the choice of predictor; this makes it a very versatile tool for collective classification. Collective classification with graphical models. Another approach to collective classification is to represent the problem with a graphical model and use learning and inference techniques for the graphical modeling approach to arrive at the correct classifications. Graphical models are tools for joint, probabilistic inference, making them ideal for collective classification. They are characterized by a graphical representation of a probability distribution formula_17, in which random variables are nodes in a graph formula_0. Graphical models can be broadly categorized by whether the underlying graph is directed (e.g., Bayesian networks or collections of local classifiers) or undirected (e.g., Markov random fields (MRF)). Gibbs sampling. Gibbs sampling is a general framework for approximating a distribution. 
It is a Markov chain Monte Carlo algorithm, in that it iteratively samples from the current estimate of the distribution, constructing a Markov chain that converges to the target (stationary) distribution. The basic idea of Gibbs sampling is to sample for the best label estimate for formula_18 given all the values for the nodes in formula_16 using local classifier formula_19 for a fixed number of iterations. After that, we sample labels for each formula_5 and maintain count statistics for the number of times we sampled label formula_20 for node formula_18. After collecting a predefined number of such samples, we output the best label assignment for node formula_18 by choosing the label that was assigned the maximum number of times to formula_18 while collecting samples. Loopy belief propagation. For certain undirected graphical models, it is possible to efficiently perform exact inference via message passing, or belief propagation algorithms. These algorithms follow a simple iterative pattern: each variable passes its "beliefs" about its neighbors' marginal distributions, then uses the incoming messages about its own value to update its beliefs. Convergence to the true marginals is guaranteed for tree-structured MRFs, but is not guaranteed for MRFs with cycles. Statistical relational learning (SRL) related. Statistical relational learning is often used to address collective classification problems. A variety of SRL methods have been applied to the collective classification setting. Some of the methods include direct methods such as probabilistic relational models (PRM), coupled conditional models such as link-based classification, and indirect methods such as Markov logic networks (MLN) and Probabilistic Soft Logic (PSL). Applications. Collective classification is applied in many domains which exhibit relational structure, such as the social network, webpage, and scientific publication settings described in the introduction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
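Example sketch. The label-propagation update described under Methods can be illustrated in a few lines of C. Everything below — the small graph, the uniform (unweighted) averaging, and the names — is an assumption made only for illustration and is not taken from the references above:

#include <stdio.h>

#define N 5          /* number of nodes         */
#define ITERS 50     /* number of update sweeps */

int adj[N][N] = {    /* adjacency matrix of the example graph */
    {0,1,1,0,0},
    {1,0,1,1,0},
    {1,1,0,0,1},
    {0,1,0,0,1},
    {0,0,1,1,0}
};
int observed[N]  = {1, 0, 0, 0, 1};              /* 1 = label is known           */
double label[N]  = {1.0, 0.5, 0.5, 0.5, 0.0};    /* known labels / initial guesses */

int main(void) {
    for (int it = 0; it < ITERS; it++) {
        double next[N];
        for (int i = 0; i < N; i++) {
            if (observed[i]) { next[i] = label[i]; continue; }
            double sum = 0.0; int deg = 0;
            for (int j = 0; j < N; j++)
                if (adj[i][j]) { sum += label[j]; deg++; }
            next[i] = deg ? sum / deg : label[i]; /* unweighted average of neighbors */
        }
        for (int i = 0; i < N; i++) label[i] = next[i];
    }
    for (int i = 0; i < N; i++)
        printf("node %d: %.3f\n", i, label[i]);
    return 0;
}

Thresholding the final scores at 0.5 gives a hard label for each unobserved node; richer predictors, as in ICA, replace the plain average with a trained local classifier.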
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "v_i\\in V" }, { "math_id": 4, "text": "x_i \\in X" }, { "math_id": 5, "text": "y_i\\in Y" }, { "math_id": 6, "text": "L" }, { "math_id": 7, "text": "U" }, { "math_id": 8, "text": "L=\\{L_1,\\cdots L_q\\}" }, { "math_id": 9, "text": "v" }, { "math_id": 10, "text": "V_{i}" }, { "math_id": 11, "text": "Y_{N_i}" }, { "math_id": 12, "text": "h" }, { "math_id": 13, "text": "v_i" }, { "math_id": 14, "text": "X_i" }, { "math_id": 15, "text": "X_{N_i}" }, { "math_id": 16, "text": "N_i" }, { "math_id": 17, "text": "P" }, { "math_id": 18, "text": "y_i" }, { "math_id": 19, "text": "f" }, { "math_id": 20, "text": "l" } ]
https://en.wikipedia.org/wiki?curid=64671582
64675441
Graph homology
In algebraic topology and graph theory, graph homology describes the homology groups of a graph, where the graph is considered as a topological space. It formalizes the idea of the number of "holes" in the graph. It is a special case of simplicial homology, as a graph is a special case of a simplicial complex. Since a finite graph is a 1-complex (i.e., its 'faces' are the vertices - which are 0-dimensional, and the edges - which are 1-dimensional), the only non-trivial homology groups are the 0-th group and the 1st group. The 1st homology group. The general formula for the 1st homology group of a topological space "X" is: formula_0 The example below explains these symbols and concepts in full detail on a graph. Example. Let "X" be a directed graph with 3 vertices {x,y,z} and 4 edges {a: x→y, b: y→z, c: z→x, d: z→x}. It has several "cycles": a+b+c (going around the triangle through c), a+b+d (going around it through d), and c-d (going from z to x along c and back along d). If we cut the plane along the loop a+b+d, and then cut at c and "glue" at d, we get a cut along the loop a+b+c. This can be represented by the following relation: (a+b+d) + (c-d) = (a+b+c). To formally define this relation, we define the following commutative groups: "C"0, the free abelian group generated by the set of vertices {x,y,z}, whose elements are called 0-dimensional chains; and "C"1, the free abelian group generated by the set of edges {a,b,c,d}, whose elements are called 1-dimensional chains. Most elements of "C"1 are not cycles; for example, a+b, 2a+5b-c, etc. are not cycles. To formally define a cycle, we first define "boundaries". The boundary of an edge is denoted by the formula_1 operator and defined as its target minus its source, so formula_2 So formula_1 is a mapping from the group "C"1 to the group "C"0. Since a,b,c,d are the generators of "C"1, this formula_1 naturally extends to a group homomorphism from "C"1 to "C"0. In this homomorphism, formula_3. Similarly, formula_1 maps any cycle in "C"1 to the zero element of "C"0. In other words, the set of cycles in "C"1 generates the null space (the kernel) of formula_1. In this case, the kernel of formula_1 has two generators: one corresponds to a+b+c and the other to a+b+d (the third cycle, c-d, is a linear combination of the first two). So ker formula_1 is isomorphic to Z². In a general topological space, we would define higher-dimensional chains. In particular, "C"2 would be the free abelian group on the set of 2-dimensional objects. However, in a graph there are no such objects, so "C"2 is a trivial group. Therefore, the image of the second boundary operator, formula_4, is trivial too. Hence: formula_5 This corresponds to the intuitive fact that the graph has two "holes". The exponent is the number of holes. General case. The above example can be generalized to an arbitrary connected graph "G" = ("V", "E"). Let "T" be a spanning tree of "G". Every edge in "E" \ "T" corresponds to a cycle; these are exactly the linearly independent cycles. Therefore, the first homology group "H"1 of a graph is the free abelian group with |"E" \ "T"| generators. This number equals |"E"|-|"V"|+1; so: formula_6 In a disconnected graph, when "C" is the set of connected components, a similar computation shows: formula_7 In particular, the first group is trivial if and only if "X" is a forest. The 0-th homology group. The general formula for the 0-th homology group of a topological space "X" is: formula_8 Example. We return to the graph with 3 vertices {x,y,z} and 4 edges {a: x→y, b: y→z, c: z→x, d: z→x}. Recall that the group "C"0 is generated by the set of vertices. Since there are no (−1)-dimensional elements, the group "C"−1 is trivial, and so the entire group "C"0 is the kernel of the corresponding boundary operator: formula_9 = the free abelian group generated by {x,y,z}. 
The image of formula_1 contains an element for each pair of vertices that are boundaries of an edge, i.e., it is generated by the differences {y−x, z−y, x−z}. To calculate the quotient group, it is convenient to think of all the elements of formula_10 as "equivalent to zero". This means that x, y and z are equivalent - they are in the same equivalence class of the quotient. In other words, formula_11 is generated by a single element (any vertex can generate it). So it is isomorphic to Z. General case. The above example can be generalized to any connected graph. Starting from any vertex, it is possible to get to any other vertex by adding to it one or more expressions corresponding to edges (e.g. starting from x, one can get to z by adding y-x and z-y). Since the elements of formula_10 are all equivalent to zero, it means that all vertices of the graph are in a single equivalence class, and therefore formula_11 is isomorphic to Z. In general, the graph can have several connected components. Let C be the set of components. Then, every connected component is an equivalence class in the quotient group. Therefore: formula_12 It can be generated by any |"C"|-tuple of vertices, one from each component. Reduced homology. Often, it is convenient to assume that the 0-th homology of a connected graph is trivial (so that, if the graph contains a single point, then all its homologies are trivial). This leads to the definition of the reduced homology. For a graph, the reduced 0-th homology is: formula_13 This "reduction" affects only the 0-th homology; the reduced homologies of higher dimensions are equal to the standard homologies. Higher dimensional homologies. A graph has only vertices (0-dimensional elements) and edges (1-dimensional elements). We can generalize the graph to an abstract simplicial complex by adding elements of a higher dimension. Then, the concept of graph homology is generalized by the concept of simplicial homology. Example. In the above example graph, we can add a two-dimensional "cell" enclosed between the edges c and d; let's call it A and assume that it is oriented clockwise. Define "C"2 as the free abelian group generated by the set of two-dimensional cells, which in this case is a singleton {A}. Each element of "C"2 is called a "2-dimensional chain". Just like the boundary operator from "C"1 to "C"0, which we denote by formula_1, there is a boundary operator from "C"2 to "C"1, which we denote by formula_4. In particular, the boundary of the 2-dimensional cell A are the 1-dimensional edges c and d, where c is in the "correct" orientation and d is in a "reverse" orientation; therefore: formula_14. The sequence of chains and boundary operators can be presented as follows: formula_15 The addition of the 2-dimensional cell A implies that its boundary, c-d, no longer represents a hole (it is homotopic to a single point). Therefore, the group of "holes" now has a single generator, namely a+b+c (it is homotopic to a+b+d). The first homology group is now defined as the quotient group: formula_0 Here, formula_16 is the group of 1-dimensional cycles, which is isomorphic to Z2, and formula_17 is the group of 1-dimensional cycles that are boundaries of 2-dimensional cells, which is isomorphic to Z. Hence, their quotient "H"1 is isomorphic to Z. This corresponds to the fact that "X" now has a single hole. Previously. the image of formula_4 was the trivial group, so the quotient was equal to formula_18. 
Suppose now that we add another oriented 2-dimensional cell B between the edges c and d, such that formula_19. Now "C"2 is the free abelian group generated by {A,B}. This does not change "H"1 - it is still isomorphic to Z (X still has a single 1-dimensional hole). But now "C"2 contains the two-dimensional cycle A-B, so formula_4 has a non-trivial kernel. This cycle generates the second homology group, corresponding to the fact that there is a single two-dimensional hole: formula_20 We can proceed and add a 3-cell - a solid 3-dimensional object (called C) bounded by A and B. Define "C"3 as the free abelian group generated by {C}, and the boundary operator formula_21. We can orient C such that formula_22; note that the boundary of C is a cycle in "C"2. Now the second homology group is: formula_23 corresponding to the fact that there are no two-dimensional holes (C "fills the hole" between A and B). General case. In general, one can define chains of any dimension. If the maximum dimension of a chain is "k", then we get the following sequence of groups: formula_24 It can be proved that any boundary of a ("k"+1)-dimensional cell is a "k"-dimensional cycle. In other words, for any "k", formula_25(the group of boundaries of "k"+1 elements) is contained in formula_26 (the group of "k"-dimensional cycles). Therefore, the quotient formula_27 is well-defined, and it is defined as the "k"-th homology group: formula_28
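Returning to the purely graph-theoretic case, the ranks of the 0-th and 1st homology groups can be computed directly from the formulas |"C"| and |"E"| − |"V"| + |"C"| derived above. The following C sketch is an illustration added here (it is not part of the article's sources); it counts connected components with a union–find structure and evaluates both ranks for the article's example graph:

#include <stdio.h>

#define MAXV 100

int parent[MAXV];

int find(int v) {                 /* union-find with path halving */
    while (parent[v] != v) {
        parent[v] = parent[parent[v]];
        v = parent[v];
    }
    return v;
}

void unite(int a, int b) {
    parent[find(a)] = find(b);
}

int main(void) {
    /* The example graph: vertices x=0, y=1, z=2 and edges
       a: x->y, b: y->z, c: z->x, d: z->x (orientation is irrelevant
       for the ranks). */
    int V = 3;
    int E = 4;
    int edges[][2] = { {0,1}, {1,2}, {2,0}, {2,0} };

    for (int v = 0; v < V; v++) parent[v] = v;
    for (int e = 0; e < E; e++) unite(edges[e][0], edges[e][1]);

    int components = 0;                    /* |C| = rank of H_0 */
    for (int v = 0; v < V; v++)
        if (find(v) == v) components++;

    printf("rank H_0 = %d\n", components);          /* prints 1 */
    printf("rank H_1 = %d\n", E - V + components);  /* prints 2 */
    return 0;
}

For the article's graph this prints ranks 1 and 2, matching the computations of the 0-th homology group (isomorphic to Z) and the 1st homology group (isomorphic to Z²) above.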
[ { "math_id": 0, "text": "H_1(X) := \\ker \\partial_1 \\big/ \\operatorname{im} \\partial_2 " }, { "math_id": 1, "text": "\\partial_1" }, { "math_id": 2, "text": "\\partial_1(a)=y-x, ~ \\partial_1(b)=z-y, ~ \\partial_1(c)=\\partial_1(d)=x-z. " }, { "math_id": 3, "text": " \\partial_1(a+b+c)= \\partial_1(a)+\\partial_1(b)+\\partial_1(c) = (y-x)+(z-y)+ (x-z) = 0 " }, { "math_id": 4, "text": "\\partial_2" }, { "math_id": 5, "text": "H_1(X) = \\ker\\partial_1 \\big/ \\operatorname{im}\\partial_2 \\cong \\mathbb{Z}^2 / \\mathbb{Z}^0 = \\mathbb{Z}^2 " }, { "math_id": 6, "text": "H_1(X) \\cong \\mathbb{Z}^{|E|-|V|+1}." }, { "math_id": 7, "text": "H_1(X) \\cong \\mathbb{Z}^{|E|-|V|+|C|}." }, { "math_id": 8, "text": "H_0(X) := \\ker \\partial_0 \\big/ \\operatorname{im} \\partial_{1} " }, { "math_id": 9, "text": " \\ker \\partial_0 = C_0 " }, { "math_id": 10, "text": "\\operatorname{im} \\partial_{1} " }, { "math_id": 11, "text": "H_0(X) " }, { "math_id": 12, "text": "H_0(X) \\cong \\mathbb{Z}^{|C|} ." }, { "math_id": 13, "text": "\\tilde{H_0}(X) \\cong \\mathbb{Z}^{|C|-1}." }, { "math_id": 14, "text": "\\partial_2(A)=c-d " }, { "math_id": 15, "text": "C_2 \\xrightarrow{\\partial_2} C_1 \\xrightarrow{\\partial_1} C_0 " }, { "math_id": 16, "text": "\\ker \\partial_1 " }, { "math_id": 17, "text": "\\operatorname{im} \\partial_2 " }, { "math_id": 18, "text": "\\ker \\partial_1 " }, { "math_id": 19, "text": "\\partial_2(B)=\\partial_2(A)=c-d " }, { "math_id": 20, "text": "H_2(X) := \\ker \\partial_2 \\cong \\mathbb{Z} " }, { "math_id": 21, "text": "\\partial_3: C_3 \\to C_2" }, { "math_id": 22, "text": "\\partial_3(C)=A - B " }, { "math_id": 23, "text": "H_2(X) := \\ker \\partial_2 \\big/ \\operatorname{im} \\partial_3 \\cong {0} " }, { "math_id": 24, "text": "C_k \\xrightarrow{\\partial_k} C_{k-1} \\cdots C_1 \\xrightarrow{\\partial_1} C_0 " }, { "math_id": 25, "text": "\\operatorname{im} \\partial_{k+1} " }, { "math_id": 26, "text": "\\ker \\partial_k " }, { "math_id": 27, "text": "\\ker\\partial_k \\big/ \\operatorname{im} \\partial_{k+1} " }, { "math_id": 28, "text": "H_k(X) := \\ker \\partial_k \\big/ \\operatorname{im} \\partial_{k+1} " } ]
https://en.wikipedia.org/wiki?curid=64675441
64675556
Vogel–Fulcher–Tammann equation
Viscosity equation The Vogel–Fulcher–Tammann equation, also known as Vogel–Fulcher–Tammann–Hesse equation or Vogel–Fulcher equation (abbreviated: VFT equation), is used to describe the viscosity of liquids as a function of temperature, and especially its strongly temperature dependent variation in the supercooled regime, upon approaching the glass transition. In this regime the viscosity of certain liquids can increase by up to 13 orders of magnitude within a relatively narrow temperature interval. The VFT equation reads as follows: formula_0 where formula_1 and formula_2 are empirical material-dependent parameters, and formula_3 is also an empirical fitting parameter, and typically lies about 50 °C below the glass transition temperature. These three parameters are normally used as adjustable parameters to fit the VFT equation to experimental data of specific systems. The VFT equation is named after Hans Vogel, Gordon Scott Fulcher (1884–1971) and Gustav Tammann (1861–1938).
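As a numerical illustration (not drawn from the sources of this article), the equation can be evaluated directly. In the C sketch below, the parameter values are arbitrary placeholders rather than fitted data for any particular liquid; they are chosen only to show how steeply the predicted viscosity rises as "T" approaches the Vogel–Fulcher temperature:

#include <stdio.h>
#include <math.h>

/* Vogel-Fulcher-Tammann viscosity: eta = eta0 * exp(B / (T - T_VF)).
   The parameters below are illustrative placeholders, not measured values. */
double vft_viscosity(double T, double eta0, double B, double T_VF) {
    return eta0 * exp(B / (T - T_VF));
}

int main(void) {
    double eta0 = 1e-4;   /* Pa*s (assumed) */
    double B    = 2000.0; /* K    (assumed) */
    double T_VF = 400.0;  /* K    (assumed) */

    /* Viscosity rises by many orders of magnitude as T approaches T_VF. */
    for (double T = 800.0; T >= 450.0; T -= 50.0)
        printf("T = %6.1f K   eta = %.3e Pa*s\n",
               T, vft_viscosity(T, eta0, B, T_VF));
    return 0;
}

With these placeholder parameters the predicted viscosity grows by roughly fifteen orders of magnitude between 800 K and 450 K, illustrating the strongly temperature-dependent behaviour described above.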
[ { "math_id": 0, "text": "\\eta = \\eta_0 \\cdot e^{\\frac{B}{T - T_\\mathrm{VF}}}" }, { "math_id": 1, "text": "\\eta_0" }, { "math_id": 2, "text": "B," }, { "math_id": 3, "text": "T_\\mathrm{VF}" } ]
https://en.wikipedia.org/wiki?curid=64675556
646796
Regula falsi
Numerical method used to approximate solutions of univariate equations In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations. As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation "x" + "x"/4 = 15. This is solved by false position. First, guess that "x" = 4 to obtain, on the left, 4 + 1 = 5. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value which is three times too small. To compensate, multiply x (currently set to 4) by 3 and substitute again to get 12 + 3 = 15, verifying that the solution is "x" = 12. Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained, and if it can, how fast the approximation can be found. Two historical types. Two basic types of false position method can be distinguished historically, "simple false position" and "double false position". "Simple false position" is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine "x" such that formula_0 if "a" and "b" are known. The method begins by using a test input value "x"′, and finding the corresponding output value "b"′ by multiplication: "ax"′ = "b"′. The correct answer is then found by proportional adjustment, "x" = ("b"/"b"′) "x"′. "Double false position" is aimed at solving more difficult problems that can be written algebraically in the form: determine "x" such that formula_1 if it is known that formula_2 Double false position is mathematically equivalent to linear interpolation. By using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm, given by formula_3, would be memorized and carried out by rote. Indeed, the rule as given by Robert Recorde in his "Ground of Artes" (c. 1542) is: &lt;poem&gt;
Gesse at this woorke as happe doth leade.
By chaunce to truthe you may procede.
And firste woorke by the question,
Although no truthe therein be don.
Suche falsehode is so good a grounde,
That truth by it will soone be founde.
From many bate to many mo,
From to fewe take to fewe also.
With to much ioyne to fewe againe,
To to fewe adde to manye plaine.
In crossewaies multiplye contrary kinde,
All truthe by falsehode for to fynde.
&lt;/poem&gt; For an affine linear function, formula_4 double false position provides the exact solution, while for a nonlinear function "f" it provides an approximation that can be successively improved by iteration. History. The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics. Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called "The Nine Chapters on the Mathematical Art" (九章算術), dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. 
There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition: Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53. Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the "Book of the Two Errors" ("Kitāb al-khaṭāʾayn"). The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as "hisāb al-khaṭāʾayn" ("reckoning by two errors"). It was used for centuries to solve practical problems such as commercial and juridical questions (estate partitions according to rules of Quranic inheritance), as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin. Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book "Liber Abaci" (AD 1202) to explaining and demonstrating the uses of double false position, terming the method "regulis elchatayn" after the "al-khaṭāʾayn" method that he had learned from Arab sources. In 1494, Pacioli used the term "el cataym" in his book "Summa de arithmetica", probably taking the term from Fibonacci. Other European writers would follow Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556. Pacioli's term nearly disappeared in the 16th century European works and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". "Regula Falsi" appears as the Latinized version of Rule of False as early as 1690. Several 16th century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth. For instance, in 1568 Humphrey Baker says: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The Rule of falsehoode is so named not for that it teacheth anye deceyte or falsehoode, but that by fayned numbers taken at all aduentures, it teacheth to finde out the true number that is demaunded, and this of all the vulgar Rules which are in practise) is ye most excellence. Numerical analysis. The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques. Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. 
That trial-and-error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated-estimate and "regula falsi" provides one of these. Given an equation, move all of its terms to one side so that it has the form, "f" ("x") = 0, where f is some function of the unknown variable x. A value c that satisfies this equation, that is, "f" ("c") = 0, is called a "root" or "zero" of the function f and is a solution of the original equation. If f is a continuous function and there exist two points "a"0 and "b"0 such that "f" ("a"0) and "f" ("b"0) are of opposite signs, then, by the intermediate value theorem, the function "f" has a root in the interval ("a"0, "b"0). There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and it may be computationally costly since it requires a computation of the function's derivative. Other methods are needed and one general class of methods are the "two-point bracketing methods". These methods proceed by producing a sequence of shrinking intervals ["a""k", "b""k"], at the kth step, such that ("a""k", "b""k") contains a root of "f". Two-point bracketing methods. These methods start with two x-values, initially found by trial-and-error, at which "f" ("x") has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where "f" ("x") has the sign opposite that of "f" ("c"). In the improbable case that "f" ("c") = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy. The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate. Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root finding methods such as Newton's method or the secant method. The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k, the current bracketing interval is ["a""k", "b""k"], then the new solution estimate ck is obtained by, formula_5 This ensures that ck is between ak and bk, thereby guaranteeing convergence toward the solution. Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every 3 iterations, the method gains approximately a factor of 23, i.e. roughly a decimal place, in accuracy. The "regula falsi" (false position) method. The convergence rate of the bisection method could possibly be improved by using a different solution estimate. The "regula falsi" method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. 
Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment. More precisely, suppose that in the k-th iteration the bracketing interval is ("a""k", "b""k"). Construct the line through the points ("a""k", "f" ("a""k")) and ("b""k", "f" ("b""k")), as illustrated. This line is a secant or chord of the graph of the function "f". In point-slope form, its equation is given by formula_6 Now choose "c""k" to be the x-intercept of this line, that is, the value of x for which "y" = 0, and substitute these values to obtain formula_7 Solving this equation for "c""k" gives: formula_8 This last symmetrical form has a computational advantage: As a solution is approached, ak and bk will be very close together, and nearly always of the same sign. Such a subtraction can lose significant digits. Because "f" ("b""k") and "f" ("a""k") are always of opposite sign the “subtraction” in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too). At iteration number "k", the number ck is calculated as above and then, if "f" ("a""k") and "f" ("c""k") have the same sign, set "a""k" + 1 = "c""k" and "b""k" + 1 = "b""k", otherwise set "a""k" + 1 = "a""k" and "b""k" + 1 = "c""k". This process is repeated until the root is approximated sufficiently well. The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge. The fact that "regula falsi" always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method. Analysis. Since the initial end-points "a"0 and "b"0 are chosen such that "f" ("a"0) and "f" ("b"0) are of opposite signs, at each step, one of the end-points will get closer to a root of "f". If the second derivative of "f" is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where "f" also has the same sign) will remain fixed for all subsequent iterations while the converging endpoint becomes updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign("f" ) = −sign("f"")). As a consequence, the linear approximation to "f" ("x"), which is used to pick the false position, does not improve as rapidly as possible. One example of this phenomenon is the function formula_9 on the initial bracket [−1,1]. The left end, −1, is never replaced (it does not change at first and after the first three iterations, "f"" is negative on the interval) and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (the number of accurate digits grows linearly, with a rate of convergence of 2/3). For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at "x" = 0 for 1/"x" or the sign function). 
In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point (for example at "x" = 0 for the function given by "f" ("x") = abs("x") − "x"2 when "x" ≠ 0 and by "f" (0) = 5, starting with the interval [-0.5, 3.0]). It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change, for example at "x" = ±1 in formula_10 The method of bisection avoids this hypothetical convergence problem. Improvements in "regula falsi". Though "regula falsi" always converges, usually considerably faster than bisection, there are situations that can slow its convergence – sometimes to a prohibitive degree. That problem isn't unique to "regula falsi": Other than bisection, "all" of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method "diverge" instead of converging – and often do so under the same conditions that slow "regula falsi's" convergence. But, though "regula falsi" is one of the best methods, and even in its original un-improved version would often be the best choice; for example, when Newton's isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's and "Successive-Substitutions" have failed to converge. "Regula falsi's" failure mode is easy to detect: The same end-point is retained twice in a row. The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to "regula falsi" have been proposed; two of them, the Illinois algorithm and the Anderson–Björk algorithm, are described below. The Illinois algorithm. The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, "f" ("c""k")) has the same sign as the previous one ("f" ("c""k" − 1)), meaning that the end point of the previous step will be retained. Hence: formula_11 or formula_12 down-weighting one of the endpoint values to force the next "c""k" to occur on that side of the function. The factor used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates. The above adjustment to "regula falsi" is called the Illinois algorithm by some scholars. Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position. Anderson–Björck algorithm. Suppose that in the k-th iteration the bracketing interval is ["a""k", "b""k"] and that the functional value of the new calculated estimate "c""k" has the same sign as "f" ("b""k"). In this case, the new bracketing interval ["a""k" + 1, "b""k" + 1] = ["a""k", "c""k"] and the left-hand endpoint has been retained. 
But, whereas the Illinois algorithm would multiply "f" ("a""k") by , Anderson–Björck algorithm multiplies it by "m", where "m" has one of the two following values: formula_13 For simple roots, Anderson–Björck performs very well in practice. ITP method. Given formula_14, formula_15 and formula_16 where formula_17 is the golden ration formula_18, in each iteration formula_19 the ITP method calculates the point formula_20 following three steps: The value of the function formula_28 on this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. This three step procedure guarantees that the minmax properties of the bisection method are enjoyed by the estimate as well as the superlinear convergence of the secant method. And, is observed to outperform both bisection and interpolation based methods under smooth and non-smooth functions. Practical considerations. When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods—when they're at their best and don't have a problem—bisection nevertheless is guaranteed to converge at a useful rate, roughly halving the error with each iteration – gaining roughly a decimal place of accuracy with every 3 iterations. For manual calculation, by calculator, one tends to want to use faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation, to the desired accuracy, so rapidly that there's no need to try to save time by using a less reliable method—and every method is less reliable than bisection. An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant. Then, a program could start with Newton's method, and, if Newton's isn't converging, switch to "regula falsi", maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate. When the change in "y" has become very small, and "x" is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence. Example: Growth of a bulrush. In chapter 7 of "The Nine Chapters", a root finding problem can be translated to modern language as follows: Excess And Deficit Problem #11: Answer: formula_29 days; the height is formula_30 units. Explanation: To understand this, we shall model the heights of the plants on day n (n = 1, 2, 3...) after a geometric series. formula_31Bulrush formula_32Club-rush For the sake of better notations, let formula_33 Rewrite the plant height series formula_34 in terms of k and invoke the sum formula. formula_35 formula_36 Now, use "regula falsi" to find the root of formula_37 formula_38 Set formula_39 and compute formula_40 which equals −1.5 (the "deficit"). Set formula_41 and compute formula_42 which equals 1.75 (the "excess"). Estimated root (1st iteration): formula_43 Example code. This example program, written in the C programming language, is an example of the Illinois algorithm. 
To find the positive number "x" where cos("x") = "x"3, the equation is transformed into a root-finding form "f" ("x") = cos("x") − "x"3 = 0.

#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

double f(double x)
{
    return cos(x) - x*x*x;
}

/* a,b: endpoints of an interval where we search
   e:   half of upper bound for relative error
   m:   maximal number of iterations */
double FalsiMethod(double (*f)(double), double a, double b, double e, int m)
{
    double c, fc;
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fa = f(a);
    double fb = f(b);

    for (n = 0; n &lt; m; n++) {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) &lt; e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb &gt; 0) {
            /* fc and fb have same sign, copy c to b */
            b = c;
            fb = fc;
            if (side == -1)
                fa /= 2;
            side = -1;
        } else if (fa * fc &gt; 0) {
            /* fc and fa have same sign, copy c to a */
            a = c;
            fa = fc;
            if (side == +1)
                fb /= 2;
            side = +1;
        } else {
            /* fc is very small (looks like zero): c is the root */
            break;
        }
    }
    return c;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(&amp;f, 0, 1, 5E-15, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
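For comparison, the Anderson–Björck down-weighting described above can be dropped into the same skeleton. The following sketch applies the factor "m" = 1 − "f"("c")/"f"("b") (or 1/2 when that is not positive) whenever an endpoint is retained, following the description given earlier; published presentations differ in exactly when the factor is applied, so this should be read as an illustrative adaptation rather than a reference implementation.

#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

double f(double x)
{
    return cos(x) - x*x*x;
}

/* False position with Anderson-Bjorck style down-weighting (sketch). */
double AndersonBjorck(double (*f)(double), double a, double b, double e, int iters)
{
    double fa = f(a), fb = f(b), c = a, fc;
    int n;

    for (n = 0; n &lt; iters; n++) {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) &lt; e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb &gt; 0) {
            /* c lies on the same side as b: b is replaced, a is retained.
               Scale the retained value fa by m = 1 - fc/fb (or 1/2). */
            double m = 1.0 - fc / fb;
            if (m &lt;= 0.0)
                m = 0.5;
            fa *= m;
            b = c;
            fb = fc;
        } else if (fa * fc &gt; 0) {
            /* symmetric case: a is replaced, b is retained */
            double m = 1.0 - fc / fa;
            if (m &lt;= 0.0)
                m = 0.5;
            fb *= m;
            a = c;
            fa = fc;
        } else {
            break;   /* f(c) is (numerically) zero */
        }
    }
    return c;
}

int main(void)
{
    printf("%0.15f\n", AndersonBjorck(&amp;f, 0, 1, 5E-15, 100));
    return 0;
}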
[ { "math_id": 0, "text": "ax = b ," }, { "math_id": 1, "text": "f(x) = ax + c = 0 ," }, { "math_id": 2, "text": "f(x_1) = b_1, \\qquad f(x_2) = b_2 ." }, { "math_id": 3, "text": " x = \\frac{b_1 x_2 - b_2 x_1}{b_1 - b_2}," }, { "math_id": 4, "text": "f(x) = ax + c ," }, { "math_id": 5, "text": "c_k=\\frac{a_k+b_k}{2}." }, { "math_id": 6, "text": " y - f(b_k) = \\frac{f(b_k)-f(a_k)}{b_k-a_k} (x-b_k). " }, { "math_id": 7, "text": " f(b_k) + \\frac{f(b_k)-f(a_k)}{b_k-a_k} (c_k-b_k) = 0. " }, { "math_id": 8, "text": "\n c_k = b_k - f(b_k) \\frac{b_k-a_k}{f(b_k)-f(a_k)} = \\frac{a_k f(b_k) - b_k f(a_k)}{f(b_k) - f(a_k)}." }, { "math_id": 9, "text": " f(x) = 2x^3-4x^2+3x " }, { "math_id": 10, "text": "f(x) = \\frac{1}{(x-1)^2} + \\frac{1}{(x+1)^2}." }, { "math_id": 11, "text": " c_k = \\frac{\\frac{1}{2}f(b_k) a_k - f(a_k) b_k}{\\frac{1}{2}f(b_k) - f(a_k)}" }, { "math_id": 12, "text": " c_k = \\frac{f(b_k) a_k - \\frac{1}{2}f(a_k) b_k}{f(b_k) - \\frac{1}{2}f(a_k)}," }, { "math_id": 13, "text": "\n\\begin{align}\nm' &= 1 - \\frac{f(c_k)}{f(b_k)},\\\\ \nm &=\n\\begin{cases}\n m' & \\text{if } m' > 0, \\\\\n \\frac{1}{2} & \\text{otherwise.}\n\\end{cases}\n\\end{align}\n" }, { "math_id": 14, "text": "\\kappa_1\\in (0,\\infty), \\kappa_2 \\in \\left[1,1+\\phi\\right) " }, { "math_id": 15, "text": "n_{1/2} \\equiv \\lceil(b_0-a_0)/2\\epsilon\\rceil " }, { "math_id": 16, "text": "n_0\\in[0,\\infty) " }, { "math_id": 17, "text": "\\phi " }, { "math_id": 18, "text": "\\tfrac{1}{2}(1+\\sqrt{5}) " }, { "math_id": 19, "text": "j = 0,1,2... " }, { "math_id": 20, "text": "x_{\\text{ITP}} " }, { "math_id": 21, "text": "x_{1/2} \\equiv \\frac{a+b}{2} " }, { "math_id": 22, "text": "x_f \\equiv \\frac{bf(a)-af(b)}{f(a)-f(b)} " }, { "math_id": 23, "text": "x_t \\equiv x_f+\\sigma \\delta " }, { "math_id": 24, "text": "\\sigma \\equiv \\text{sign}(x_{1/2}-x_f) " }, { "math_id": 25, "text": "\\delta \\equiv \\min\\{\\kappa_1|b-a|^{\\kappa_2},|x_{1/2}-x_f|\\} " }, { "math_id": 26, "text": "x_{\\text{ITP}} \\equiv x_{1/2} -\\sigma \\rho_k " }, { "math_id": 27, "text": "\\rho_k \\equiv \\min\\left\\{\\epsilon 2^{n_{1/2}+n_0-j} - \\frac{b-a}{2},|x_t-x_{1/2}|\\right\\} " }, { "math_id": 28, "text": "f(x_{\\text{ITP}}) " }, { "math_id": 29, "text": "(2 + \\frac{6}{13})" }, { "math_id": 30, "text": "(4 + \\frac{8}{10} + \\frac{6}{130})" }, { "math_id": 31, "text": "B(n) = \\sum_{i=1}^n 3 \\cdot \\frac{1}{2^{i-1}} \\quad" }, { "math_id": 32, "text": "C(n) = \\sum_{i=1}^n 1 \\cdot 2^{i-1} \\quad" }, { "math_id": 33, "text": "\\ k = i-1 ~." }, { "math_id": 34, "text": "\\ B(n),\\ C(n)\\ " }, { "math_id": 35, "text": "\\ B(n) = \\sum_{k=0}^{n-1} 3 \\cdot \\frac{1}{2^k} = 3 \\left( \\frac{1 - (\\tfrac{1}{2})^{n-1+1}}{1 - \\tfrac{1}{2}} \\right) = 6 \\left( 1 - \\frac{1}{2^n} \\right)" }, { "math_id": 36, "text": "\\ C(n) = \\sum_{k=0}^{n-1} 2^k = \\frac{~~ 1 - 2^n}{\\ 1 - 2\\ } = 2^n - 1\\ " }, { "math_id": 37, "text": "\\ (C(n) - B(n))\\ " }, { "math_id": 38, "text": "\\ F(n) := C(n) - B(n) = \\frac{6}{2^n} + 2^n - 7\\ " }, { "math_id": 39, "text": "\\ x_1 = 2\\ " }, { "math_id": 40, "text": "\\ F(x_1) = F(2)\\ " }, { "math_id": 41, "text": "\\ x_2 = 3\\ " }, { "math_id": 42, "text": "\\ F(x_2) = F(3)\\ " }, { "math_id": 43, "text": "\\ \\hat{x} ~=~ \\frac{~ x_1 F(x_2) - x_2 F(x_1) ~}{F(x_2) - F(x_1)} ~=~ \\frac{~ 2 \\times 1.75 + 3 \\times 1.5 ~}{1.75 + 1.5} ~\\approx~ 2.4615\\ " } ]
https://en.wikipedia.org/wiki?curid=646796
64685
Post correspondence problem
The Post correspondence problem is an undecidable decision problem that was introduced by Emil Post in 1946. Because it is simpler than the halting problem and the "Entscheidungsproblem" it is often used in proofs of undecidability. Definition of the problem. Let formula_0 be an alphabet with at least two symbols. The input of the problem consists of two finite lists formula_1 and formula_2 of words over formula_0. A solution to this problem is a sequence of indices formula_3 with formula_4 and formula_5 for all formula_6, such that formula_7 The decision problem then is to decide whether such a solution exists or not. Alternative definition. Define the two maps formula_8 formula_9 This gives rise to an equivalent alternative definition often found in the literature, according to which any two homomorphisms formula_10 with a common domain and a common codomain form an instance of the Post correspondence problem, which now asks whether there exists a nonempty word formula_11 in the domain such that formula_12. Another definition describes this problem easily as a type of puzzle. We begin with a collection of dominos, each containing two strings, one on each side. An individual domino looks like formula_13 and a collection of dominos looks like formula_14. The task is to make a list of these dominos (repetition permitted) so that the string we get by reading off the symbols on the top is the same as the string of symbols on the bottom. This list is called a match. The Post correspondence problem is to determine whether a collection of dominos has a match. For example, the following list is a match for this puzzle: formula_15. For some collections of dominos, finding a match may not be possible. For example, the collection formula_16 cannot contain a match because every top string is longer than the corresponding bottom string. Example instances of the problem. Example 1. Consider the following two lists: the α list is (a, ab, bba) and the β list is (baa, aa, bb). A solution to this problem would be the sequence (3, 2, 3, 1), because formula_17 Furthermore, since (3, 2, 3, 1) is a solution, so are all of its "repetitions", such as (3, 2, 3, 1, 3, 2, 3, 1), etc.; that is, when a solution exists, there are infinitely many solutions of this repetitive kind. However, if the two lists had consisted of only formula_18 and formula_19 from those sets, then there would have been no solution (the last letter of any such α string is not the same as the letter before it, whereas β only constructs pairs of the same letter). A convenient way to view an instance of a Post correspondence problem is as a collection of blocks, each carrying one of the strings αi in its top cell and the corresponding string βi in its bottom cell, there being an unlimited supply of each type of block. Thus the above example is viewed as three block types: a over baa, ab over aa, and bba over bb, where the solver has an endless supply of each of these three block types. A solution corresponds to some way of laying blocks next to each other so that the string in the top cells corresponds to the string in the bottom cells. The solution to the above example then corresponds to laying down the blocks in the order 3, 2, 3, 1, so that both the top row and the bottom row read bbaabbbaa. Example 2. Again using blocks to represent an instance of the problem, the following is an example that has infinitely many solutions in addition to the kind obtained by merely "repeating" a solution: for example, the three block types bb over b, ab over ba, and c over bc. In this instance, every sequence of the form
(1, 2, 2, ..., 2, 3) is a solution (in addition to all of their repetitions); for instance, the sequence (1, 2, 3) gives the string bbabc on both the top and the bottom. Proof sketch of undecidability. The most common proof for the undecidability of PCP describes an instance of PCP that can simulate the computation of an arbitrary Turing machine on a particular input. A match will occur if and only if the input would be accepted by the Turing machine. Because deciding if a Turing machine will accept an input is a basic undecidable problem, PCP cannot be decidable either. The following discussion is based on Michael Sipser's textbook "Introduction to the Theory of Computation". In more detail, the idea is that the string along the top and bottom will be a computation history of the Turing machine's computation. This means it will list a string describing the initial state, followed by a string describing the next state, and so on until it ends with a string describing an accepting state. The state strings are separated by some separator symbol (usually written #). According to the definition of a Turing machine, the full state of the machine consists of three parts: the current contents of the tape, the position of the tape head, and the current state of the finite control. Although the tape has infinitely many cells, only some finite prefix of these will be non-blank. We write these down as part of our state. To describe the state of the finite control, we create new symbols, labelled "q"1 through "q""k", for each of the finite state machine's "k" states. We insert the correct symbol into the string describing the tape's contents at the position of the tape head, thereby indicating both the tape head's position and the current state of the finite control. For the alphabet {0,1}, a typical state might look something like: 101101110"q"700110. A simple computation history would then look something like this: "q"0101#1"q"401#11"q"21#1"q"810. We start out with a block whose top is # and whose bottom is #"q"0"x"#, where "x" is the input string and "q"0 is the start state. The top starts out "lagging" the bottom by one state, and keeps this lag until the very end stage. Next, for each symbol "a" in the tape alphabet, as well as #, we have a "copy" block, with "a" on both the top and the bottom, which copies it unmodified from one state to the next. We also have a block for each position transition the machine can make, showing how the tape head moves, how the finite state changes, and what happens to the surrounding symbols. For example, if the tape head is over a 0 in state 4, and the machine then writes a 1 and moves right, changing to state 7, this corresponds to a block with "q"40 on the top and 1"q"7 on the bottom. Finally, when the top reaches an accepting state, the bottom needs a chance to finally catch up to complete the match. To allow this, we extend the computation so that once an accepting state is reached, each subsequent machine step will cause a symbol near the tape head to vanish, one at a time, until none remain. If "q""f" is an accepting state, we can represent this with transition blocks of the form "a""q""f" over "q""f" and "q""f""a" over "q""f", where "a" is a tape alphabet symbol. There are a number of details to work out, such as dealing with boundaries between states, making sure that our initial tile goes first in the match, and so on, but this shows the general idea of how a static tile puzzle can simulate a Turing machine computation. The previous example computation history "q"0101#1"q"401#11"q"21#1"q"810 is then represented as a solution of the resulting Post correspondence problem, obtained by laying down the starting block followed by the appropriate copy and transition blocks. Variants. Many variants of PCP have been considered.
One reason is that, when one tries to prove undecidability of some new problem by reducing from PCP, it often happens that the first reduction one finds is not from PCP itself but from an apparently weaker version. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
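Although the problem is undecidable in general, small instances can be explored by a bounded search. The following sketch performs a depth-first search over index sequences of bounded length for the two lists of Example 1 above, pruning any partial sequence in which neither string is a prefix of the other; the names and the depth limit MAX_DEPTH are arbitrary choices made for illustration. On this instance it prints the solution (3, 2, 3, 1).

#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

/* Dominoes of Example 1: alpha strings on top, beta strings on the bottom. */
static const char *top[] = { "a", "ab", "bba" };
static const char *bot[] = { "baa", "aa", "bb" };
enum { N = 3, MAX_LEN = 64, MAX_DEPTH = 8 };

/* Returns 1 if one string is a prefix of the other. */
static int prefix_compatible(const char *s, const char *t)
{
    while (*s &amp;&amp; *t) {
        if (*s != *t) return 0;
        s++; t++;
    }
    return 1;
}

/* Depth-first search over index sequences of length up to MAX_DEPTH. */
static int search(char *t, char *b, int *seq, int depth)
{
    int i, k;
    if (depth &gt; 0 &amp;&amp; strcmp(t, b) == 0) {
        printf("match:");
        for (k = 0; k &lt; depth; k++) printf(" %d", seq[k] + 1);
        printf("  (string %s)\n", t);
        return 1;
    }
    if (depth == MAX_DEPTH) return 0;
    for (i = 0; i &lt; N; i++) {
        char t2[MAX_LEN], b2[MAX_LEN];
        if (strlen(t) + strlen(top[i]) &gt;= MAX_LEN ||
            strlen(b) + strlen(bot[i]) &gt;= MAX_LEN) continue;
        strcpy(t2, t); strcat(t2, top[i]);
        strcpy(b2, b); strcat(b2, bot[i]);
        if (!prefix_compatible(t2, b2)) continue;   /* dead end */
        seq[depth] = i;
        if (search(t2, b2, seq, depth + 1)) return 1;
    }
    return 0;
}

int main(void)
{
    char t[MAX_LEN] = "", b[MAX_LEN] = "";
    int seq[MAX_DEPTH];
    if (!search(t, b, seq, 0))
        printf("no match of length up to %d\n", MAX_DEPTH);
    return 0;
}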
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\alpha_{1}, \\ldots, \\alpha_{N}" }, { "math_id": 2, "text": "\\beta_{1}, \\ldots, \\beta_{N}" }, { "math_id": 3, "text": "(i_k)_{1 \\le k \\le K}" }, { "math_id": 4, "text": "K \\ge 1" }, { "math_id": 5, "text": " 1 \\le i_k \\le N" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\alpha_{i_1} \\ldots \\alpha_{i_K} = \\beta_{i_1} \\ldots \\beta_{i_K}." }, { "math_id": 8, "text": "g: (i_1,\\ldots,i_K) \\mapsto \\alpha_{i_1} \\ldots \\alpha_{i_K}" }, { "math_id": 9, "text": "h: (i_1,\\ldots,i_K) \\mapsto \\beta_{i_1} \\ldots \\beta_{i_K}." }, { "math_id": 10, "text": "g,h" }, { "math_id": 11, "text": "w" }, { "math_id": 12, "text": "g(w)=h(w)" }, { "math_id": 13, "text": "\\begin{bmatrix}a \\\\ ab\\end{bmatrix} " }, { "math_id": 14, "text": "{ \\begin{bmatrix}bc \\\\ ca\\end{bmatrix}, \\begin{bmatrix}a \\\\ ab\\end{bmatrix}, \\begin{bmatrix}ca \\\\ a\\end{bmatrix}, \\begin{bmatrix}abc \\\\ c\\end{bmatrix} }" }, { "math_id": 15, "text": "{ \\begin{bmatrix}a \\\\ ab\\end{bmatrix}, \\begin{bmatrix}bc \\\\ ca\\end{bmatrix}, \\begin{bmatrix}a \\\\ ab\\end{bmatrix}, \\begin{bmatrix}abc \\\\ c\\end{bmatrix} }" }, { "math_id": 16, "text": "{ \\begin{bmatrix}abc \\\\ ab\\end{bmatrix}, \\begin{bmatrix}ca \\\\ a\\end{bmatrix}, \\begin{bmatrix}acc \\\\ ba\\end{bmatrix} }" }, { "math_id": 17, "text": "\\alpha_3 \\alpha_2 \\alpha_3 \\alpha_1 = bba \\cdot ab \\cdot bba \\cdot a = bbaabbbaa = bb \\cdot aa \\cdot bb \\cdot baa = \\beta_{3} \\beta_{2} \\beta_{3} \\beta_{1}." }, { "math_id": 18, "text": "\\alpha_2, \\alpha_3" }, { "math_id": 19, "text": "\\beta_{2}, \\beta_{3}" }, { "math_id": 20, "text": "i_1, i_2,\\ldots" }, { "math_id": 21, "text": "\\alpha_{i_1} \\cdots \\alpha_{i_k}" }, { "math_id": 22, "text": "\\beta_{i_1} \\cdots \\beta_{i_k}" }, { "math_id": 23, "text": "\\alpha_i" }, { "math_id": 24, "text": "\\beta_i" }, { "math_id": 25, "text": "i_1, i_2, \\ldots" }, { "math_id": 26, "text": "\\{1,\\ldots,N\\}" } ]
https://en.wikipedia.org/wiki?curid=64685
64685253
Nonlinear mixed-effects model
Nonlinear mixed-effects models constitute a class of statistical models generalizing linear mixed-effects models. Like linear mixed-effects models, they are particularly useful in settings where there are multiple measurements within the same statistical units or when there are dependencies between measurements on related statistical units. Nonlinear mixed-effects models are applied in many fields including medicine, public health, pharmacology, and ecology. Definition. While any statistical model containing both fixed effects and random effects is an example of a nonlinear mixed-effects model, the most commonly used models are members of the class of nonlinear mixed-effects models for repeated measures formula_0 where formula_1 is the number of groups (e.g. subjects), formula_2 is the number of observations for the formula_3-th group, formula_4 is a real-valued differentiable function of a group-specific parameter vector formula_5 and a covariate vector formula_6, the parameter vector is modeled as formula_7 with fixed effects formula_8 and random effects formula_9, and formula_10 is a within-group error term. Estimation. When the model is only nonlinear in fixed effects and the random effects are Gaussian, maximum-likelihood estimation can be done using nonlinear least squares methods, although asymptotic properties of estimators and test statistics may differ from the conventional general linear model. In the more general setting, there exist several methods for doing maximum-likelihood estimation or maximum a posteriori estimation in certain classes of nonlinear mixed-effects models – typically under the assumption of normally distributed random variables. A popular approach is the Lindstrom-Bates algorithm which relies on iteratively optimizing a nonlinear problem, locally linearizing the model around this optimum and then employing conventional methods from linear mixed-effects models to do maximum likelihood estimation. Stochastic approximation of the expectation-maximization algorithm gives an alternative approach for doing maximum-likelihood estimation. Applications. Example: Disease progression modeling. Nonlinear mixed-effects models have been used for modeling progression of disease. In progressive disease, the temporal patterns of progression on outcome variables may follow a nonlinear temporal shape that is similar between patients. However, the stage of disease of an individual may not be known or only partially known from what can be measured. Therefore, a latent time variable that describes individual disease stage (i.e. where the patient is along the nonlinear mean curve) can be included in the model. Example: Modeling cognitive decline in Alzheimer's disease. Alzheimer's disease is characterized by a progressive cognitive deterioration. However, patients may differ widely in cognitive ability and reserve, so cognitive testing at a single time point can often only be used to coarsely group individuals in different stages of disease. Now suppose we have a set of longitudinal cognitive data formula_11 from formula_12 individuals that are each categorized as having either normal cognition (CN), mild cognitive impairment (MCI) or dementia (DEM) at the baseline visit (time formula_13 corresponding to measurement formula_14). These longitudinal trajectories can be modeled using a nonlinear mixed effects model that allows differences in disease state based on baseline categorization: formula_15 where formula_16 is a nonlinear function of disease stage with fixed-effects parameters formula_17, formula_18 is the time since baseline of observation "j" for subject "i", formula_19 and formula_20 are indicator variables equal to 1 if subject "i" was categorized as MCI or dementia at baseline, formula_21 and formula_22 are fixed effects describing the difference in disease stage of the MCI and dementia groups relative to normal cognition, and formula_23 is a random effect describing the disease stage of the individual subject. An example of such a model with an exponential mean function fitted to longitudinal measurements of the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) is shown in the box. As shown, the inclusion of fixed effects of baseline categorization (MCI or dementia relative to normal cognition) and the random effect of individual continuous disease stage formula_23 aligns the trajectories of cognitive deterioration to reveal a common pattern of cognitive decline.
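To make the structure of a model of this kind concrete, the following sketch simulates trajectories from a model of the same general shape, using an exponential mean function and made-up values for the stage offsets, the random-effect standard deviation and the residual noise. None of the numbers are estimates from any study, and f_mean is a hypothetical stand-in for a fitted mean curve.

#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;math.h&gt;

#define PI 3.14159265358979323846

/* Illustrative exponential mean function: disease stage to mean score.
   The shape and all parameter values in this file are assumptions. */
static double f_mean(double stage)
{
    return 5.0 * exp(0.25 * stage);
}

/* Standard normal draw via the Box-Muller transform. */
static double randn(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    /* Hypothetical fixed effects: stage offsets for baseline MCI and dementia. */
    const double beta_mci = 3.0, beta_dem = 6.0;
    const double sd_b   = 1.5;   /* SD of the random disease-stage effect b_i */
    const double sd_eps = 1.0;   /* SD of the residual error                  */
    const char *group[] = { "CN", "MCI", "DEM" };
    int i, j;

    srand(1);
    for (i = 0; i &lt; 3; i++) {              /* one subject per baseline group */
        double offset = (i == 1) * beta_mci + (i == 2) * beta_dem;
        double b_i = sd_b * randn();       /* subject-specific stage shift   */
        printf("subject %d (%s):", i + 1, group[i]);
        for (j = 0; j &lt; 5; j++) {          /* yearly visits t = 0..4         */
            double t = j;
            double y = f_mean(t + offset + b_i) + sd_eps * randn();
            printf(" %.1f", y);
        }
        printf("\n");
    }
    return 0;
}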
Example: Growth analysis. Growth phenomena often follow nonlinear patterns (e.g. logistic growth, exponential growth, and hyperbolic growth). Factors such as nutrient deficiency may both directly affect the measured outcome (e.g. organisms with lack of nutrients end up smaller) and also affect its timing (e.g. organisms with lack of nutrients grow at a slower pace). If a model fails to account for the differences in timing, the estimated population-level curves may smooth out finer details due to lack of synchronization between organisms. Nonlinear mixed-effects models enable simultaneous modeling of individual differences in growth outcomes and timing. Example: Modeling human height. Models for estimating the mean curves of human height and weight as a function of age and the natural variation around the mean are used to create growth charts. The growth of children can however become desynchronized due to both genetic and environmental factors. For example, age at onset of puberty and its associated height spurt can vary several years between adolescents. Therefore, cross-sectional studies may underestimate the magnitude of the pubertal height spurt because age is not synchronized with biological development. The differences in biological development can be modeled using random effects formula_24 that describe a mapping of observed age to a latent biological age using a so-called "warping function" formula_25. A simple nonlinear mixed-effects model with this structure is given by formula_26 where formula_27 is a nonlinear function describing the mean height as a function of (warped) biological age, parameterized by fixed effects, formula_24 are the individual random effects of the warping function, and formula_10 is a residual error term. There exist several methods and software packages for fitting such models. The so-called "SITAR" model can fit such models using warping functions that are affine transformations of time (i.e. additive shifts in biological age and differences in rate of maturation), while the so-called "pavpop" model can fit models with smoothly-varying warping functions. An example of the latter is shown in the box. Example: Population Pharmacokinetic/pharmacodynamic modeling. PK/PD models for describing exposure-response relationships such as the Emax model can be formulated as nonlinear mixed-effects models. The mixed-model approach allows modeling of both population-level and individual differences in effects that have a nonlinear effect on the observed outcomes, for example the rate at which a compound is being metabolized or distributed in the body. Example: COVID-19 epidemiological modeling. The framework of nonlinear mixed-effects models can be used to describe the infection trajectories of subjects and to understand common features shared across subjects. In epidemiological problems, the subjects can be countries, states, or counties, etc. This can be particularly useful in estimating a future trend of the epidemic in an early stage of a pandemic, when very little information is known about the disease. Example: Prediction of oil production curve of shale oil wells at a new location with latent kriging. The eventual success of petroleum development projects depends to a large degree on well construction costs. As for unconventional oil and gas reservoirs, because of very low permeability, and a flow mechanism very different from that of conventional reservoirs, estimates for the well construction cost often contain high levels of uncertainty, and oil companies need to make heavy investment in the drilling and completion phase of the wells.
The overall recent commercial success rate of horizontal wells in the United States is known to be 65%, which implies that only 2 out of 3 drilled wells will be commercially successful. For this reason, one of the crucial tasks of petroleum engineers is to quantify the uncertainty associated with oil or gas production from shale reservoirs, and further, to predict the approximate production behavior of a new well at a new location, given specific completion data, before actual drilling takes place, in order to save a large portion of well construction costs. The framework of nonlinear mixed-effects models can be extended to account for spatial association by incorporating a geostatistical process such as a Gaussian process in the second stage of the model, as follows: formula_29 formula_30 formula_31 formula_32 formula_33 where formula_34 is the mean production curve determined by the curve parameters formula_35, formula_36 is a vector of completion covariates of the formula_3-th well, formula_37 is the spatial location (coordinates) of the formula_3-th well, formula_38 is Gaussian white noise with variance formula_39, and formula_40 is a Gaussian process with covariance kernel formula_41. The Gaussian process regressions used on the latent level (the second stage) eventually produce kriging predictors for the curve parameters formula_42 that dictate the shape of the mean curve formula_34 on the data level (the first stage). Because kriging techniques are employed on the latent level, this technique is called latent kriging. The right panels show the prediction results of the latent kriging method applied to two test wells in the Eagle Ford Shale Reservoir of South Texas. Bayesian nonlinear mixed-effects model. The framework of Bayesian hierarchical modeling is frequently used in diverse applications. Particularly, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model is represented as the following three-stage hierarchy: Stage 1: Individual-Level Model formula_43 Stage 2: Population Model formula_44 Stage 3: Prior formula_45 Here, formula_28 denotes the continuous response of the formula_3-th subject at the time point formula_18, and formula_46 is the formula_47-th covariate of the formula_3-th subject. Parameters involved in the model are written in Greek letters. formula_48 is a known function parameterized by the formula_49-dimensional vector formula_50. Typically, formula_4 is a nonlinear function and describes the temporal trajectory of individuals. In the model, formula_10 and formula_51 describe within-individual variability and between-individual variability, respectively. If Stage 3: Prior is not considered, then the model reduces to a frequentist nonlinear mixed-effects model. A central task in the application of Bayesian nonlinear mixed-effects models is to evaluate the posterior density: formula_52 formula_53 formula_54 The panel on the right displays the Bayesian research cycle using a Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function formula_55; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{y}_{ij} = f(\\phi_{ij},{v}_{ij}) + \\epsilon_{ij},\\quad i =1,\\ldots, M, \\, j = 1,\\ldots, n_i" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "n_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\phi_{ij}" }, { "math_id": 6, "text": "v_{ij}" }, { "math_id": 7, "text": "\\phi_{ij}= \\boldsymbol{A}_{ij}\\beta + \\boldsymbol{B}_{ij} \\boldsymbol{b}_{i}," }, { "math_id": 8, "text": "\\beta" }, { "math_id": 9, "text": "\\boldsymbol{b}_{i}" }, { "math_id": 10, "text": "\\epsilon_{ij}" }, { "math_id": 11, "text": "(y_{i1}, \\ldots, y_{in_i})" }, { "math_id": 12, "text": "i=1,\\ldots,M" }, { "math_id": 13, "text": "t_{i1} =0" }, { "math_id": 14, "text": "y_{i1} " }, { "math_id": 15, "text": "{y}_{ij} = f_{\\tilde\\beta}(t_{ij} + A^{MCI}_i \\beta^{MCI} + A^{DEM}_i \\beta^{DEM} + b_i) + \\epsilon_{ij},\\quad i =1,\\ldots, M, \\, j = 1,\\ldots, n_i" }, { "math_id": 16, "text": "f_{\\tilde\\beta}" }, { "math_id": 17, "text": "\\tilde\\beta" }, { "math_id": 18, "text": "t_{ij}" }, { "math_id": 19, "text": "A^{MCI}_i" }, { "math_id": 20, "text": "A^{DEM}_i" }, { "math_id": 21, "text": "\\beta^{MCI}" }, { "math_id": 22, "text": "\\beta^{DEM}" }, { "math_id": 23, "text": "b_{i}" }, { "math_id": 24, "text": "\\boldsymbol{w}_i" }, { "math_id": 25, "text": "v(\\cdot, \\boldsymbol{w}_i)" }, { "math_id": 26, "text": "{y}_{ij} = f_{\\beta}(v(t_{ij}, \\boldsymbol{w}_i)) + \\epsilon_{ij},\\quad i =1,\\ldots, M, \\, j = 1,\\ldots, n_i" }, { "math_id": 27, "text": "f_{\\beta}" }, { "math_id": 28, "text": "y_{ij}" }, { "math_id": 29, "text": "{y}_{it} = \\mu(t;\\theta_{1i},\\theta_{2i},\\theta_{3i}) + \\epsilon_{it},\\quad \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad i =1,\\ldots, N, \\, t = 1,\\ldots, T_i," }, { "math_id": 30, "text": "\\theta_{li}=\\theta_{l}(s_i) = \\alpha_l + \\sum_{j=1}^{p}\\beta_{lj}x_j + \\epsilon_{l}(s_i) + \\eta_{l}(s_i), \\quad \\epsilon_{l}(\\cdot) \\sim GWN(\\sigma_l^2), \\quad\\quad l=1,2,3," }, { "math_id": 31, "text": "\\eta_{l}(\\cdot) \\sim GP(0,K_{\\gamma_{l}}(\\cdot, \\cdot)),\\quad K_{\\gamma_{l}}(s_i,s_j) = \\gamma_l^2 \\exp (-e^{\\rho_l} \\| s_i - s_j \\|^2), \\quad\\quad\\quad l=1,2,3," }, { "math_id": 32, "text": " \\beta_{lj}|\\lambda_{lj},\\tau_l,\\sigma_l \\sim N(0,\\sigma_l^2 \\tau_l^2 \\lambda_{lj}^2 ),\\quad \\sigma,\\lambda_{lj},\\tau_l,\\sigma_l\\sim C^{+}(0,1), \\quad\\quad\\quad\\quad\\quad\\quad\\quad l=1,2,3,\\, j =1,\\cdots, p," }, { "math_id": 33, "text": " \\alpha_l \\sim \\pi(\\alpha)\\propto 1, \\quad \\sigma_l^2 \\sim \\pi(\\sigma^2) \\propto 1/\\sigma^2, \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad l=1,2,3, " }, { "math_id": 34, "text": "\\mu(t;\\theta_{1},\\theta_{2},\\theta_{3})" }, { "math_id": 35, "text": "(\\theta_{1},\\theta_{2},\\theta_{3})" }, { "math_id": 36, "text": "x_i = (x_{i1},\\cdots,x_{ip})^{\\top}" }, { "math_id": 37, "text": "s_i = (s_{i1},s_{i2})^{\\top}" }, { "math_id": 38, "text": "\\epsilon_{l}(\\cdot)" }, { "math_id": 39, "text": "\\sigma_l^2" }, { "math_id": 40, "text": "\\eta_{l}(\\cdot)" }, { "math_id": 41, "text": "K_{\\gamma_{l}}(\\cdot, \\cdot)" }, { "math_id": 42, "text": "(\\theta_{1i},\\theta_{2i},\\theta_{3i}), (i=1,\\cdots,N)," }, { "math_id": 43, "text": "{y}_{ij} = f(t_{ij};\\theta_{1i},\\theta_{2i},\\ldots,\\theta_{li},\\ldots,\\theta_{Ki} ) + \\epsilon_{ij},\\quad \\epsilon_{ij} \\sim N(0, \\sigma^2), \\quad i =1,\\ldots, N, \\, j = 1,\\ldots, M_i." 
}, { "math_id": 44, "text": "\\theta_{li}= \\alpha_l + \\sum_{b=1}^{P}\\beta_{lb}x_{ib} + \\eta_{li}, \\quad \\eta_{li} \\sim N(0, \\omega_l^2), \\quad i =1,\\ldots, N, \\, l=1,\\ldots, K." }, { "math_id": 45, "text": " \\sigma^2 \\sim \\pi(\\sigma^2),\\quad \\alpha_l \\sim \\pi(\\alpha_l), \\quad (\\beta_{l1},\\ldots,\\beta_{lb},\\ldots,\\beta_{lP}) \\sim \\pi(\\beta_{l1},\\ldots,\\beta_{lb},\\ldots,\\beta_{lP}), \\quad \\omega_l^2 \\sim \\pi(\\omega_l^2), \\quad l=1,\\ldots, K." }, { "math_id": 46, "text": "x_{ib}" }, { "math_id": 47, "text": "b" }, { "math_id": 48, "text": "f(t ; \\theta_{1},\\ldots,\\theta_{K})" }, { "math_id": 49, "text": "K" }, { "math_id": 50, "text": "(\\theta_{1},\\ldots,\\theta_{K})" }, { "math_id": 51, "text": "\\eta_{li}" }, { "math_id": 52, "text": "\\pi(\\{\\theta_{li}\\}_{i=1,l=1}^{N,K},\\sigma^2, \\{\\alpha_l\\}_{l=1}^K, \\{\\beta_{lb}\\}_{l=1,b=1}^{K,P},\\{\\omega_l\\}_{l=1}^K | \\{y_{ij}\\}_{i=1,j=1}^{N,M_i}) " }, { "math_id": 53, "text": "\\propto \\pi(\\{y_{ij}\\}_{i=1,j=1}^{N,M_i}, \\{\\theta_{li}\\}_{i=1,l=1}^{N,K},\\sigma^2, \\{\\alpha_l\\}_{l=1}^K, \\{\\beta_{lb}\\}_{l=1,b=1}^{K,P},\\{\\omega_l\\}_{l=1}^K)" }, { "math_id": 54, "text": "= \\underbrace{\\pi(\\{y_{ij}\\}_{i=1,j=1}^{N,M_i} |\\{\\theta_{li}\\}_{i=1,l=1}^{N,K},\\sigma^2)}_{Stage 1: Individual-Level Model}\n\\times \n\\underbrace{\\pi(\\{\\theta_{li}\\}_{i=1,l=1}^{N,K}|\\{\\alpha_l\\}_{l=1}^K, \\{\\beta_{lb}\\}_{l=1,b=1}^{K,P},\\{\\omega_l\\}_{l=1}^K)}_{Stage 2: Population Model}\n\\times \n\\underbrace{p(\\sigma^2, \\{\\alpha_l\\}_{l=1}^K, \\{\\beta_{lb}\\}_{l=1,b=1}^{K,P},\\{\\omega_l\\}_{l=1}^K)}_{Stage 3: Prior}\n" }, { "math_id": 55, "text": " f " } ]
https://en.wikipedia.org/wiki?curid=64685253
64688677
Homological connectivity
Algebra concept In algebraic topology, homological connectivity is a property describing a topological space based on its homology groups. Definitions. Background. "X" is "homologically-connected" if its 0-th homology group equals Z, i.e. formula_0, or equivalently, its 0-th reduced homology group is trivial: formula_1. "X" is "homologically 1-connected" if it is homologically-connected, and additionally, its first homology group is trivial, i.e. formula_4. In general, for any integer "k", "X" is "homologically k-connected" if its reduced homology groups of order 0, 1, ..., "k" are all trivial. Note that the reduced homology group equals the homology group in orders 1, ..., "k" (only the 0-th reduced homology group is different). Connectivity. The "homological connectivity" of "X", denoted connH(X), is the largest "k" ≥ 0 for which "X" is homologically "k"-connected. Examples: if "X" has a set "C" of connected components, then formula_2 and formula_3, so "X" is homologically connected exactly when it has a single connected component; if "X" is a connected graph with vertex set "V" and edge set "E", then formula_0 and formula_5, so "X" is homologically connected, and it is homologically 1-connected exactly when it is a tree. Some computations become simpler if the connectivity is defined with an offset of 2, that is, formula_6. The eta of the empty space is 0, which is its smallest possible value. The eta of any disconnected space is 1. Dependence on the field of coefficients. The basic definition considers homology groups with integer coefficients. Considering homology groups with other coefficients leads to other definitions of connectivity. For example, "X" is "F2-homologically 1-connected" if its 1st homology group with coefficients from F2 (the cyclic field of size 2) is trivial, i.e.: formula_7. Homological connectivity in specific spaces. For homological connectivity of simplicial complexes, see simplicial homology. Homological connectivity has been calculated for various specific spaces. Relation with homotopical connectivity. The Hurewicz theorem relates the homological connectivity formula_8 to the homotopical connectivity, denoted by formula_9. For any "X" that is simply-connected, that is, formula_10, the connectivities are the same: formula_11 If "X" is not simply-connected (formula_12), then inequality holds: formula_13 but it may be strict. See Homotopical connectivity. See also. Meshulam's game is a game played on a graph "G", that can be used to calculate a lower bound on the homological connectivity of the independence complex of "G". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H_0(X)\\cong \\mathbb{Z}" }, { "math_id": 1, "text": "\\tilde{H_0}(X)\\cong 0" }, { "math_id": 2, "text": "H_0(X)\\cong \\mathbb{Z}^{|C|}" }, { "math_id": 3, "text": "\\tilde{H_0}(X)\\cong \\mathbb{Z}^{|C|-1}" }, { "math_id": 4, "text": "H_1(X)\\cong 0" }, { "math_id": 5, "text": "H_1(X) \\cong \\mathbb{Z}^{|E|-|V|+1}" }, { "math_id": 6, "text": "\\eta_H(X) := \\text{conn}_H(X) + 2" }, { "math_id": 7, "text": "H_1(X; \\mathbb{F}_2)\\cong 0" }, { "math_id": 8, "text": "\\text{conn}_H(X)" }, { "math_id": 9, "text": "\\text{conn}_{\\pi}(X)" }, { "math_id": 10, "text": "\\text{conn}_{\\pi}(X)\\geq 1" }, { "math_id": 11, "text": "\\text{conn}_H(X) = \\text{conn}_{\\pi}(X)" }, { "math_id": 12, "text": "\\text{conn}_{\\pi}(X)\\leq 0" }, { "math_id": 13, "text": "\\text{conn}_H(X)\\geq \\text{conn}_{\\pi}(X)" } ]
https://en.wikipedia.org/wiki?curid=64688677
64691475
Rubber band experiment
The rubber band experiment demonstrates entropic force and a refrigeration cycle using a simple rubber band. The rubber band experiment is performed by sensing the temperature of a rubber band as it is stretched, and then released. The rubber band first heats up as it is stretched, and is then allowed to equilibrate back to room temperature. The rubber band cools below room temperature when the tension is released; the effect is large enough to be noticed by touch. The rubber band experiment is often used as a simple example when explaining entropy and energy in high school physics classes. Thermodynamic model. The decrease in the temperature of the rubber band in a spontaneous process at ambient temperature can be explained using the Helmholtz free energy formula_0 where "dF" is the change in free energy, "dL" is the change in length, "τ" is the tension, "dT" is the change in temperature and "S" is the entropy. Rearranging to see the change in temperature we obtain formula_1. In a spontaneous process "dF" is negative and "τ" and "S" are positive; in this case "dL" is negative, so it is possible for "dT" to be negative. The rubber band experiment can be modeled as a thermodynamic cycle as shown in the diagram. The stretching of the rubber band is an isobaric expansion (A → B) that increases the energy but reduces the entropy (this is a property of rubber bands due to rubber elasticity). Holding the rubber band in tension at ambient temperature is an isochoric cooling process (B → C) in which the energy decreases (and the entropy remains approximately stable). Releasing the tension from the rubber band is a process of isobaric cooling (C → D) in which the energy decreases but the entropy increases. The rubber band then equilibrates back to room temperature in an isochoric heating process (D → A) completing the cycle. A simple qualitative model. The model can be derived from two experimental observations on rubber bands. The first is that the internal energy of a rubber band is independent of length: "U"="c" "L"0"T" where "c" is a constant, "L"0 is the resting length of the rubber band and "T" is the temperature. The second is that tension in a rubber band increases linearly with the length of the rubber band up to the elasticity limit, formula_2 where "τ" is the tension, "L"1 is the elasticity limit, L is the current length, "b" is a constant, "T" is the temperature and Δ"L" is the change in length of the rubber band. Requiring the consistency of the two equations of state, we obtain the condition formula_3. Integrating the result we obtain formula_4 where "dS" is the change in entropy. We can see that the entropy of a rubber band will decrease when stretched. After the rubber band equilibrates back to room temperature it has the same internal energy it had at the beginning according to our model but a lower entropy, because "dU" is 0 and "b", Δ"L" and "dL" are positive. When the tension is removed the rubber band will spontaneously equilibrate to a lower-energy and higher-entropy state, resulting in a lower temperature. Ideal chain polymer. The decrease in the entropy of a rubber band can be explained using the ideal chain model, where the rubber band can be modeled as a bundle of long chain polymers. The free variables are the angles between links in the polymer. The more the chain is stretched (the larger its end-to-end length "L"), the fewer permutations of angles result in that length.
In the ideal chain model the entropy is given by formula_5 where "K"B is the Boltzmann constant and "Ω" is the number of possible permutations of the polymer. As the rubber band is stretched, "Ω" decreases as a function of length, and therefore the entropy decreases as a function of length. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
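As a numerical illustration of this argument, the following sketch counts the configurations of a freely jointed chain of "N" links restricted to one dimension (a simplifying assumption made only to keep the example short) and evaluates formula_5 for several end-to-end lengths, showing that the entropy falls as the chain is extended. The link count and the step of the table are arbitrary choices.

#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

/* Natural log of the binomial coefficient C(n, k), via lgamma. */
static double log_binomial(int n, int k)
{
    return lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0);
}

int main(void)
{
    const double kB = 1.380649e-23;   /* Boltzmann constant, J/K        */
    const int N = 100;                /* number of links (illustrative) */
    int L;

    printf("   L    ln Omega(L) = S/kB      S(L) [J/K]\n");
    for (L = 0; L &lt;= N; L += 20) {
        /* Number of +1/-1 link sequences with end-to-end sum L: C(N, (N+L)/2). */
        double lnOmega = log_binomial(N, (N + L) / 2);
        printf("%4d   %18.4f   %14.6e\n", L, lnOmega, kB * lnOmega);
    }
    return 0;
}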
[ { "math_id": 0, "text": "\n\\begin{align}\ndF = \\tau dL - S dT\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\ndT = \\frac{\\tau}{S} dL - \\frac{1}{S} dF\n\\end{align}\n" }, { "math_id": 2, "text": "\n\\begin{align}\n\\tau = \\overline{b}T \\frac{L - L_0}{L_1 - L_0} = b T \\Delta L \n\\end{align}\n" }, { "math_id": 3, "text": "\n\\begin{align}\n& \\frac {\\partial }{\\partial L} \\frac{1}{T} = - \\frac {\\partial }{\\partial U} \\frac{\\tau}{T}\n \\end{align}\n" }, { "math_id": 4, "text": "\n\\begin{align}\n& d S = \\frac{1}{T}dU -\\frac{\\tau}{T}dL = cL_0 \\frac{dU}{U} -b \\Delta L dL \n\\end{align}\n" }, { "math_id": 5, "text": "\n\\begin{align}\n S(L) = K_B \\ln \\Omega(L) \n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=64691475
64691694
Polynomial evaluation
Algorithms for polynomial evaluation In mathematics and computer science, polynomial evaluation refers to computation of the value of a polynomial when its indeterminates are substituted for some values. In other words, evaluating the polynomial formula_0 at formula_1 consists of computing formula_2 For evaluating the univariate polynomial formula_3 the most naive method would use formula_4 multiplications to compute formula_5, use formula_6 multiplications to compute formula_7 and so on for a total of formula_8 multiplications and formula_4 additions. Using better methods, such as Horner's rule, this can be reduced to formula_4 multiplications and formula_4 additions. If some preprocessing is allowed, even more savings are possible. Background. This problem arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute "k"-independent hashing. In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact. General methods. Horner's rule. Horner's method evaluates a polynomial using repeated bracketing: formula_9 This method reduces the number of multiplications and additions to just formula_4. Horner's method is so common that a computer instruction "multiply–accumulate operation" has been added to many computer processors, which allow doing the addition and multiplication operations in one combined step. Multivariate. If the polynomial is multivariate, Horner's rule can be applied recursively over some ordering of the variables. E.g. formula_10 can be written as formula_11 An efficient version of this approach was described by Carnicer and Gasca. Estrin's scheme. While it's not possible to do less computation than Horner's rule (without preprocessing), on modern computers the order of evaluation can matter a lot for the computational efficiency. A method known as Estrin's scheme computes a (single variate) polynomial in a tree-like pattern: formula_12 Combined with exponentiation by squaring, this allows parallelizing the computation. Evaluation with preprocessing. Arbitrary polynomials can be evaluated with fewer operations than Horner's rule requires if we first "preprocess" the coefficients formula_13. An example was first given by Motzkin who noted that formula_14 can be written as formula_15 where the values formula_16 are computed in advance, based on formula_17. Motzkin's method uses just 3 multiplications compared to Horner's 4. The values for each formula_18 can be easily computed by expanding formula_19 and equating the coefficients: formula_20 Example. To compute the Taylor expansion formula_21, we can upscale by a factor 24, apply the above steps, and scale back down. That gives us the three-multiplication computation formula_22 This improves on the equivalent Horner form (that is, formula_23) by one multiplication. Some general methods include the Knuth–Eve algorithm and the Rabin–Winograd algorithm. Multipoint evaluation. Evaluating a degree-formula_4 polynomial formula_19 at multiple points formula_24 can be done with formula_25 multiplications by using Horner's method formula_26 times.
Using the above preprocessing approach, this can be reduced by a factor of two, that is, to formula_27 multiplications. However, it is possible to do better. It is possible to reduce the time requirement to just formula_28. The idea is to define two polynomials that are zero on, respectively, the first and second half of the points: formula_29 and formula_30. We then compute formula_31 and formula_32 using the Polynomial remainder theorem, which can be done in formula_33 time using a fast Fourier transform. This means formula_34 and formula_35 by construction, where formula_36 and formula_37 are polynomials of degree at most formula_38. Because of how formula_39 and formula_40 were defined, we have formula_41 Thus to compute formula_42 on all formula_4 of the formula_43, it suffices to compute the smaller polynomials formula_36 and formula_37 on each half of the points. This gives us a divide-and-conquer algorithm with formula_44, which implies formula_45 by the master theorem. In the case where the points at which we wish to evaluate the polynomials have some structure, simpler methods exist. For example, Knuth section 4.6.4 gives a method for tabulating polynomial values of the type formula_46 Dynamic evaluation. In the case where formula_24 are not known in advance, Kedlaya and Umans gave a data structure for evaluating polynomials over a finite field of size formula_47 in time formula_48 per evaluation after some initial preprocessing. This was shown by Larsen to be essentially optimal. The idea is to transform formula_19 of degree formula_4 into a multivariate polynomial formula_49, such that formula_50 and the individual degrees of formula_51 are at most formula_52. Since this is over formula_53, the largest value formula_51 can take (over formula_54) is formula_55. Using the Chinese remainder theorem, it suffices to evaluate formula_51 modulo different primes formula_56 with a product at least formula_57. Each prime can be taken to be roughly formula_58, and the number of primes needed, formula_59, is roughly the same. Doing this process recursively, we can get the primes as small as formula_60. That means we can compute and store formula_51 on all the possible values in formula_61 time and space. If we take formula_62, we get formula_63, so the time/space requirement is just formula_64 Kedlaya and Umans further show how to combine this preprocessing with fast (FFT) multipoint evaluation. This allows optimal algorithms for many important algebraic problems, such as polynomial modular composition. Specific polynomials. While general polynomials require formula_65 operations to evaluate, some polynomials can be computed much faster. For example, the polynomial formula_66 can be computed using just one multiplication and one addition since formula_67 Evaluation of powers. A particularly interesting type of polynomial is powers like formula_68. Such polynomials can always be computed in formula_69 operations. Suppose, for example, that we need to compute formula_70; we could simply start with formula_71 and multiply by formula_71 to get formula_72. We can then multiply that by itself to get formula_73 and so on to get formula_74 and formula_70 in just four multiplications. Other powers like formula_75 can similarly be computed efficiently by first computing formula_73 with two multiplications and then multiplying by formula_71. The most efficient way to compute a given power formula_68 is provided by addition-chain exponentiation.
However, this requires designing a specific algorithm for each exponent, and the computation needed to design these algorithms is difficult (NP-complete), so exponentiation by squaring is generally preferred for effective computations. Polynomial families. Often polynomials show up in a different form than the well-known formula_76. For polynomials in Chebyshev form we can use the Clenshaw algorithm. For polynomials in Bézier form we can use De Casteljau's algorithm, and for B-splines there is De Boor's algorithm. Hard polynomials. The fact that some polynomials can be computed significantly faster than "general polynomials" suggests the question: Can we give an example of a simple polynomial that cannot be computed in time much smaller than its degree? Volker Strassen has shown that the polynomial formula_77 cannot be evaluated with fewer than formula_78 multiplications and formula_79 additions. At least this bound holds if only operations of those types are allowed, giving rise to a so-called "polynomial chain of length formula_80". The polynomial given by Strassen has very large coefficients, but by probabilistic methods, one can show there must exist even polynomials with coefficients just 0's and 1's such that the evaluation requires at least formula_81 multiplications. For other simple polynomials, the complexity is unknown. The polynomial formula_82 is conjectured to not be computable in time formula_83 for any formula_84. This is supported by the fact that, if it could be computed fast, then integer factorization could be computed in polynomial time, breaking the RSA cryptosystem. Matrix polynomials. Sometimes the computational cost of scalar multiplications (like formula_85) is less than the computational cost of "non-scalar" multiplications (like formula_72). The typical example of this is matrices. If formula_57 is an formula_86 matrix, a scalar multiplication formula_87 takes about formula_88 arithmetic operations, while computing formula_89 takes about formula_90 (or formula_91 using fast matrix multiplication). Matrix polynomials are important for example for computing the matrix exponential. Paterson and Stockmeyer showed how to compute a degree formula_4 polynomial using only formula_92 non-scalar multiplications and formula_93 scalar multiplications. Thus a matrix polynomial of degree n can be evaluated in formula_94 time. If formula_95 this is formula_96, as fast as one matrix multiplication with the standard algorithm. This method works as follows: For a polynomial formula_97 let k be the least integer not smaller than formula_98 The powers formula_99 are computed with formula_100 matrix multiplications, and formula_101 are then computed by repeated multiplication by formula_102 Now, formula_103, where formula_104 for "i" ≥ "n". This requires just formula_100 more non-scalar multiplications. We can write this succinctly using the Kronecker product: formula_105. The direct application of this method uses formula_106 non-scalar multiplications, but combining it with the evaluation-with-preprocessing technique above, Paterson and Stockmeyer show that this can be reduced to formula_107. Methods based on matrix polynomial multiplications and additions have been proposed that save non-scalar matrix multiplications relative to the Paterson–Stockmeyer method. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
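The preprocessed form quoted in the Example above can be checked numerically. The sketch below evaluates the degree-4 Taylor polynomial of exp both in Horner form and in the preprocessed three-multiplication form; the constant 2.634765625 used here is the unrounded value of the 2.63477 quoted above, so the two forms agree to machine precision.

#include &lt;stdio.h&gt;
#include &lt;math.h&gt;

/* Horner form of the degree-4 Taylor polynomial of exp(x). */
static double taylor_horner(double x)
{
    return 1.0 + x * (1.0 + x * (0.5 + x * (1.0 / 6.0 + x / 24.0)));
}

/* Motzkin-style preprocessed form from the Example: three multiplications. */
static double taylor_preprocessed(double x)
{
    double y = (x + 1.5) * x + 11.625;
    return (y + x - 15.0) * y / 24.0 + 2.634765625;
}

int main(void)
{
    double x;
    printf("   x      Horner          preprocessed    exp(x)\n");
    for (x = -1.0; x &lt;= 1.0001; x += 0.5)
        printf("%5.2f   %.10f    %.10f    %.10f\n",
               x, taylor_horner(x), taylor_preprocessed(x), exp(x));
    return 0;
}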
[ { "math_id": 0, "text": "P(x_1, x_2) = 2x_1x_2 + x_1^3 + 4" }, { "math_id": 1, "text": "x_1=2, x_2=3" }, { "math_id": 2, "text": "P(2,3)= 2\\cdot 2\\cdot 3 + 2^3+4=24." }, { "math_id": 3, "text": "a_nx^n+a_{n-1}x^{n-1}+\\cdots +a_0," }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "a_n x^n" }, { "math_id": 6, "text": "n-1" }, { "math_id": 7, "text": "a_{n-1} x^{n-1}" }, { "math_id": 8, "text": "\\tfrac{n(n+1)}{2}" }, { "math_id": 9, "text": "\\begin{align}\na_0 + &a_1x + a_2x^2 + a_3x^3 + \\cdots + a_nx^n \\\\\n &= a_0 + x \\bigg(a_1 + x \\Big(a_2 + x \\big(a_3 + \\cdots + x(a_{n-1} + x\\,a_n) \\cdots \\big) \\Big) \\bigg).\n\\end{align}" }, { "math_id": 10, "text": "P(x, y) = 4 + x + 2 x y + 2 x^2 y + x^2 y^2" }, { "math_id": 11, "text": "\\begin{align}\nP(x, y) &= 4 + x (1 + y(2) + x (y(2 + y)))\n\\quad\\text{or}\\\\\nP(x, y) &= 4 + x + y(x(2 + x(2)) + y(x^2)).\n\\end{align}" }, { "math_id": 12, "text": "\\begin{align}\nP(x) = (a_0 + a_1 x) + (a_2 + a_3 x) x^2 + ((a_4 + a_5 x) + (a_6 + a_7 x) x^2)x^4.\n\\end{align}" }, { "math_id": 13, "text": "a_n, \\dots, a_0" }, { "math_id": 14, "text": "P(x)=x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0" }, { "math_id": 15, "text": "y = (x+\\beta_0)x+\\beta_1,\\quad P(x)=(y+x+\\beta_2)y+\\beta_3," }, { "math_id": 16, "text": "\\beta_0, \\dots, \\beta_3" }, { "math_id": 17, "text": "a_0, \\dots, a_3" }, { "math_id": 18, "text": "\\beta_i" }, { "math_id": 19, "text": "P(x)" }, { "math_id": 20, "text": "\\begin{align}\n\\beta_0&=\\tfrac12(a_3-1),\\quad\n&z&=a_2-\\beta_0(\\beta_0+1),\\quad\n&\\beta_1&=a_1-\\beta_0 z,\\\\\n\\beta_2&=z-2\\beta_1, \\quad\n&\\beta_3&=a_0-\\beta_1(\\beta_1+\\beta_2).\\end{align}\n" }, { "math_id": 21, "text": "\\exp(x) \\approx 1+x+x^2/2+x^3/6+x^4/24" }, { "math_id": 22, "text": "y = (x+1.5)x+11.625,\\quad P(x)=(y+x-15)y/24+2.63477." }, { "math_id": 23, "text": "P(x) = 1 + x (1 + x (1/2 + x(1/6 + x/24)))" }, { "math_id": 24, "text": "x_1, \\dots, x_m" }, { "math_id": 25, "text": "mn" }, { "math_id": 26, "text": "m" }, { "math_id": 27, "text": "mn/2" }, { "math_id": 28, "text": "O\\big((n + m) \\log^2(n + m)\\big)" }, { "math_id": 29, "text": "m_0(x)=(x-x_1)\\cdots(x-x_{n/2})" }, { "math_id": 30, "text": "m_1(x)=(x-x_{n/2+1})\\cdots(x-x_{n})" }, { "math_id": 31, "text": "R_0 = P \\bmod m_0" }, { "math_id": 32, "text": "R_1 = P \\bmod m_1" }, { "math_id": 33, "text": "O(n\\log n)" }, { "math_id": 34, "text": "P(x) = Q(x)m_0(x) + R_0(x)" }, { "math_id": 35, "text": "P(x) = Q(x)m_1(x) + R_1(x)" }, { "math_id": 36, "text": "R_0" }, { "math_id": 37, "text": "R_1" }, { "math_id": 38, "text": "n/2" }, { "math_id": 39, "text": "m_0" }, { "math_id": 40, "text": "m_1" }, { "math_id": 41, "text": "\\begin{align}\nR_0(x_i) &= P(x_i) \\quad\\text{for } i \\le n/2 \\quad\\text{and}\\\\\nR_1(x_i) &= P(x_i) \\quad\\text{for } i > n/2.\n\\end{align}" }, { "math_id": 42, "text": "P" }, { "math_id": 43, "text": "x_i" }, { "math_id": 44, "text": "T(n) = 2T(n/2) + n\\log n" }, { "math_id": 45, "text": "T(n)=O(n(\\log n)^2)" }, { "math_id": 46, "text": "P(x_0 + h), P(x_0 + 2h), \\dots." 
}, { "math_id": 47, "text": "F_q" }, { "math_id": 48, "text": "(\\log n)^{O(1)}(\\log_2 q)^{1+o(1)}" }, { "math_id": 49, "text": "f(x_1, x_2, \\dots, x_m)" }, { "math_id": 50, "text": "P(x) = f(x, x^d, x^{d^2}, \\dots, x^{d^m})" }, { "math_id": 51, "text": "f" }, { "math_id": 52, "text": "d" }, { "math_id": 53, "text": "\\bmod q" }, { "math_id": 54, "text": "\\mathbb Z" }, { "math_id": 55, "text": "M = d^m (q-1)^{dm}" }, { "math_id": 56, "text": "p_1, \\dots, p_\\ell" }, { "math_id": 57, "text": "M" }, { "math_id": 58, "text": "\\log M = O(dm\\log q)" }, { "math_id": 59, "text": "\\ell" }, { "math_id": 60, "text": "\\log\\log q" }, { "math_id": 61, "text": "T = (\\log\\log q)^m" }, { "math_id": 62, "text": "d = \\log q" }, { "math_id": 63, "text": "m = \\tfrac{\\log n}{\\log\\log q}" }, { "math_id": 64, "text": "n^\\frac{\\log\\log q}{\\log\\log\\log q}." }, { "math_id": 65, "text": "\\Omega(n)" }, { "math_id": 66, "text": "P(x)=x^2+2x+1" }, { "math_id": 67, "text": "P(x)=(x+1)^2" }, { "math_id": 68, "text": "x^n" }, { "math_id": 69, "text": "O(\\log n)" }, { "math_id": 70, "text": "x^{16}" }, { "math_id": 71, "text": "x" }, { "math_id": 72, "text": "x^2" }, { "math_id": 73, "text": "x^4" }, { "math_id": 74, "text": "x^8" }, { "math_id": 75, "text": "x^5" }, { "math_id": 76, "text": "a_n x^n + \\dots + a_1 x + a_0" }, { "math_id": 77, "text": "P(x)=\\sum_{k=0}^n 2^{2^{kn^3}}x^k" }, { "math_id": 78, "text": "\\tfrac12 n - 2" }, { "math_id": 79, "text": "n - 4" }, { "math_id": 80, "text": "<n^2/\\log n" }, { "math_id": 81, "text": "\\Omega(n/\\log n)" }, { "math_id": 82, "text": "(x+1)(x+2)\\cdots(x+n)" }, { "math_id": 83, "text": "(\\log n)^{c}" }, { "math_id": 84, "text": "c" }, { "math_id": 85, "text": "ax" }, { "math_id": 86, "text": "m\\times m" }, { "math_id": 87, "text": "aM" }, { "math_id": 88, "text": "m^2" }, { "math_id": 89, "text": "M^2" }, { "math_id": 90, "text": "m^3" }, { "math_id": 91, "text": "m^{2.3}" }, { "math_id": 92, "text": "O(\\sqrt n)" }, { "math_id": 93, "text": "O(n)" }, { "math_id": 94, "text": "O(m^{2.3}\\sqrt{n} + m^2n)" }, { "math_id": 95, "text": "m=n" }, { "math_id": 96, "text": "O(m^3)" }, { "math_id": 97, "text": "P(M)=a_{n-1} M^{n-1} + \\dots + a_{1}M + a_0 I," }, { "math_id": 98, "text": "\\sqrt{n}." }, { "math_id": 99, "text": "M, M^2, \\dots, M^k" }, { "math_id": 100, "text": "k" }, { "math_id": 101, "text": "M^{2k}, M^{3k}, \\dots, M^{k^2-k}" }, { "math_id": 102, "text": "M^k." }, { "math_id": 103, "text": "\\begin{align}P(M) =\n&\\,(a_0 I + a_1 M + \\dots + a_{k-1}M^{k-1})\n\\\\+&\\,(a_k I + a_{k+1} M + \\dots + a_{2k-1}M^{k-1})M^k\n\\\\+&\\,\\dots\n\\\\+&\\,(a_{n-k} I + a_{n-k+1} M + \\dots + a_{n-1}M^{k-1})M^{k^2-k},\n\\end{align}" }, { "math_id": 104, "text": "a_i=0" }, { "math_id": 105, "text": "P(M) =\n\\begin{bmatrix}I\\\\M\\\\\\vdots\\\\M^{k-1}\\end{bmatrix}^T\n\\left(\\begin{bmatrix}\na_0 & a_1 & a_2 & \\dots\\\\\na_k & a_{k+1} & \\ddots \\\\\na_{2k} & \\ddots \\\\\n\\vdots\\end{bmatrix}\\otimes I\\right)\n\\begin{bmatrix}I\\\\M^k\\\\M^{2k}\\\\\\vdots\\end{bmatrix}\n" }, { "math_id": 106, "text": "2\\sqrt{n}" }, { "math_id": 107, "text": "\\sqrt{2n}" } ]
https://en.wikipedia.org/wiki?curid=64691694
64692455
Birkhoff algorithm
Birkhoff's algorithm (also called Birkhoff-von-Neumann algorithm) is an algorithm for decomposing a bistochastic matrix into a convex combination of permutation matrices. It was published by Garrett Birkhoff in 1946. It has many applications. One such application is for the problem of fair random assignment: given a randomized allocation of items, Birkhoff's algorithm can decompose it into a lottery on deterministic allocations. Terminology. A "bistochastic matrix" (also called: "doubly-stochastic") is a matrix in which all elements are greater than or equal to 0 and the sum of the elements in each row and column equals 1. An example is the following 3-by-3 matrix: formula_0 A "permutation matrix" is a special case of a bistochastic matrix, in which each element is either 0 or 1 (so there is exactly one "1" in each row and each column). An example is the following 3-by-3 matrix: formula_1 A Birkhoff decomposition (also called: Birkhoff-von-Neumann decomposition) of a bistochastic matrix is a presentation of it as a sum of permutation matrices with non-negative weights. For example, the above matrix can be presented as the following sum: formula_2 Birkhoff's algorithm receives as input a bistochastic matrix and returns as output a Birkhoff decomposition. Tools. A permutation set of an "n"-by-"n" matrix "X" is a set of "n" entries of "X" containing exactly one entry from each row and from each column. A theorem by Dénes Kőnig says that: "Every bistochastic matrix has a permutation-set in which all entries are positive." The positivity graph of an "n"-by-"n" matrix "X" is a bipartite graph with 2"n" vertices, in which the vertices on one side are the "n" rows and the vertices on the other side are the "n" columns, and there is an edge between a row and a column iff the entry at that row and column is positive. A permutation set with positive entries is equivalent to a perfect matching in the positivity graph. A perfect matching in a bipartite graph can be found in polynomial time, e.g. using any algorithm for maximum cardinality matching. Kőnig's theorem is equivalent to the following: "The positivity graph of any bistochastic matrix admits a perfect matching." A matrix is called scaled-bistochastic if all elements are non-negative, and the sum of each row and column equals "c", where "c" is some positive constant. In other words, it is "c" times a bistochastic matrix. Since the positivity graph is not affected by scaling: "The positivity graph of any scaled-bistochastic matrix admits a perfect matching." Algorithm. Birkhoff's algorithm is a greedy algorithm: it greedily finds perfect matchings and removes them from the fractional matching. It works as follows: (1) let "i" = 1; (2) construct the positivity graph of "X"; (3) find a perfect matching in the positivity graph, corresponding to a positive permutation set in "X"; (4) let "z"["i"] be the smallest entry of "X" among the matched positions (it is positive); (5) let "P"["i"] be the permutation matrix with 1's at the matched positions; (6) let "X" := "X" − "z"["i"] · "P"["i"]; (7) if "X" still contains nonzero elements, let "i" := "i" + 1 and go back to step 2. The algorithm is correct because, after step 6, the sum in each row and each column drops by "z"["i"]. Therefore, the matrix "X" remains scaled-bistochastic. Therefore, in step 3, a perfect matching always exists. Run-time complexity. By the selection of "z"["i"] in step 4, in each iteration at least one element of "X" becomes 0. Therefore, the algorithm must end after at most "n"2 steps. However, the last step must simultaneously make "n" elements 0, so the algorithm ends after at most "n"2 − "n" + 1 steps, which implies formula_3. In 1960, Johnson, Dulmage and Mendelsohn showed that Birkhoff's algorithm actually ends after at most "n"2 − 2"n" + 2 steps, which is tight in general (that is, in some cases "n"2 − 2"n" + 2 permutation matrices may be required). Application in fair division.
In the fair random assignment problem, there are "n" objects and "n" people with different preferences over the objects. It is required to give an object to each person. To attain fairness, the allocation is randomized: for each (person, object) pair, a probability is calculated, such that the sum of probabilities for each person and for each object is 1. The probabilistic-serial procedure can compute the probabilities such that each agent, looking at the matrix of probabilities, prefers his row of probabilities over the rows of all other people (this property is called envy-freeness). This raises the question of how to implement this randomized allocation in practice? One cannot just randomize for each object separately, since this may result in allocations in which some people get many objects while other people get no objects. Here, Birkhoff's algorithm is useful. The matrix of probabilities, calculated by the probabilistic-serial algorithm, is bistochastic. Birkhoff's algorithm can decompose it into a convex combination of permutation matrices. Each permutation matrix represents a deterministic assignment, in which every agent receives exactly one object. The coefficient of each such matrix is interpreted as a probability; based on the calculated probabilities, it is possible to pick one assignment at random and implement it. Extensions. The problem of computing the Birkhoff decomposition with the minimum number of terms has been shown to be NP-hard, but some heuristics for computing it are known. This theorem can be extended for the general stochastic matrix with deterministic transition matrices. Budish, Che, Kojima and Milgrom generalize Birkhoff's algorithm to non-square matrices, with some constraints on the feasible assignments. They also present a decomposition algorithm that minimizes the variance in the expected values. Vazirani generalizes Birkhoff's algorithm to non-bipartite graphs. Valls et al. showed that it is possible to obtain an formula_4 -approximate decomposition with formula_5 permutations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{pmatrix}\n0.2 & 0.3 & 0.5 \\\\\n0.6 & 0.2 & 0.2 \\\\\n0.2 & 0.5 & 0.3\n\\end{pmatrix}\n" }, { "math_id": 1, "text": "\\begin{pmatrix}\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n1 & 0 & 0\n\\end{pmatrix}\n" }, { "math_id": 2, "text": "0.2\n\\begin{pmatrix}\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n1 & 0 & 0\n\\end{pmatrix}\n+\n0.2\n\\begin{pmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{pmatrix}\n+ 0.1 \n\\begin{pmatrix}\n0 & 1 & 0 \\\\\n1 & 0 & 0 \\\\\n0 & 0 & 1\n\\end{pmatrix}\n+ 0.5\n\\begin{pmatrix}\n0 & 0 & 1 \\\\\n1 & 0 & 0 \\\\\n0 & 1 & 0\n\\end{pmatrix}" }, { "math_id": 3, "text": "O(n^2)" }, { "math_id": 4, "text": "\\epsilon" }, { "math_id": 5, "text": "O(\\log(1/\\epsilon^2))" } ]
https://en.wikipedia.org/wiki?curid=64692455
64693091
Magnetic Thermodynamic Systems
In thermodynamics and thermal physics, the theoretical formulation of magnetic systems entails expressing the behavior of the systems using the Laws of Thermodynamics. Common magnetic systems examined through the lens of Thermodynamics are ferromagnets and paramagnets as well as the ferromagnet to paramagnet phase transition. It is also possible to derive thermodynamic quantities in a generalized form for an arbitrary magnetic system using the formulation of magnetic work. Simplified thermodynamic models of magnetic systems include the Ising model, the mean field approximation, and the ferromagnet to paramagnet phase transition expressed using the Landau Theory of Phase Transitions. Arbitrary magnetic systems. In order to incorporate magnetic systems into the first law of thermodynamics, it is necessary to formulate the concept of magnetic work. The magnetic contribution to the quasi-static work done by an arbitrary magnetic system is formula_0 where formula_1 is the magnetic field and formula_2 is the magnetic flux density. So the first law of thermodynamics in a reversible process can be expressed as formula_3 Accordingly the change during a quasi-static process in the Helmholtz free energy, formula_4, and the Gibbs free energy, formula_5, will be formula_6 formula_7 Paramagnetic systems. In a paramagnetic system, that is, a system in which the magnetization vanishes without the influence of an external magnetic field, assuming some simplifying assumptions (such as the sample system being ellipsoidal), one can derive a few compact thermodynamic relations. Assuming the external magnetic field is uniform and shares a common axis with the paramagnet, the extensive parameter characterizing the magnetic state is formula_8, the magnetic dipole moment of the system. The fundamental thermodynamic relation describing the system will then be of the form formula_9. In the more general case where the paramagnet does not share an axis with the magnetic field, the extensive parameters characterizing the magnetic state will be formula_10. In this case, the fundamental relation describing the system will be formula_11. The intensive parameter corresponding to the magnetic moment formula_8 is the external magnetic field acting on the paramagnet, formula_12. The relation between them is: formula_13 where formula_14 is the Entropy, formula_15 is the Volume and formula_16 is the number of particles in the system. Note that in this case, formula_17 is the energy added to the system by the insertion of the paramagnet. The total energy in the space occupied by the system includes a component arising from the energy of a magnetic field in a vacuum. This component equals formula_18, where formula_19 is the permeability of free space, and isn't included as a part of formula_17. The choice if to include formula_20 in formula_17 is arbitrary but it is important to note the convention chosen, otherwise, it may lead to confusion emanating from differing results. The Euler relation for a paramagnetic system is then: formula_21 and the Gibbs-Duhem relation for such a system is: formula_22 An experimental problem that distinguishes magnetic systems from other thermodynamical systems is that the magnetic moment can't be constrained. Typically in thermodynamic systems, all extensive quantities describing the system can be constrained to a specified value. Examples are volume and the number of particles, which can both be constrained by enclosing the system in a box. 
On the other hand, there is no experimental method that can directly hold the magnetic moment to a specified constant value. Nevertheless, this experimental concern does not affect the thermodynamic theory of magnetic systems. Ferromagnetic systems. Ferromagnetic systems are systems in which the magnetization doesn't vanish in the absence of an external magnetic field. Multiple thermodynamic models have been developed in order to model and explain the behavior of ferromagnets, including the Ising model. The Ising model can be solved analytically in one and two dimensions, numerically in higher dimensions, or using the mean-field approximation in any dimensionality. Additionally, the ferromagnet to paramagnet phase transition is a second-order phase transition and so can be modeled using the Landau theory of phase transitions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "W=-\\frac{1}{4\\pi}{\\int_{V}{H\\cdot\\Delta BdV}}" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "\\Delta U={\\int_{S}{T dS}}-{\\int_{V}{PdV}}+\\frac{1}{4\\pi}{\\int_{V}{H\\cdot\\Delta BdV}}" }, { "math_id": 4, "text": "F" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "\\Delta F=-{\\int_{T}SdT}-{\\int_{V}PdV}+\\frac{1}{4\\pi}{\\int_{V}{H\\cdot\\Delta BdV}}" }, { "math_id": 7, "text": "\\Delta G=-{\\int_{T}SdT}+{\\int_{P}VdP}-\\frac{1}{4\\pi}{\\int_{V}{B\\cdot\\Delta HdV}}" }, { "math_id": 8, "text": "I" }, { "math_id": 9, "text": "U=U(S,V,I,N)" }, { "math_id": 10, "text": "I_x,I_y,I_z" }, { "math_id": 11, "text": "U=U(S,V,I_x,I_y,I_z,N)" }, { "math_id": 12, "text": "B_e" }, { "math_id": 13, "text": "B_e = \\left(\\frac{\\partial U}{\\partial I}\\right)_{S,V,N}" }, { "math_id": 14, "text": "S" }, { "math_id": 15, "text": "V" }, { "math_id": 16, "text": "N" }, { "math_id": 17, "text": "U" }, { "math_id": 18, "text": "U_{vacuum}=\\frac{B_e^2 V}{2\\mu_0}" }, { "math_id": 19, "text": "\\mu_0" }, { "math_id": 20, "text": "U_{vacuum}" }, { "math_id": 21, "text": "U=TS-PV+B_{e}I+\\mu N" }, { "math_id": 22, "text": "SdT-VdP+IdB_{e}+Nd\\mu =0" } ]
https://en.wikipedia.org/wiki?curid=64693091
646933
Triangular bipyramid
Two tetrahedra joined by one face In geometry, the triangular bipyramid is the hexahedron with six triangular faces, constructed by attaching two tetrahedra face-to-face. The same shape is also called the triangular dipyramid or trigonal bipyramid. If these tetrahedra are regular, all faces of triangular bipyramid are equilateral. It is an example of a deltahedron, composite polyhedron, and Johnson solid. Many polyhedra are related to the triangular bipyramid, such as new similar shapes derived in different approaches, and the triangular prism as its dual polyhedron. The many applications of triangular bipyramid include the trigonal bipyramid molecular geometry that describes its atom cluster, a solution of the Thomson problem, and the representation of color order systems by the eighteenth century. Special cases. As a right bipyramid. Like other bipyramids, the triangular bipyramid can be constructed by attaching two tetrahedra face-to-face. These tetrahedra cover their triangular base, such that the resulting polyhedron has six triangles, five vertices, and nine edges. The triangular bipyramid is said to be "right" if the tetrahedra are symmetrically regular and both of their apices are on the line passing through the center of base; otherwise, it is "oblique". According to Steinitz's theorem, a graph can be represented as the skeleton of a polyhedron if it is planar and 3-connected graph. In other words, the edges of that graph do not cross but only intersect at the point, and one of any two vertices leaves a connected subgraph when removed. The triangular bipyramid is represented by a graph with nine edges, constructed by adding one vertex connecting to the vertices of a wheel graph representing tetrahedra. Like other right bipyramids, the triangular bipyramid has three-dimensional point group symmetry, the dihedral group formula_0 of order twelve: the appearance of the triangular bipyramid is unchanged as it rotated by one-, two-thirds, and full angle around the axis of symmetry (a line passing through two vertices and base's center vertically), and it has mirror symmetry relative to any bisector of the base; it is also symmetrical by reflecting it across a horizontal plane. Therefore, the triangular bipyramid is face-transitive or isohedral. As a Johnson solid. If the tetrahedra are regular, all edges of the triangular bipyramid are equal in length, forming equilateral triangular faces. A polyhedron with only equilateral triangles as faces is called a deltahedron. There are only eight different convex deltahedra, one of which is the triangular bipyramid with regular polygonal faces. More generally, the convex polyhedron in which all of the faces are regular polygons is the Johnson solid, and every convex deltahedron is a Johnson solid. The triangular bipyramid with the regular faces is among numbered the Johnson solids as the twelfth Johnson solid formula_1. It is an example of a composite polyhedron, because it is constructed by attaching two regular tetrahedra. A triangular bipyramid's surface area formula_2 is six times that of each triangle. Its volume formula_3 can be calculated by slicing it into two tetrahedra and adding their volume. In the case of edge length formula_4, this is: formula_5 The dihedral angle of a triangular bipyramid can be obtained by adding the dihedral angle of two regular tetrahedra. The dihedral angle of a triangular bipyramid between adjacent triangular faces is that of the regular tetrahedron, 70.5°. 
In the case of the edge where two tetrahedra are attached, the dihedral angle of adjacent triangles is twice that, 141.1°. Related polyhedra. Some types of triangular bipyramids may be derived in different ways. For example, the Kleetope of polyhedra is a construction involving the attachment of pyramids; in the case of the triangular bipyramid, its Kleetope can be constructed from triangular bipyramid by attaching tetrahedra onto each of its faces, covering and replacing them with other three triangles; the skeleton of resulting polyhedron represents the Goldner–Harary graph. Another type of triangular bipyramid is by cutting off all of its vertices; this process is known as truncation. The bipyramids are the dual polyhedron of prisms, for which the bipyramids' vertices correspond to the faces of the prism, and the edges between pairs of vertices of one correspond to the edges between pairs of faces of the other; dual it again gives the original polyhedron itself. Hence, the triangular bipyramid is the dual polyhedron of the triangular prism, and vice versa. The triangular prism has five faces, nine edges, and six vertices, and it has the same symmetry as the triangular bipyramid. Applications. The Thomson problem concerns the minimum-energy configuration of charged particles on a sphere. One of them is a triangular bipyramid, which is a known solution for the case of five electrons, by placing vertices of a triangular bipyramid inscribed in a sphere. This solution is aided by the mathematically rigorous computer. In the geometry of chemical compound, the trigonal bipyramidal molecular geometry may be described as the atom cluster of the triangular bipyramid. This molecule has a main-group element without an active lone pair, as described by a model that predicts the geometry of molecules known as VSEPR theory. Some examples of this structure are the phosphorus pentafluoride and phosphorus pentachloride in the gas phase. In the study of color theory, the triangular bipyramid was used to represent the three-dimensional color order system in primary color. The German astronomer Tobias Mayer presented in 1758 that each of its vertices represents the colors: white and black are the top and bottom axial vertices respectively, whereas the rest of the vertices are red, blue, and yellow. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " D_{3 \\mathrm{h}} " }, { "math_id": 1, "text": " J_{12} " }, { "math_id": 2, "text": " A " }, { "math_id": 3, "text": " V " }, { "math_id": 4, "text": " a " }, { "math_id": 5, "text": " \\begin{align}\n A &= \\frac{3\\sqrt{3}}{2}a^2 &\\approx 2.598a^2, \\\\\n V &= \\frac{\\sqrt{2}}{6}a^3 &\\approx 0.238a^3.\n\\end{align} " } ]
https://en.wikipedia.org/wiki?curid=646933
6469477
Charged current
One way that particles can interact with the weak force Charged current interactions are one of the ways in which subatomic particles can interact by means of the weak force. These interactions are mediated by the and bosons. In simple terms. Charged current interactions are the most easily detected class of weak interactions. The weak force is best known for mediating nuclear decay. It has very short range, but is the only force (apart from gravity) to interact with neutrinos. The weak force is communicated via the W and Z exchange particles. Of these, the W-boson has either a positive or negative electric charge, and mediates neutrino absorption and emission by or with an electrically charged particle. During these processes, the W-boson induces electron or positron emission or absorption, or changing the flavour of a quark as well as its electrical charge, such as in beta decay or K-capture. By contrast, the Z particle is electrically neutral, and exchange of a Z-boson leaves the interacting particles' quantum numbers unaffected, except for a transfer of momentum, spin, and energy. Because exchange of W bosons involves a transfer of electric charge (as well as a transfer of weak isospin, while weak hypercharge is not transferred), it is known as "charged current". By contrast, exchanges of Z bosons involve no transfer of electrical charge, so it is referred to as a "neutral current". In the latter case, the word "current" has nothing to do with electricity – it simply refers to the Z bosons' movement between other particles. Definition. The name "charged current" arises due to currents of fermions coupled to the W bosons having electric charge. For example, the charged current contribution to the → elastic scattering amplitude is: formula_0 where the charged currents describing the flow of one fermion into the other are given by: formula_1 The W-Boson can couple to any particle with weak isospin (i.e. any left-handed Standard Model fermions). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathfrak{M}^{\\mathrm{CC}} \\propto J_{\\mu}^{\\mathrm{(CC)}}(\\mathrm{e^{-}}\\to\\nu_{\\mathrm{e}}) \\; J^{\\mathrm{(CC)}\\mu}(\\nu_{\\mathrm{e}}\\to\\mathrm{e^{-}})," }, { "math_id": 1, "text": "J^{\\mathrm{(CC)\\mu}}(f\\to f') = \\bar{u}_{f'}\\gamma^{\\mu}\\frac{1}{2}\\left(1-\\gamma^{5}\\right)u_{f}." } ]
https://en.wikipedia.org/wiki?curid=6469477
646974
Wilkinson's polynomial
In numerical analysis, Wilkinson's polynomial is a specific polynomial which was used by James H. Wilkinson in 1963 to illustrate a difficulty when finding the root of a polynomial: the location of the roots can be very sensitive to perturbations in the coefficients of the polynomial. The polynomial is formula_0 Sometimes, the term "Wilkinson's polynomial" is also used to refer to some other polynomials appearing in Wilkinson's discussion. Background. Wilkinson's polynomial arose in the study of algorithms for finding the roots of a polynomial formula_1 It is a natural question in numerical analysis to ask whether the problem of finding the roots of "p" from the coefficients "c""i" is well-conditioned. That is, we hope that a small change in the coefficients will lead to a small change in the roots. Unfortunately, this is not the case here. The problem is ill-conditioned when the polynomial has a multiple root. For instance, the polynomial "x"2 has a double root at "x" = 0. However, the polynomial "x"2 − "ε" (a perturbation of size "ε") has roots at ±√"ε", which is much bigger than "ε" when "ε" is small. It is therefore natural to expect that ill-conditioning also occurs when the polynomial has zeros which are very close. However, the problem may also be extremely ill-conditioned for polynomials with well-separated zeros. Wilkinson used the polynomial "w"("x") to illustrate this point (Wilkinson 1963). In 1984, he described the personal impact of this discovery: "Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst." Wilkinson's polynomial is often used to illustrate the undesirability of naively computing eigenvalues of a matrix by first calculating the coefficients of the matrix's characteristic polynomial and then finding its roots, since using the coefficients as an intermediate step may introduce an extreme ill-conditioning even if the original problem was well conditioned. Conditioning of Wilkinson's polynomial. Wilkinson's polynomial formula_2 clearly has 20 roots, located at "x" = 1, 2, ..., 20. These roots are far apart. However, the polynomial is still very ill-conditioned. Expanding the polynomial, one finds formula_3 If the coefficient of "x"19 is decreased from −210 by 2−23 to −210.0000001192, then the polynomial value "w"(20) decreases from 0 to −2−232019 = −6.25×1017, and the root at "x" = 20 grows to "x" ≈ 20.8. The roots at "x" = 18 and "x" = 19 collide into a double root at "x" ≈ 18.62 which turns into a pair of complex conjugate roots at "x" ≈ 19.5 ± 1.9"i" as the perturbation increases further. The 20 roots become (to 5 decimals) formula_4 Some of the roots are greatly displaced, even though the change to the coefficient is tiny and the original roots seem widely spaced. Wilkinson showed by the stability analysis discussed in the next section that this behavior is related to the fact that some roots "α" (such as "α" = 15) have many roots "β" that are "close" in the sense that |"α" − "β"| is smaller than |"α"|. Wilkinson chose the perturbation of 2−23 because his Pilot ACE computer had 30-bit floating point significands, so for numbers around 210, 2−23 was an error in the first bit position not represented in the computer. The two real numbers, −210 and −210 − 2−23, are represented by the same floating point number, which means that 2−23 is the "unavoidable" error in representing a real coefficient close to −210 by a floating point number on that computer. 
The perturbation analysis shows that 30-bit coefficient precision is insufficient for separating the roots of Wilkinson's polynomial. Stability analysis. Suppose that we perturb a polynomial "p"("x") = Π ("x" − "α""j") with roots "α""j" by adding a small multiple "t"·"c"("x") of a polynomial "c"("x"), and ask how this affects the roots "α""j". To first order, the change in the roots will be controlled by the derivative formula_5 When the derivative is large, the roots will be more stable under variations of "t", and conversely if this derivative is small the roots will be unstable. In particular, if "α""j" is a multiple root, then the denominator vanishes. In this case, α"j" is usually not differentiable with respect to "t" (unless "c" happens to vanish there), and the roots will be extremely unstable. For small values of "t" the perturbed root is given by the power series expansion in "t" formula_6 and one expects problems when |"t"| is larger than the radius of convergence of this power series, which is given by the smallest value of |"t"| such that the root "α""j" becomes multiple. A very crude estimate for this radius takes half the distance from "α""j" to the nearest root, and divides by the derivative above. In the example of Wilkinson's polynomial of degree 20, the roots are given by "α""j" = "j" for "j" = 1, ..., 20, and "c"("x") is equal to "x"19. So the derivative is given by formula_7 This shows that the root α"j" will be less stable if there are many roots α"k" close to α"j", in the sense that the distance Example. For the root α1 = 1, the derivative is equal to 1/19! which is very small; this root is stable even for large changes in "t". This is because all the other roots "β" are a long way from it, in the sense that |"α"1 − "β"| = 1, 2, 3, ..., 19 is larger than |"α"1| = 1. For example, even if "t" is as large as –10000000000, the root "α"1 only changes from 1 to about 0.99999991779380 (which is very close to the first order approximation 1 + "t"/19! ≈ 0.99999991779365). Similarly, the other small roots of Wilkinson's polynomial are insensitive to changes in "t". Example. On the other hand, for the root "α"20 = 20, the derivative is equal to −2019/19! which is huge (about 43000000), so this root is very sensitive to small changes in "t". The other roots "β" are close to "α"20, in the sense that |"β" − "α"20| = 1, 2, 3, ..., 19 is less than |"α"20| = 20. For "t" = −2 − 23 the first-order approximation 20 − "t"·2019/19! = 25.137... to the perturbed root 20.84... is terrible; this is even more obvious for the root "α"19 where the perturbed root has a large imaginary part but the first-order approximation (and for that matter all higher-order approximations) are real. The reason for this discrepancy is that |"t"| ≈ 0.000000119 is greater than the radius of convergence of the power series mentioned above (which is about 0.0000000029, somewhat smaller than the value 0.00000001 given by the crude estimate) so the linearized theory does not apply. For a value such as "t" = 0.000000001 that is significantly smaller than this radius of convergence, the first-order approximation 19.9569... is reasonably close to the root 19.9509... At first sight the roots "α"1 = 1 and "α"20 = 20 of Wilkinson's polynomial appear to be similar, as they are on opposite ends of a symmetric line of roots, and have the same set of distances 1, 2, 3, ..., 19 from other roots. 
However the analysis above shows that this is grossly misleading: the root "α"20 = 20 is less stable than "α"1 = 1 (to small perturbations in the coefficient of "x"19) by a factor of 2019 = 5242880000000000000000000. Wilkinson's second example. The second example considered by Wilkinson is formula_8 The twenty zeros of this polynomial are in a geometric progression with common ratio 2, and hence the quotient formula_9 cannot be large. Indeed, the zeros of "w"2 are quite stable to large "relative" changes in the coefficients. The effect of the basis. The expansion formula_10 expresses the polynomial in a particular basis, namely that of the monomials. If the polynomial is expressed in another basis, then the problem of finding its roots may cease to be ill-conditioned. For example, in a Lagrange form, a small change in one (or several) coefficients need not change the roots too much. Indeed, the basis polynomials for interpolation at the points 0, 1, 2, ..., 20 are formula_11 Every polynomial (of degree 20 or less) can be expressed in this basis: formula_12 For Wilkinson's polynomial, we find formula_13 Given the definition of the Lagrange basis polynomial ℓ0("x"), a change in the coefficient "d"0 will produce no change in the roots of "w". However, a perturbation in the other coefficients (all equal to zero) will slightly change the roots. Therefore, Wilkinson's polynomial is well-conditioned in this basis. References. Wilkinson discussed "his" polynomial in It is mentioned in standard text books in numerical analysis, like Other references: A high-precision numerical computation is presented in:
[ { "math_id": 0, "text": " w(x) = \\prod_{i=1}^{20} (x - i) = (x-1) (x-2) \\cdots (x-20). " }, { "math_id": 1, "text": " p(x) = \\sum_{i=0}^n c_i x^i. " }, { "math_id": 2, "text": " w(x) = \\prod_{i=1}^{20} (x - i) = (x-1)(x-2) \\cdots (x-20) " }, { "math_id": 3, "text": "\n\\begin{align}\nw(x) = {} & x^{20}-210 x^{19}+20615 x^{18}-1256850x^{17}+53327946 x^{16} \\\\\n& {}-1672280820x^{15}+40171771630 x^{14}-756111184500x^{13} \\\\\n& {}+11310276995381x^{12}-135585182899530x^{11} \\\\\n& {}+1307535010540395x^{10}-10142299865511450x^9 \\\\\n& {}+63030812099294896x^8-311333643161390640x^7 \\\\\n& {}+1206647803780373360x^6-3599979517947607200x^5 \\\\\n& {}+8037811822645051776x^4-12870931245150988800x^3 \\\\\n& {}+13803759753640704000x^2-8752948036761600000x \\\\ \n& {}+2432902008176640000.\n\\end{align}\n" }, { "math_id": 4, "text": "\n\\begin{array}{rrrrr}\n1.00000 & 2.00000 & 3.00000 & 4.00000 & 5.00000 \\\\[8pt]\n6.00001 & 6.99970 & 8.00727 & 8.91725 & 20.84691 \\\\[8pt]\n10.09527\\pm {} & 11.79363 \\pm {} & 13.99236\\pm{} & 16.73074\\pm{} & 19.50244 \\pm {} \\\\[-3pt]\n0.64350i & 1.65233i & 2.51883i & 2.81262i & 1.94033i\n\\end{array}\n" }, { "math_id": 5, "text": "{d\\alpha_j \\over dt} = -{c(\\alpha_j)\\over p^\\prime(\\alpha_j)}. " }, { "math_id": 6, "text": " \\alpha_j + {d\\alpha_j \\over dt} t + {d^2\\alpha_j \\over dt^2} {t^2\\over 2!} + \\cdots " }, { "math_id": 7, "text": "{d\\alpha_j \\over dt} = -{\\alpha_j^{19}\\over \\prod_{k\\ne j}(\\alpha_j-\\alpha_k)} = -\\prod_{k\\ne j}{\\alpha_j\\over \\alpha_j-\\alpha_k} . \\,\\!" }, { "math_id": 8, "text": " w_2(x) = \\prod_{i=1}^{20} (x - 2^{-i}) = (x-2^{-1})(x-2^{-2}) \\cdots (x-2^{-20}). " }, { "math_id": 9, "text": " \\alpha_j\\over \\alpha_j-\\alpha_k " }, { "math_id": 10, "text": " p(x) = \\sum_{i=0}^n c_i x^i " }, { "math_id": 11, "text": " \\ell_k(x) = \\prod_{i = 0,\\ldots,20 \\atop i \\neq k} \\frac{x - i}{k - i}, \\qquad\\text{for}\\quad k=0,\\ldots,20. " }, { "math_id": 12, "text": " p(x) = \\sum_{i=0}^{20} d_i \\ell_i(x). " }, { "math_id": 13, "text": " w(x) = (20!) \\ell_0(x) = \\sum_{i=0}^{20} d_i \\ell_i(x) \\quad\\text{with}\\quad d_0=(20!) ,\\, d_1=d_2= \\cdots =d_{20}=0. " } ]
https://en.wikipedia.org/wiki?curid=646974
6469973
De Gua's theorem
Three-dimensional analog of the Pythagorean theorem In mathematics, De Gua's theorem is a three-dimensional analog of the Pythagorean theorem named after Jean Paul de Gua de Malves. It states that if a tetrahedron has a right-angle corner (like the corner of a cube), then the square of the area of the face opposite the right-angle corner is the sum of the squares of the areas of the other three faces: formula_0De Gua's theorem can be applied for proving a special case of Heron's formula. Generalizations. The Pythagorean theorem and de Gua's theorem are special cases ("n" = 2, 3) of a general theorem about "n"-simplices with a right-angle corner, proved by P. S. Donchian and H. S. M. Coxeter in 1935. This, in turn, is a special case of a yet more general theorem by Donald R. Conant and William A. Beyer (1974), which can be stated as follows. Let "U" be a measurable subset of a "k"-dimensional affine subspace of formula_1 (so formula_2). For any subset formula_3 with exactly "k" elements, let formula_4 be the orthogonal projection of "U" onto the linear span of formula_5, where formula_6 and formula_7 is the standard basis for formula_1. Then formula_8 where formula_9 is the "k"-dimensional volume of "U" and the sum is over all subsets formula_3 with exactly "k" elements. De Gua's theorem and its generalisation (above) to "n"-simplices with right-angle corners correspond to the special case where "k" = "n"−1 and "U" is an ("n"−1)-simplex in formula_1 with vertices on the co-ordinate axes. For example, suppose "n" = 3, "k" = 2 and "U" is the triangle formula_10 in formula_11 with vertices "A", "B" and "C" lying on the formula_12-, formula_13- and formula_14-axes, respectively. The subsets formula_15 of formula_16 with exactly 2 elements are formula_17, formula_18 and formula_19. By definition, formula_20 is the orthogonal projection of formula_21 onto the formula_22-plane, so formula_20 is the triangle formula_23 with vertices "O", "B" and "C", where "O" is the origin of formula_11. Similarly, formula_24 and formula_25, so the Conant–Beyer theorem says formula_26 which is de Gua's theorem. The generalisation of de Gua's theorem to "n"-simplices with right-angle corners can also be obtained as a special case from the Cayley–Menger determinant formula. De Gua's theorem can also be generalized to arbitrary tetrahedra and to pyramids. History. Jean Paul de Gua de Malves (1713–1785) published the theorem in 1783, but around the same time a slightly more general version was published by another French mathematician, Charles de Tinseau d'Amondans (1746–1818), as well. However the theorem had also been known much earlier to Johann Faulhaber (1580–1635) and René Descartes (1596–1650).
[ { "math_id": 0, "text": " A_{ABC}^2 = A_{\\color {blue} ABO}^2+A_{\\color {green} ACO}^2+A_{\\color {red} BCO}^2 " }, { "math_id": 1, "text": "\\mathbb{R}^n" }, { "math_id": 2, "text": "k \\le n" }, { "math_id": 3, "text": "I \\subseteq \\{ 1, \\ldots, n \\}" }, { "math_id": 4, "text": "U_I" }, { "math_id": 5, "text": "e_{i_1}, \\ldots, e_{i_k}" }, { "math_id": 6, "text": "I = \\{i_1, \\ldots, i_k\\}" }, { "math_id": 7, "text": "e_1, \\ldots, e_n" }, { "math_id": 8, "text": "\\operatorname{vol}_k^2(U) = \\sum_I \\operatorname{vol}_k^2(U_I)," }, { "math_id": 9, "text": "\\operatorname{vol}_k(U)" }, { "math_id": 10, "text": "\\triangle ABC" }, { "math_id": 11, "text": "\\mathbb{R}^3" }, { "math_id": 12, "text": "x_1" }, { "math_id": 13, "text": "x_2" }, { "math_id": 14, "text": "x_3" }, { "math_id": 15, "text": "I" }, { "math_id": 16, "text": "\\{ 1, 2, 3 \\}" }, { "math_id": 17, "text": "\\{ 2,3 \\}" }, { "math_id": 18, "text": "\\{ 1,3 \\}" }, { "math_id": 19, "text": "\\{ 1,2 \\}" }, { "math_id": 20, "text": "U_{\\{ 2,3 \\}}" }, { "math_id": 21, "text": "U = \\triangle ABC" }, { "math_id": 22, "text": "x_2 x_3" }, { "math_id": 23, "text": "\\triangle OBC" }, { "math_id": 24, "text": "U_{\\{ 1,3 \\}} = \\triangle AOC" }, { "math_id": 25, "text": "U_{\\{ 1,2 \\}} = \\triangle ABO" }, { "math_id": 26, "text": "\\operatorname{vol}_2^2(\\triangle ABC) = \\operatorname{vol}_2^2(\\triangle OBC) + \\operatorname{vol}_2^2(\\triangle AOC) + \\operatorname{vol}_2^2(\\triangle ABO)," } ]
https://en.wikipedia.org/wiki?curid=6469973
64705026
Count sketch
Method of a dimension reduction &lt;templatestyles src="Machine learning/styles.css"/&gt; Count sketch is a type of dimensionality reduction that is particularly efficient in statistics, machine learning and algorithms. It was invented by Moses Charikar, Kevin Chen and Martin Farach-Colton in an effort to speed up the AMS Sketch by Alon, Matias and Szegedy for approximating the frequency moments of streams (these calculations require counting of the number of occurrences for the distinct elements of the stream). The sketch is nearly identical to the Feature hashing algorithm by John Moody, but differs in its use of hash functions with low dependence, which makes it more practical. In order to still have a high probability of success, the median trick is used to aggregate multiple count sketches, rather than the mean. These properties allow use for explicit kernel methods, bilinear pooling in neural networks and is a cornerstone in many numerical linear algebra algorithms. Intuitive explanation. The inventors of this data structure offer the following iterative explanation of its operation: Mathematical definition. 1. For constants formula_9 and formula_10 (to be defined later) independently choose formula_11 random hash functions formula_12 and formula_13 such that formula_14 and formula_15. It is necessary that the hash families from which formula_7 and formula_2 are chosen be pairwise independent. 2. For each item formula_16 in the stream, add formula_17 to the formula_18th bucket of the formula_19th hash. At the end of this process, one has formula_20 sums formula_21 where formula_22 To estimate the count of formula_23s one computes the following value: formula_24 The values formula_25 are unbiased estimates of how many times formula_23 has appeared in the stream. The estimate formula_26 has variance formula_27, where formula_28 is the length of the stream and formula_29 is formula_30. Furthermore, formula_26 is guaranteed to never be more than formula_31 off from the true value, with probability formula_32. Vector formulation. Alternatively Count-Sketch can be seen as a linear mapping with a non-linear reconstruction function. Let formula_33, be a collection of formula_11 matrices, defined by formula_34 for formula_35 and 0 everywhere else. Then a vector formula_36 is sketched by formula_37. To reconstruct formula_38 we take formula_39. This gives the same guarantees as stated above, if we take formula_40 and formula_41. Relation to Tensor sketch. The count sketch projection of the outer product of two vectors is equivalent to the convolution of two component count sketches. The count sketch computes a vector convolution formula_42, where formula_43 and formula_44 are independent count sketch matrices. Pham and Pagh show that this equals formula_45 – a count sketch formula_46 of the outer product of vectors, where formula_47 denotes Kronecker product. The fast Fourier transform can be used to do fast convolution of count sketches. By using the face-splitting product such structures can be computed much faster than normal matrices. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n(q)" }, { "math_id": 1, "text": "\\bold E[C \\cdot s(q)]" }, { "math_id": 2, "text": "s_i" }, { "math_id": 3, "text": "C_i" }, { "math_id": 4, "text": "\\bold E[C_i \\cdot s_i(q)]= n(q)" }, { "math_id": 5, "text": "n(a)" }, { "math_id": 6, "text": "C_{i,j}" }, { "math_id": 7, "text": "h_i" }, { "math_id": 8, "text": "\\bold E[C_{i, h_i(q)} \\cdot s_i(q)] = n(q)" }, { "math_id": 9, "text": "w" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "d=2t+1" }, { "math_id": 12, "text": "h_1, \\dots, h_d" }, { "math_id": 13, "text": "s_1,\\dots,s_d" }, { "math_id": 14, "text": "h_i : [n] \\to [w]" }, { "math_id": 15, "text": "s_i : [n] \\to \\{\\pm 1\\}" }, { "math_id": 16, "text": "q_i" }, { "math_id": 17, "text": "s_j(q_i)" }, { "math_id": 18, "text": "h_j(q_i)" }, { "math_id": 19, "text": "j" }, { "math_id": 20, "text": "wd" }, { "math_id": 21, "text": "(C_{ij})" }, { "math_id": 22, "text": "C_{i,j} = \\sum_{h_i(k)=j}s_i(k)." }, { "math_id": 23, "text": "q" }, { "math_id": 24, "text": "r_q = \\text{median}_{i=1}^d\\, s_i(q)\\cdot C_{i, h_i(q)}." }, { "math_id": 25, "text": "s_i(q)\\cdot C_{i, h_i(q)}" }, { "math_id": 26, "text": "r_q" }, { "math_id": 27, "text": "O(\\mathrm{min}\\{m_1^2/w^2, m_2^2/w\\})" }, { "math_id": 28, "text": "m_1" }, { "math_id": 29, "text": "m_2^2" }, { "math_id": 30, "text": "\\sum_q (\\sum_i [q_i=q])^2" }, { "math_id": 31, "text": "2m_2/\\sqrt{w}" }, { "math_id": 32, "text": "1-e^{-O(t)}" }, { "math_id": 33, "text": "M^{(i\\in[d])}\\in\\{-1,0,1\\}^{w \\times n}" }, { "math_id": 34, "text": "M^{(i)}_{h_i(j),j} = s_i(j)" }, { "math_id": 35, "text": "j\\in[w]" }, { "math_id": 36, "text": "v\\in\\mathbb{R}^n" }, { "math_id": 37, "text": "C^{(i)} = M^{(i)} v \\in \\mathbb{R}^w" }, { "math_id": 38, "text": "v" }, { "math_id": 39, "text": "v^*_j = \\text{median}_i C^{(i)}_j s_i(j)" }, { "math_id": 40, "text": "m_1=\\|v\\|_1" }, { "math_id": 41, "text": "m_2=\\|v\\|_2" }, { "math_id": 42, "text": "C^{(1)}x \\ast C^{(2)}x^T" }, { "math_id": 43, "text": "C^{(1)}" }, { "math_id": 44, "text": "C^{(2)}" }, { "math_id": 45, "text": "C(x \\otimes x^T)" }, { "math_id": 46, "text": "C" }, { "math_id": 47, "text": " \\otimes " } ]
https://en.wikipedia.org/wiki?curid=64705026
64713671
Closed linear operator
In functional analysis, a branch of mathematics, a closed linear operator or often a closed operator is a linear operator whose graph is closed (see closed graph property). It is a basic example of an unbounded operator. The closed graph theorem says a linear operator between Banach spaces is a closed operator if and only if it is a bounded operator. Hence, a closed linear operator that is used in practice is typically only defined on defined on a dense subspace of a Banach space. Definition. It is common in functional analysis to consider partial functions, which are functions defined on a subset of some space formula_0 A partial function formula_1 is declared with the notation formula_2 which indicates that formula_1 has prototype formula_3 (that is, its domain is formula_4 and its codomain is formula_5) Every partial function is, in particular, a function and so all terminology for functions can be applied to them. For instance, the graph of a partial function formula_1 is the set formula_6 However, one exception to this is the definition of "closed graph". A partial function formula_7 is said to have a closed graph if formula_8 is a closed subset of formula_9 in the product topology; importantly, note that the product space is formula_9 and not formula_10 as it was defined above for ordinary functions. In contrast, when formula_3 is considered as an ordinary function (rather than as the partial function formula_7), then "having a closed graph" would instead mean that formula_8 is a closed subset of formula_11 If formula_8 is a closed subset of formula_9 then it is also a closed subset of formula_12 although the converse is not guaranteed in general. Definition: If X and Y are topological vector spaces (TVSs) then we call a linear map "f" : "D"("f") ⊆ "X" → "Y" a closed linear operator if its graph is closed in "X" × "Y". Closable maps and closures. A linear operator formula_7 is &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;closable in formula_9 if there exists a vector subspace formula_13 containing formula_4 and a function (resp. multifunction) formula_14 whose graph is equal to the closure of the set formula_8 in formula_15 Such an formula_16 is called a closure of formula_1 in formula_9, is denoted by formula_17 and necessarily extends formula_18 If formula_7 is a closable linear operator then a &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;core or an &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;essential domain of formula_1 is a subset formula_19 such that the closure in formula_9 of the graph of the restriction formula_20 of formula_1 to formula_21 is equal to the closure of the graph of formula_1 in formula_9 (i.e. the closure of formula_8 in formula_9 is equal to the closure of formula_22 in formula_9). Examples. A bounded operator is a closed operator. Here are examples of closed operators that are not bounded. Basic properties. The following properties are easily checked for a linear operator "f" : "D"("f") ⊆ "X" → "Y" between Banach spaces: References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X." }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "f : D \\subseteq X \\to Y," }, { "math_id": 3, "text": "f : D \\to Y" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "Y" }, { "math_id": 6, "text": "\\operatorname{graph}{\\!(f)} = \\{(x, f(x)) : x \\in \\operatorname{dom} f\\}." }, { "math_id": 7, "text": "f : D \\subseteq X \\to Y" }, { "math_id": 8, "text": "\\operatorname{graph} f" }, { "math_id": 9, "text": "X \\times Y" }, { "math_id": 10, "text": "D \\times Y = \\operatorname{dom} f \\times Y" }, { "math_id": 11, "text": "D \\times Y." }, { "math_id": 12, "text": "\\operatorname{dom} (f) \\times Y" }, { "math_id": 13, "text": "E \\subseteq X" }, { "math_id": 14, "text": "F : E \\to Y" }, { "math_id": 15, "text": "X \\times Y." }, { "math_id": 16, "text": "F" }, { "math_id": 17, "text": "\\overline{f}," }, { "math_id": 18, "text": "f." }, { "math_id": 19, "text": "C \\subseteq D" }, { "math_id": 20, "text": "f\\big\\vert_C : C \\to Y" }, { "math_id": 21, "text": "C" }, { "math_id": 22, "text": "\\operatorname{graph} f\\big\\vert_C" } ]
https://en.wikipedia.org/wiki?curid=64713671
6471621
Control variable
Experimental element which is not changed throughout the experiment A control variable (or scientific constant) in scientific experimentation is an experimental element which is constant (controlled) and unchanged throughout the course of the investigation. Control variables could strongly influence experimental results were they not held constant during the experiment in order to test the relative relationship of the dependent variable (DV) and independent variable (IV). The control variables themselves are not of primary interest to the experimenter. Usage. A variable in an experiment which is held constant in order to assess the relationship between multiple variables, is a control variable. A control variable is an element that is not changed throughout an experiment because its unchanging state allows better understanding of the relationship between the other variables being tested. In any system existing in a natural state, many variables may be interdependent, with each affecting the other. Scientific experiments test the relationship of an IV (or independent variable: that element that is manipulated by the experimenter) to the DV (or dependent variable: that element affected by the manipulation of the IV). Any additional independent variable can be a control variable. A control variable is an experimental condition or element that is kept the same throughout the experiment, and it is not of primary concern in the experiment, nor will it influence the outcome of the experiment. Any unexpected (e.g.: uncontrolled) change in a control variable during an experiment would invalidate the correlation of dependent variables (DV) to the independent variable (IV), thus skewing the results, and invalidating the working hypothesis. This indicates the presence of a spurious relationship existing within experimental parameters. Unexpected results may result from the presence of a confounding variable, thus requiring a re-working of the initial experimental hypothesis. Confounding variables are a threat to the internal validity of an experiment. This situation may be resolved by first identifying the confounding variable and then redesigning the experiment taking that information into consideration. One way to this is to control the confounding variable, thus making it a control variable. If, however, the spurious relationship cannot be identified, the working hypothesis may have to be abandoned. Experimental examples. Take, for example, the well known combined gas law, which is stated mathematically as: formula_0 where: "P" is the pressure "V" is the volume "T" is the thermodynamic temperature measured in kelvins "k" is a constant (with units of energy divided by temperature). which shows that the ratio between the pressure-volume product and the temperature of a system remains constant. In an experimental verification of parts of the combined gas law, ("P" * "V" = "T"), where Pressure, Temperature, and Volume are all variables, to test the resultant changes to any of these variables requires at least one to be kept constant. This is in order to see "comparable experimental results" in the remaining variables. If Temperature is made the control variable and it is not allowed to change throughout the course of the experiment, the relationship between the dependent variables, Pressure, and Volume, can quickly be established by changing the value for one or the other, and this is Boyle's law. For instance, if the Pressure is raised then the Volume must decrease. 
If, however, Volume is made the control variable and it is not allowed to change throughout the course of the experiment, the relationship between dependent variables, Pressure, and Temperature, can quickly be established by changing the value for one or the other, and this is Gay-Lussac's Law. For instance, if the Pressure is raised then the Temperature must increase. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\qquad \\frac {PV}{T}= k " } ]
https://en.wikipedia.org/wiki?curid=6471621
64719459
Reciprocity (electrical networks)
Reciprocity in electrical networks is a property of a circuit that relates voltages and currents at two points. The reciprocity theorem states that the current at one point in a circuit due to a voltage at a second point is the same as the current at the second point due to the same voltage at the first. The reciprocity theorem is valid for almost all passive networks. The reciprocity theorem is a feature of a more general principle of reciprocity in electromagnetism. Description. If a current, formula_0, injected into port A produces a voltage, formula_1, at port B and formula_0 injected into port B produces formula_1 at port A, then the network is said to be reciprocal. Equivalently, reciprocity can be defined by the dual situation; applying voltage, formula_2, at port A producing current formula_3 at port B and formula_2 at port B producing current formula_3 at port A. In general, passive networks are reciprocal. Any network that consists entirely of ideal capacitances, inductances (including mutual inductances), and resistances, that is, elements that are linear and bilateral, will be reciprocal. However, passive components that are non-reciprocal do exist. Any component containing ferromagnetic material is likely to be non-reciprocal. Examples of passive components deliberately designed to be non-reciprocal include circulators and isolators. The transfer function of a reciprocal network has the property that it is symmetrical about the main diagonal if expressed in terms of a z-parameter, y-parameter, or s-parameter matrix. A non-symmetrical matrix implies a non-reciprocal network. A symmetric matrix does not imply a symmetric network. In some parametisations of networks, the representative matrix is not symmetrical for reciprocal networks. Common examples are h-parameters and ABCD-parameters, but they all have some other condition for reciprocity that can be calculated from the parameters. For h-parameters the condition is formula_4 and for the ABCD parameters it is formula_5. These representations mix voltages and currents in the same column vector and therefore do not even have matching units in transposed elements. Example. An example of reciprocity can be demonstrated using an asymmetrical resistive attenuator. An asymmetrical network is chosen as the example because a symmetrical network is self-evidently reciprocal. Injecting 6 amperes into port 1 of this network produces 24 volts at port 2. Injecting 6 amperes into port 2 produces 24 volts at port 1. Hence, the network is reciprocal. In this example, the port that is not injecting current is left open circuit. This is because a current generator applying zero current is an open circuit. If, on the other hand, one wished to apply voltages and measure the resulting current, then the port to which the voltage is not applied would be made short circuit. This is because a voltage generator applying zero volts is a short circuit. Proof. Reciprocity of electrical networks is a special case of Lorentz reciprocity, but it can also be proven more directly from network theorems. This proof shows reciprocity for a two-node network in terms of its admittance matrix, and then shows reciprocity for a network with an arbitrary number of nodes by an induction argument. A linear network can be represented as a set of linear equations through nodal analysis. 
For a network consisting of "n"+1 nodes (one being a reference node) where, in general, an admittance is connected between each pair of nodes and where a current is injected in each node (provided by an ideal current source connected between the node and the reference node), these equations can be expressed in the form of an admittance matrix, formula_6 where formula_7 is the current injected into node "k" by a generator (which amounts to zero if no current source is connected to node "k") formula_8 is the voltage at node "k" with respect to the reference node (one could also say, it is the electric potential at node "k") formula_9 ("j" ≠ "k") is the negative of the admittance directly connecting nodes "j" and "k" (if any) formula_10 is the sum of the admittances connected to node "k" (regardless of the other node the admittance is connected to). This representation corresponds to the one obtained by nodal analysis. If we further require that network is made up of passive, bilateral elements, then formula_11 since the admittance connected between nodes "j" and "k" is the same element as the admittance connected between nodes "k" and "j". The matrix is therefore symmetrical. For the case where formula_12 the matrix reduces to, formula_13. From which it can be seen that, formula_14 and formula_15 But since formula_16 then, formula_17 which is synonymous with the condition for reciprocity. In words, the ratio of the current at one port to the voltage at another is the same ratio if the ports being driven and measured are interchanged. Thus reciprocity is proven for the case of formula_12. For the case of a matrix of arbitrary size, the order of the matrix can be reduced through node elimination. After eliminating the "s"th node, the new admittance matrix will have the form, formula_18 It can be seen that this new matrix is also symmetrical. Nodes can continue to be eliminated in this way until only a 2×2 symmetrical matrix remains involving the two nodes of interest. Since this matrix is symmetrical it is proved that reciprocity applies to a matrix of arbitrary size when one node is driven by a voltage and current measured at another. A similar process using the impedance matrix from mesh analysis demonstrates reciprocity where one node is driven by a current and voltage is measured at another. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I_\\text {A}" }, { "math_id": 1, "text": "V_\\text {B}" }, { "math_id": 2, "text": "V_\\text {A}" }, { "math_id": 3, "text": "I_\\text {B}" }, { "math_id": 4, "text": "h_{12} = - h_{21}" }, { "math_id": 5, "text": "AD - BC = 1" }, { "math_id": 6, "text": "\n\\begin{bmatrix}\nI_1\\\\ \nI_2\\\\ \n\\vdots\\\\ \nI_n\n\\end{bmatrix}=\n\\begin{bmatrix}\nY_{11} & Y_{12} & \\cdots & Y_{1n} \\\\ \nY_{21} & Y_{22} & \\cdots & Y_{2n} \\\\ \n\\vdots & \\vdots & \\ddots & \\vdots\\\\ \nY_{n1} & Y_{n2} & \\cdots & Y_{nn} \n\\end{bmatrix}\n\\begin{bmatrix}\nV_1\\\\ \nV_2\\\\ \n\\vdots\\\\ \nV_n\n\\end{bmatrix}\n" }, { "math_id": 7, "text": "I_k" }, { "math_id": 8, "text": "V_k" }, { "math_id": 9, "text": "Y_{jk}" }, { "math_id": 10, "text": "Y_{kk}" }, { "math_id": 11, "text": "Y_{jk} = Y_{kj}" }, { "math_id": 12, "text": "n = 2" }, { "math_id": 13, "text": "\n\\begin{bmatrix}\nI_1 \\\\ I_2\n\\end{bmatrix}=\n\\begin{bmatrix}\nY_{11} & Y_{12} \\\\ \nY_{21} & Y_{22} \n\\end{bmatrix}\n\\begin{bmatrix}\nV_1 \\\\ V_2\n\\end{bmatrix}\n" }, { "math_id": 14, "text": " Y_{12} = \\left . \\frac {I_1}{V_2} \\right |_{V_1=0}" }, { "math_id": 15, "text": " Y_{21} = \\left . \\frac {I_2}{V_1} \\right |_{V_2=0} \\ ." }, { "math_id": 16, "text": "Y_{12} = Y_{21}" }, { "math_id": 17, "text": " \\left . \\frac {I_1}{V_2} \\right |_{V_1=0} = \\left . \\frac {I_2}{V_1} \\right |_{V_2=0} " }, { "math_id": 18, "text": "\n\\begin{bmatrix}\n(Y_{11}Y_{ss} - Y_{s1}Y_{1s}) & (Y_{12}Y_{ss} - Y_{s2}Y_{1s}) & (Y_{13}Y_{ss} - Y_{s3}Y_{1s}) & \\cdots \\\\ \n(Y_{21}Y_{ss} - Y_{s1}Y_{2s}) & (Y_{22}Y_{ss} - Y_{s2}Y_{2s}) & (Y_{23}Y_{ss} - Y_{s3}Y_{2s}) & \\cdots \\\\\n(Y_{31}Y_{ss} - Y_{s1}Y_{3s}) & (Y_{32}Y_{ss} - Y_{s2}Y_{3s}) & (Y_{33}Y_{ss} - Y_{s3}Y_{3s}) & \\cdots \\\\ \n\\cdots & \\cdots & \\cdots & \\cdots \n\\end{bmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=64719459
647196
2–3 tree
Data structure in computer science In computer science, a 2–3 tree is a tree data structure, where every node with children (internal node) has either two children (2-node) and one data element or three children (3-node) and two data elements. A 2–3 tree is a B-tree of order 3. Nodes on the outside of the tree (leaf nodes) have no children and one or two data elements. 2–3 trees were invented by John Hopcroft in 1970. 2–3 trees are required to be balanced, meaning that each leaf is at the same level. It follows that each right, center, and left subtree of a node contains the same or close to the same amount of data. Definitions. We say that an internal node is a 2-node if it has "one" data element and "two" children. We say that an internal node is a 3-node if it has "two" data elements and "three" children. A 4-node, with three data elements, may be temporarily created during manipulation of the tree but is never persistently stored in the tree. We say that T is a 2–3 tree if and only if one of the following statements hold: Operations. Searching. Searching for an item in a 2–3 tree is similar to searching for an item in a binary search tree. Since the data elements in each node are ordered, a search function will be directed to the correct subtree and eventually to the correct node which contains the item. Insertion. Insertion maintains the balanced property of the tree. To insert into a 2-node, the new key is added to the 2-node in the appropriate order. To insert into a 3-node, more work may be required depending on the location of the 3-node. If the tree consists only of a 3-node, the node is split into three 2-nodes with the appropriate keys and children. If the target node is a 3-node whose parent is a 2-node, the key is inserted into the 3-node to create a temporary 4-node. In the illustration, the key 10 is inserted into the 2-node with 6 and 9. The middle key is 9, and is promoted to the parent 2-node. This leaves a 3-node of 6 and 10, which is split to be two 2-nodes held as children of the parent 3-node. If the target node is a 3-node and the parent is a 3-node, a temporary 4-node is created then split as above. This process continues up the tree to the root. If the root must be split, then the process of a single 3-node is followed: a temporary 4-node root is split into three 2-nodes, one of which is considered to be the root. This operation grows the height of the tree by one. Deletion. Deleting a key from a non-leaf node can be done by replacing it by its immediate predecessor or successor, and then deleting the predecessor or successor from a leaf node. Deleting a key from a leaf node is easy if the leaf is a 3-node. Otherwise, it may require creating a temporary 1-node which may be absorbed by reorganizing the tree, or it may repeatedly travel upwards before it can be absorbed, as a temporary 4-node may in the case of insertion. Alternatively, it's possible to use an algorithm which is both top-down and bottom-up, creating temporary 4-nodes on the way down that are then destroyed as you travel back up. Deletion methods are explained in more detail in the references. Parallel operations. Since 2–3 trees are similar in structure to red–black trees, parallel algorithms for red–black trees can be applied to 2–3 trees as well. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d < a" }, { "math_id": 1, "text": "d > a" }, { "math_id": 2, "text": "a < b" }, { "math_id": 3, "text": "a < d < b" }, { "math_id": 4, "text": "d > b" } ]
https://en.wikipedia.org/wiki?curid=647196
64726
World file
Geographic data file A world file is a six line plain text sidecar file used by geographic information systems (GIS) to georeference raster map images. The file specification was introduced by Esri. Definition. The generic meaning of the six parameters in a world file (as defined by Esri) is: This description is however misleading in that the "D" and "B" parameters are not angular rotations, and that the "A" and "E" parameters do not correspond to the pixel size if "D" or "B" are not zero. The "A", "D", "B" and "E" parameters are sometimes named "x-scale", "y-skew", "x-skew" and "y-scale". A better description of the "A", "D", "B" and "E" parameters is: All four parameters are expressed in the map units, which are described by the spatial reference system for the raster. When "D" or "B" are non-zero the pixel width is given by: formula_0 and the pixel height by formula_1 World files describing a map on the Universal Transverse Mercator coordinate system (UTM) use these conventions: The above description applies also to a rectangular, non-rotated image which might be, for example, overlaid on an orthogonally projected map. If the world file describes an image that is rotated from the axis of the target projection, however, then A, D, B and E must be derived from the required affine transformation (see below). Specifically, A and E will no longer be the meter/pixel measurement on their respective axes. These values are used in a six-parameter affine transformation: formula_2 which can be written as this set of equations: formula_3 where: "x"' is the calculated UTM easting of the pixel on the map "y"' is the calculated UTM northing of the pixel on the map "x" is the column number of the pixel in the image counting from left "y" is the row number of the pixel in the image counting from top "A" or "x"-scale; dimension of a pixel in map units in "x"-direction "B", "D" are rotation terms "C", "F" are translation terms: "x", "y" map coordinates of the center of the upper-left pixel "E" is negative of "y"-scale: dimension of a pixel in map units in "y"-direction The "y"-scale ("E") is negative because the origins of an image and the UTM coordinate system are different. The origin of an image is located in the upper-left corner, whereas the origin of the map coordinate system is located in the lower-left corner. Row values in the image increase from the origin downward, while "y"-coordinate values in the map increase from the origin upward. Many mapping programs are unable to handle "upside down" images (i.e. those with a positive "y"-scale). To go from UTM(x'y') to pixel position(x,y) one can use the equation: formula_4 Example: Original codice_0 is 800×600 pixels (map not shown). Its world file is codice_1 and contains: 32.0 0.0 0.0 -32.0 691200.0 4576000.0 The position of Falkner Island light on the map image is: x = 171 pixels from left y = 343 pixels from top This gives: x1 = 696672 meters Easting y1 = 4565024 meters Northing The UTM (grid) zone is not given so the coordinates are ambiguous — they can represent a position in any of the approximately 120 UTM grid zones. In this case, approximate latitude and longitude (41.2, −072.7) were looked up in a gazetteer and the UTM (grid) zone was found to be 18 using a Web-based converter. Filename extension. The base filename of a world file matches the raster's base filename, but has a different filename extension (suffix). There are three filename extension naming conventions used for world files, with variable support across software. 
One simple convention with widespread support is to append the letter "w" to the end of the raster filename. For example, a raster named "mymap".jpg should have a world file named "mymap".jpgw. An alternative file naming convention that uses a three-character extension to conform to the 8.3 file naming convention uses the first and last character of the raster file's extension, followed by "w" at the end. For example, here are a few naming conventions for popular raster formats: A third convention is to use a .wld file extension, irrespective of the type of raster file, as supported by GDAL and QGIS, but not Esri. Localization. When writing world files it is advisable to ignore localization settings and always use "." as the decimal separator. Also, negative numbers should be specified with the "-" character exclusively. This ensures maximum portability of the images. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
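The affine transformation and its inverse given above translate directly into code. The sketch below is only an illustration; the function names are arbitrary, and the hard-coded parameters are the six values from the example world file quoted in the text.

```python
def pixel_to_map(A, D, B, E, C, F, x, y):
    """Map pixel column x and row y to map coordinates using the six
    world-file parameters (read from the file in the order A, D, B, E, C, F)."""
    return A * x + B * y + C, D * x + E * y + F

def map_to_pixel(A, D, B, E, C, F, xp, yp):
    """Invert the affine transformation (requires A*E - D*B != 0)."""
    det = A * E - D * B
    x = (E * (xp - C) - B * (yp - F)) / det
    y = (-D * (xp - C) + A * (yp - F)) / det
    return x, y

# Six values from the example world file in the text.
A, D, B, E, C, F = 32.0, 0.0, 0.0, -32.0, 691200.0, 4576000.0
print(pixel_to_map(A, D, B, E, C, F, 171, 343))             # (696672.0, 4565024.0)
print(map_to_pixel(A, D, B, E, C, F, 696672.0, 4565024.0))  # (171.0, 343.0)
```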
[ { "math_id": 0, "text": "\\sqrt{A^2+D^2}" }, { "math_id": 1, "text": "\\sqrt{B^2+E^2}" }, { "math_id": 2, "text": "\n\\begin{bmatrix} x\\prime \\\\\ny\\prime \\end{bmatrix}\n= \\begin{bmatrix} A & B & C\\\\\nD & E & F\\end{bmatrix}\n\\begin{bmatrix} x \\\\\ny \\\\\n1 \\end{bmatrix}" }, { "math_id": 3, "text": "\\begin{align}\nx' &= A\\,x + B\\,y + C \\\\\ny' &= D\\,x + E\\,y + F\n\\end{align}" }, { "math_id": 4, "text": "\\begin{align}\nx&=\\frac{Ex'-By'+BF-EC}{AE-DB}\\\\\ny&=\\frac{-Dx'+Ay'+DC-AF}{AE-DB}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=64726
647297
Half-space (geometry)
Bisection of Euclidean space by a hyperplane In geometry, a half-space is either of the two parts into which a plane divides the three-dimensional Euclidean space. If the space is two-dimensional, then a half-space is called a half-plane (open or closed). A half-space in a one-dimensional space is called a "half-line" or "ray". More generally, a half-space is either of the two parts into which a hyperplane divides an affine space. That is, the points that are not incident to the hyperplane are partitioned into two convex sets (i.e., half-spaces), such that any subspace connecting a point in one set to a point in the other must intersect the hyperplane. A half-space can be either "open" or "closed". An open half-space is either of the two open sets produced by the subtraction of a hyperplane from the affine space. A closed half-space is the union of an open half-space and the hyperplane that defines it. The open (closed) "upper half-space" is the half-space of all ("x"1, "x"2, ..., "x""n") such that "x""n" &gt; 0 (≥ 0). The open (closed) "lower half-space" is defined similarly, by requiring that "x""n" be negative (non-positive). A half-space may be specified by a linear inequality, derived from the linear equation that specifies the defining hyperplane. A strict linear inequality specifies an open half-space: formula_0 A non-strict one specifies a closed half-space: formula_1 Here, one assumes that not all of the real numbers "a"1, "a"2, ..., "a""n" are zero. A half-space is a convex set.
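As a small illustration of the linear-inequality description, the sketch below tests on which side of the hyperplane a1x1 + ... + anxn = b a given point lies; the function name and the sample numbers are arbitrary choices for the example.

```python
def side_of_hyperplane(a, b, x):
    """Classify point x relative to the hyperplane a.x = b."""
    s = sum(ai * xi for ai, xi in zip(a, x))
    if s > b:
        return "open half-space a.x > b"
    if s < b:
        return "open half-space a.x < b"
    return "on the hyperplane"

# With a = (0, 0, 1) and b = 0 the two open half-spaces are the upper and lower
# half-spaces x3 > 0 and x3 < 0; the closed upper half-space also includes the
# points classified as "on the hyperplane".
print(side_of_hyperplane((0.0, 0.0, 1.0), 0.0, (2.0, -1.0, 3.0)))
```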
[ { "math_id": 0, "text": "a_1x_1+a_2x_2+\\cdots+a_nx_n>b" }, { "math_id": 1, "text": "a_1x_1+a_2x_2+\\cdots+a_nx_n\\geq b" } ]
https://en.wikipedia.org/wiki?curid=647297
6473308
Catalyst poisoning
Catalyst poisoning is the partial or total deactivation of a catalyst by a chemical compound. Poisoning refers specifically to chemical deactivation, rather than other mechanisms of catalyst degradation such as thermal decomposition or physical damage. Although usually undesirable, poisoning may be helpful when it results in improved catalyst selectivity (e.g. Lindlar's catalyst). An important historic example was the poisoning of catalytic converters by leaded fuel. Poisoning of Pd catalysts. Organic functional groups and inorganic anions often have the ability to strongly adsorb to metal surfaces. Common catalyst poisons include carbon monoxide, halides, cyanides, sulfides, sulfites, phosphates, phosphites and organic molecules such as nitriles, nitro compounds, oximes, and nitrogen-containing heterocycles. Agents vary their catalytic properties because of the nature of the transition metal. Lindlar catalysts are prepared by the reduction of palladium chloride in a slurry of calcium carbonate (CaCO3) followed by poisoning with lead acetate. In a related case, the Rosenmund reduction of acyl halides to aldehydes, the palladium catalyst (over barium sulfate or calcium carbonate) is intentionally poisoned by the addition of sulfur or quinoline in order to lower the catalyst activity and thereby prevent over-reduction of the aldehyde product to the primary alcohol. Poisoning process. Poisoning often involves compounds that chemically bond to a catalyst's active sites. Poisoning decreases the number of active sites, and the average distance that a reactant molecule must diffuse through the pore structure before undergoing reaction increases as a result. Poisoned sites can no longer accelerate the reaction that the catalyst was supposed to catalyze. Large scale production of substances such as ammonia in the Haber–Bosch process includes steps to remove potential poisons from the product stream. When the poisoning reaction rate is slow relative to the rate of diffusion, the poison will be evenly distributed throughout the catalyst and will result in homogeneous poisoning of the catalyst. Conversely, if the reaction rate is fast compared to the rate of diffusion, a poisoned shell will form on the exterior layers of the catalyst, a situation known as "pore-mouth" poisoning, and the rate of catalytic reaction may become limited by the rate of diffusion through the inactive shell. Homogeneous and "pore-mouth" poisoning occurrences are most frequently observed when using a porous medium catalyst. Selective poisoning. If the catalyst and reaction conditions are indicative of low effectiveness, selective poisoning may be observed, where poisoning of only a small fraction of the catalyst's surface gives a disproportionately large drop in activity. If "η" is the effectiveness factor of the poisoned surface and "hp" is the Thiele modulus for the poisoned case: formula_0 When the ratio of the reaction rates of the poisoned pore to the unpoisoned pore is considered: formula_1 where "F" is the ratio of the reaction rate of the poisoned pore to that of the unpoisoned pore, "h"T is the Thiele modulus for the unpoisoned case, and "α" is the fraction of the surface that is poisoned. The above equation simplifies depending on the value of "h"T. When the surface is available, "h"T is negligible: formula_2 This represents the "classical case" of nonselective poisoning where the fraction of the activity remaining is equal to the fraction of the unpoisoned surface remaining.
When "h"T is very large, it becomes: formula_3 In this case, the catalyst effectiveness factors are considerably less than unity, and the effects of the portion of the poison adsorbed near the closed end of the pore are not as apparent as when "h"T is small. The rate of diffusion of the reactant through the poisoned region is equal to the rate of reaction and is given by: formula_4 And the rate of reaction within a pore is given by: formula_5 The fraction of the catalyst surface available for reaction can be obtained from the ratio of the poisoned reaction rate to the unpoisoned reaction rate: formula_6 Benefits of selective poisoning. Usually, catalyst poisoning is undesirable as it leads to the wasting of expensive metals or their complexes. However, poisoning of catalysts can be used to improve selectivity of reactions. Poisoning can allow for selective intermediates to be isolated and desirable final products to be produced. Hydrodesulfurization catalysts. In the purification of petroleum products, the process of hydrodesulfurization is utilized. Thiols, such as thiophene, are reduced using H2 to produce H2S and hydrocarbons of varying chain length. Common catalysts used are tungsten and molybdenum sulfide. Adding cobalt and nickel to either edges or partially incorporating them into the crystal lattice structure can improve the catalyst's efficiency. The synthesis of the catalyst creates a supported hybrid that prevents poisoning of the cobalt nuclei. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\eta =\\frac{\\tanh h_{\\rm p}}{h_{\\rm p}} " }, { "math_id": 1, "text": " F =\\sqrt{1-\\alpha}\\, \\tanh \\left (h_{\\rm T} \\sqrt{1-\\alpha} \\right) \\coth h_{\\rm T} " }, { "math_id": 2, "text": " F = 1 - \\alpha " }, { "math_id": 3, "text": " F = \\sqrt{1- \\alpha} " }, { "math_id": 4, "text": " \\vec{v}_{\\rm diffusion} = -\\pi \\langle r^2 \\rangle D \\vec{\\nabla} c " }, { "math_id": 5, "text": " v = \\eta \\pi \\langle r \\rangle (1-\\alpha) \\langle L \\rangle k_1'' c_{\\rm c} " }, { "math_id": 6, "text": "\\begin{align}\nF &= \\frac{v_{\\rm poisoned}}{v_{\\rm unpoisoned}} \\\\\n &= \\frac{\\tanh[(1-\\alpha) h_{\\rm T}]\\coth h_{\\rm T}}{1 + \\alpha h_{\\rm T} \\tanh[(1-\\alpha) h_{\\rm T}]}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=6473308
64740
Jacob Bernoulli
Swiss mathematician (1655–1705) Jacob Bernoulli (also known as James in English or Jacques in French; 6 January 1655 [O.S. 27 December 1654] – 16 August 1705) was one of the many prominent mathematicians in the Swiss Bernoulli family. He sided with Gottfried Wilhelm Leibniz during the Leibniz–Newton calculus controversy and was an early proponent of Leibnizian calculus, which he made numerous contributions to; along with his brother Johann, he was one of the founders of the calculus of variations. He also discovered the fundamental mathematical constant e. However, his most important contribution was in the field of probability, where he derived the first version of the law of large numbers in his work "Ars Conjectandi". Biography. Jacob Bernoulli was born in Basel in the Old Swiss Confederacy. Following his father's wish, he studied theology and entered the ministry. But contrary to the desires of his parents, he also studied mathematics and astronomy. He traveled throughout Europe from 1676 to 1682, learning about the latest discoveries in mathematics and the sciences under leading figures of the time. This included the work of Johannes Hudde, Robert Boyle, and Robert Hooke. During this time he also produced an incorrect theory of comets. Bernoulli returned to Switzerland, and began teaching mechanics at the University of Basel from 1683. His doctoral dissertation "Solutionem tergemini problematis" was submitted in 1684. It appeared in print in 1687. In 1684, Bernoulli married Judith Stupanus; they had two children. During this decade, he also began a fertile research career. His travels allowed him to establish correspondence with many leading mathematicians and scientists of his era, which he maintained throughout his life. During this time, he studied the new discoveries in mathematics, including Christiaan Huygens's "De ratiociniis in aleae ludo", Descartes' "La Géométrie" and Frans van Schooten's supplements of it. He also studied Isaac Barrow and John Wallis, leading to his interest in infinitesimal geometry. Apart from these, it was between 1684 and 1689 that many of the results that were to make up "Ars Conjectandi" were discovered. He was appointed professor of mathematics at the University of Basel in 1687, remaining in this position for the rest of his life. By that time, he had begun tutoring his brother Johann Bernoulli on mathematical topics. The two brothers began to study the calculus as presented by Leibniz in his 1684 paper on the differential calculus in "Nova Methodus pro Maximis et Minimis" published in "Acta Eruditorum". They also studied the publications of von Tschirnhaus. It must be understood that Leibniz's publications on the calculus were very obscure to mathematicians of that time and the Bernoullis were among the first to try to understand and apply Leibniz's theories. Jacob collaborated with his brother on various applications of calculus. However the atmosphere of collaboration between the two brothers turned into rivalry as Johann's own mathematical genius began to mature, with both of them attacking each other in print, and posing difficult mathematical challenges to test each other's skills. By 1697, the relationship had completely broken down. The lunar crater Bernoulli is also named after him jointly with his brother Johann. Important works. Jacob Bernoulli's first important contributions were a pamphlet on the parallels of logic and algebra published in 1685, work on probability in 1685 and geometry in 1687. 
His geometry result gave a construction to divide any triangle into four equal parts with two perpendicular lines. By 1689, he had published important work on infinite series and his law of large numbers in probability theory. Jacob Bernoulli published five treatises on infinite series between 1682 and 1704. The first two of these contained many results, such as the fundamental result that formula_0 diverges, which Bernoulli believed to be new but which had actually been proved by Pietro Mengoli 40 years earlier and by Nicole Oresme as early as the 14th century. Bernoulli could not find a closed form for formula_1, but he did show that it converged to a finite limit less than 2. Euler was the first to find the limit of this series in 1737. Bernoulli also studied the exponential series which came out of examining compound interest. In May 1690, in a paper published in "Acta Eruditorum", Jacob Bernoulli showed that the problem of determining the isochrone is equivalent to solving a first-order nonlinear differential equation. The isochrone, or curve of constant descent, is the curve along which a particle will descend under gravity from any point to the bottom in exactly the same time, no matter what the starting point. It had been studied by Huygens in 1687 and Leibniz in 1689. After finding the differential equation, Bernoulli then solved it by what we now call separation of variables. Jacob Bernoulli's paper of 1690 is important for the history of calculus, since the term integral appears for the first time with its integration meaning. In 1696, Bernoulli solved the equation, now called the Bernoulli differential equation, formula_2 Jacob Bernoulli also discovered a general method to determine evolutes of a curve as the envelope of its circles of curvature. He also investigated caustic curves and in particular he studied these associated curves of the parabola, the logarithmic spiral and epicycloids around 1692. The lemniscate of Bernoulli was first conceived by Jacob Bernoulli in 1694. In 1695, he investigated the drawbridge problem which seeks the curve required so that a weight sliding along the cable always keeps the drawbridge balanced. Bernoulli's most original work was "Ars Conjectandi", published in Basel in 1713, eight years after his death. The work was incomplete at the time of his death but it is still a work of the greatest significance in the theory of probability. The book also covers other related subjects, including a review of combinatorics, in particular the work of van Schooten, Leibniz, and Prestet, as well as the use of Bernoulli numbers in a discussion of the exponential series. Inspired by Huygens' work, Bernoulli also gives many examples on how much one would expect to win playing various games of chance. The term Bernoulli trial resulted from this work. In the last part of the book, Bernoulli sketches many areas of mathematical probability, including probability as a measurable degree of certainty; necessity and chance; moral versus mathematical expectation; a priori and a posteriori probability; expectation of winning when players are divided according to dexterity; regard of all available arguments, their valuation, and their calculable evaluation; and the law of large numbers. Bernoulli was one of the most significant promoters of the formal methods of higher analysis. Astuteness and elegance are seldom found in his method of presentation and expression, but there is a maximum of integrity. Discovery of the mathematical constant "e".
In 1683, Bernoulli discovered the constant e by studying a question about compound interest which required him to find the value of the following expression (which is in fact "e"): formula_3 One example is an account that starts with $1.00 and pays 100 percent interest per year. If the interest is credited once, at the end of the year, the value is $2.00; but if the interest is computed and added twice in the year, the $1 is multiplied by 1.5 twice, yielding $1.00×1.5^2 = $2.25. Compounding quarterly yields $1.00×1.25^4 = $2.4414..., and compounding monthly yields $1.00×(1.0833...)^12 = $2.613035... Bernoulli noticed that this sequence approaches a limit (the force of interest) as the compounding intervals become more numerous and smaller. Compounding weekly yields $2.692597..., while compounding daily yields $2.714567..., just two cents more. Using "n" as the number of compounding intervals, with interest of 100% / "n" in each interval, the limit for large "n" is the number that Euler later named "e"; with "continuous" compounding, the account value will reach $2.7182818... More generally, an account that starts at $1, and yields (1+R) dollars at compound interest, will yield "e"^R dollars with continuous compounding. Tombstone. Bernoulli wanted a logarithmic spiral and the motto "Eadem mutata resurgo" ('Although changed, I rise again the same') engraved on his tombstone. He wrote that the self-similar spiral "may be used as a symbol, either of fortitude and constancy in adversity, or of the human body, which after all its changes, even after death, will be restored to its exact and perfect self". Bernoulli died in 1705, but an Archimedean spiral was engraved rather than a logarithmic one. Translation of Latin inscription: Jacob Bernoulli, the incomparable mathematician. Professor at the University of Basel for more than 18 years; member of the Royal Academies of Paris and Berlin; famous for his writings. Of a chronic illness, of sound mind to the end; succumbed in the year of grace 1705, the 16th of August, at the age of 50 years and 7 months, awaiting the resurrection. Judith Stupanus, his wife for 20 years, and his two children have erected a monument to the husband and father they miss so much. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
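The compounding computation that led Bernoulli to e is easy to reproduce; the short sketch below simply evaluates (1 + 1/n)^n for the compounding frequencies mentioned in the text.

```python
def value_after_one_year(n):
    """Value of $1 after one year at 100% annual interest compounded n times."""
    return (1 + 1 / n) ** n

for n in (1, 2, 4, 12, 52, 365):
    print(n, round(value_after_one_year(n), 6))
# 1 -> 2.0, 2 -> 2.25, 4 -> 2.441406, 12 -> 2.613035,
# 52 -> 2.692597, 365 -> 2.714567; the limit for large n is e = 2.7182818...
```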
[ { "math_id": 0, "text": "\\sum{\\frac{1}{n}}" }, { "math_id": 1, "text": "\\sum{\\frac{1}{n^2}}" }, { "math_id": 2, "text": " y' = p(x)y + q(x)y^n. " }, { "math_id": 3, "text": "\\lim_{n\\to\\infty} \\left( 1 + \\frac{1}{n} \\right)^n" } ]
https://en.wikipedia.org/wiki?curid=64740
64740557
Lyth bound
In cosmological inflation, within the slow-roll paradigm, the Lyth argument places a theoretical upper bound on the amount of gravitational waves produced during inflation, given the amount of departure from the homogeneity of the cosmic microwave background (CMB). Detail. The argument was first introduced by David H. Lyth in his 1997 paper "What Would We Learn by Detecting a Gravitational Wave Signal in the Cosmic Microwave Background Anisotropy?" The detailed argument is as follows: The power spectrum for curvature perturbations formula_0 is given by: formula_1, whereas the power spectrum for tensor perturbations is given by: formula_2, in which formula_3 is the Hubble parameter, formula_4 is the wave number, formula_5 is the Planck mass and formula_6 is the first slow-roll parameter, given by formula_7. Thus the ratio of tensor to scalar power spectra at a certain wave number formula_4, denoted as the so-called tensor-to-scalar ratio formula_8, is given by: formula_9. While strictly speaking formula_10 is a function of formula_11, during slow-roll inflation it is understood to change very mildly, so it is customary to simply omit the wavenumber dependence. Additionally, the numeric pre-factor is susceptible to slight changes owing to more detailed calculations but is usually between formula_12. Although the slow-roll parameter is given as above, it was shown that in the slow-roll limit this parameter can be given by the slope of the inflationary potential, such that: formula_13, in which formula_14 is the inflationary potential over a scalar field formula_15. Thus, formula_16, and the upper bound on formula_17 placed by CMB measurements and the lack of a gravitational wave signal is translated to an upper bound on the steepness of the inflationary potential. Acceptance and significance. Although the Lyth bound argument was adopted relatively slowly, it has been used in many subsequent theoretical works. The original argument deals only with the original inflationary time period that is reflected in the CMB signature, which at the time was about 5 e-folds, as opposed to about 8 e-folds to date. However, an effort was made to generalize this argument to the entire span of physical inflation, which corresponds to the order of 50 to 60 e-folds. On the basis of these generalized arguments, an unnecessarily constraining view arose, which favored realizations of inflation based on large-field models, as opposed to small-field models. This view was prevalent until the last decade, which saw a revival in small-field model prevalence due to theoretical works that pointed to plausible small-field model candidates. The likelihood of these models was further developed and numerically demonstrated. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
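As a purely numerical illustration of the slow-roll expression formula_13 for the first slow-roll parameter, the sketch below evaluates it for a hypothetical quadratic potential V(φ) = ½m²φ² in units where the Planck mass is 1; the choice of potential and the field values are assumptions made for the example, not part of Lyth's argument.

```python
def slow_roll_epsilon(V, dV, phi):
    """First slow-roll parameter: epsilon = 0.5 * (V'(phi) / V(phi))**2."""
    return 0.5 * (dV(phi) / V(phi)) ** 2

# Hypothetical quadratic potential V = 0.5 * m**2 * phi**2; m cancels in V'/V.
V = lambda phi: 0.5 * phi ** 2
dV = lambda phi: phi

for phi in (5.0, 10.0, 15.0):
    print(phi, slow_roll_epsilon(V, dV, phi))   # epsilon = 2 / phi**2
```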
[ { "math_id": 0, "text": "\\Psi" }, { "math_id": 1, "text": "P_{\\Psi}(k)=\\frac{8\\pi}{9k^3}\\frac{H^2}{\\epsilon M_{pl}^2}\\big|_{aH=k}" }, { "math_id": 2, "text": "P_{h}(k)=\\frac{8\\pi}{k^3}\\frac{H^2}{M_{pl}^2}\\big|_{aH=k}" }, { "math_id": 3, "text": "H" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "M_{pl}" }, { "math_id": 6, "text": "\\epsilon" }, { "math_id": 7, "text": "\\frac{-\\dot{H}}{H^2}" }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "r(k)\\equiv \\frac{P_{\\Psi}(k)}{P_{h}(k)}\\big|_{aH=k}=\\frac{1}{9\\epsilon(k)} " }, { "math_id": 10, "text": "\\epsilon " }, { "math_id": 11, "text": "k " }, { "math_id": 12, "text": "\\frac{1}{9}\\sim \\frac{1}{16} " }, { "math_id": 13, "text": "\\epsilon=\\frac{1}{2}\\left(\\frac{\\partial_{\\phi}V(\\phi)}{V(\\phi)}\\right)^2 " }, { "math_id": 14, "text": "V(\\phi) " }, { "math_id": 15, "text": "\\phi " }, { "math_id": 16, "text": "\\left|\\frac{\\partial_{\\phi} V}{V}\\right|\\propto \\sqrt{r} " }, { "math_id": 17, "text": "r " } ]
https://en.wikipedia.org/wiki?curid=64740557
64743403
Mixed Hodge structure
Algebraic structure In algebraic geometry, a mixed Hodge structure is an algebraic structure containing information about the cohomology of general algebraic varieties. It is a generalization of a Hodge structure, which is used to study smooth projective varieties. In mixed Hodge theory, the decomposition of a cohomology group formula_0 may have subspaces of different weights, i.e. it may decompose as a direct sum of Hodge structures formula_1 where each of the Hodge structures has weight formula_2. One of the early hints that such structures should exist comes from the long exact sequence formula_3 associated to a pair of smooth projective varieties formula_4. This sequence suggests that the cohomology groups formula_5 (for formula_6) should have differing weights coming from both formula_7 and formula_8. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Motivation. Originally, Hodge structures were introduced as a tool for keeping track of abstract Hodge decompositions on the cohomology groups of smooth projective algebraic varieties. These structures gave geometers new tools for studying algebraic curves, such as the Torelli theorem, Abelian varieties, and the cohomology of smooth projective varieties. One of the chief results for computing Hodge structures is an explicit decomposition of the cohomology groups of smooth hypersurfaces using the relation between the Jacobian ideal and the Hodge decomposition of a smooth projective hypersurface through Griffiths' residue theorem. Porting this language to smooth non-projective varieties and singular varieties requires the concept of mixed Hodge structures. Definition. A mixed Hodge structure (MHS) is a triple formula_9 such that formula_10 is a formula_11-module of finite type, formula_12 is an increasing filtration formula_14 on formula_13, and formula_15 is a decreasing formula_16-indexed filtration formula_18 on formula_17, where the graded pieces formula_19, with the filtration induced by formula_15, are pure Hodge structures of weight formula_20. Remark on filtrations. Note that, similar to Hodge structures, mixed Hodge structures use a filtration instead of a direct sum decomposition since the cohomology groups with anti-holomorphic terms, formula_21 where formula_22, do not vary holomorphically. But the filtrations can vary holomorphically, giving a better defined structure. Morphisms of mixed Hodge structures. Morphisms of mixed Hodge structures are defined by maps of abelian groups formula_23 such that formula_24 and the induced map of formula_25-vector spaces has the property formula_26 Further definitions and properties. Hodge numbers. The Hodge numbers of a MHS are defined as the dimensions formula_27 since formula_28 is a weight formula_29 Hodge structure, and formula_30 is the formula_31-component of a weight formula_29 Hodge structure. Homological properties. There is an Abelian category of mixed Hodge structures which has vanishing formula_32-groups whenever the cohomological degree is greater than formula_33: that is, given mixed Hodge structures formula_34, the groups formula_35 for formula_36 (p. 83). Mixed Hodge structures on bi-filtered complexes. Many mixed Hodge structures can be constructed from a bifiltered complex. This includes complements of smooth varieties defined by the complement of a normal crossing variety. Given a complex of sheaves of abelian groups formula_37 and filtrations formula_38 of the complex, meaning formula_39 there is an induced mixed Hodge structure on the hypercohomology groups formula_40 from the bi-filtered complex formula_41. Such a bi-filtered complex is called a mixed Hodge complex. Logarithmic complex.
Given a smooth variety formula_42 where formula_43 is a normal crossing divisor (meaning all intersections of components are complete intersections), there are filtrations on the logarithmic de Rham complex formula_44 given by formula_45 It turns out these filtrations define a natural mixed Hodge structure on the cohomology group formula_46 from the mixed Hodge complex defined on the logarithmic complex formula_44. Smooth compactifications. The above construction of the logarithmic complex extends to every smooth variety, and the mixed Hodge structure is isomorphic under any such compactification. Note a smooth compactification of a smooth variety formula_47 is defined as a smooth variety formula_48 and an embedding formula_49 such that formula_43 is a normal crossing divisor. That is, given compactifications formula_50 with boundary divisors formula_51 there is an isomorphism of mixed Hodge structures formula_52 showing the mixed Hodge structure is invariant under smooth compactification. Example. For example, on a genus formula_53 plane curve formula_54, the logarithmic cohomology of formula_54 with the normal crossing divisor formula_55 with formula_56 can be easily computed since the terms of the complex formula_57, equal to formula_58, are both acyclic. Then, the hypercohomology is just formula_59 The first vector space consists of just the constant sections, hence the differential is the zero map. The second vector space is isomorphic to the vector space spanned by formula_60 Then formula_61 has a weight formula_62 mixed Hodge structure and formula_63 has a weight formula_53 mixed Hodge structure. Examples. Complement of a smooth projective variety by a closed subvariety. Given a smooth projective variety formula_64 of dimension formula_65 and a closed subvariety formula_4 there is a long exact sequence in cohomology (pp. 7–8) formula_66 coming from the distinguished triangle formula_67 of constructible sheaves. There is another long exact sequence formula_68 from the distinguished triangle formula_69 whenever formula_48 is smooth. Note the homology groups formula_70 are called Borel–Moore homology, which are dual to cohomology for general spaces, and the formula_71 means tensoring with the Tate structure formula_72, which adds weight formula_73 to the weight filtration. The smoothness hypothesis is required because Verdier duality implies formula_74, and formula_75 whenever formula_76 is smooth. Also, the dualizing complex for formula_76 has weight formula_65, hence formula_77. Also, the maps from Borel–Moore homology must be twisted by up to weight formula_71 in order for them to have a map to formula_78. Also, there is the perfect duality pairing formula_79 giving an isomorphism of the two groups. Algebraic torus. A one dimensional algebraic torus formula_80 is isomorphic to the variety formula_81, hence its cohomology groups are isomorphic to formula_82 The long exact sequence then reads formula_83 Since formula_84 and formula_85 this gives the exact sequence formula_86 Since there is a twisting of weights for well-defined maps of mixed Hodge structures, there is the isomorphism formula_87 Quartic K3 surface minus a genus 3 curve. Given a quartic K3 surface formula_48 and a genus 3 curve formula_88 defined by the vanishing locus of a generic section of formula_89 (hence isomorphic to a degree formula_90 plane curve, which has genus 3), the Gysin sequence gives the long exact sequence formula_91 But, it is a result that the maps formula_92 take a Hodge class of type formula_93 to a Hodge class of type formula_94.
The Hodge structures for both the K3 surface and the curve are well-known, and can be computed using the Jacobian ideal. In the case of the curve there are two zero maps formula_95 formula_96 hence formula_97 contains the weight one pieces formula_98. The group formula_99 has dimension formula_100, but the Lefschetz class formula_101 is killed off by the map formula_102 sending the formula_103 class in formula_104 to the formula_105 class in formula_106. Then the primitive cohomology group formula_107 is the weight 2 piece of formula_97. Therefore, formula_108 The induced filtrations on these graded pieces are the Hodge filtrations coming from each cohomology group.
[ { "math_id": 0, "text": "H^k(X)" }, { "math_id": 1, "text": "H^k(X) = \\bigoplus_i (H_i, F_i^\\bullet)" }, { "math_id": 2, "text": "k_i" }, { "math_id": 3, "text": "\\dots \\to H^{i-1}(Y) \\to H^i_c(U) \\to H^i(X) \\to \\dots" }, { "math_id": 4, "text": "Y \\subset X " }, { "math_id": 5, "text": "H^i_c(U) " }, { "math_id": 6, "text": "U = X - Y " }, { "math_id": 7, "text": "H^{i-1}(Y) " }, { "math_id": 8, "text": "H^i(X) " }, { "math_id": 9, "text": "(H_\\mathbb{Z},W_\\bullet, F^\\bullet)" }, { "math_id": 10, "text": "H_\\mathbb{Z}" }, { "math_id": 11, "text": "\\mathbb{Z}" }, { "math_id": 12, "text": "W_\\bullet" }, { "math_id": 13, "text": "H_\\mathbb{Q} = H_\\mathbb{Z}\\otimes\\mathbb{Q}" }, { "math_id": 14, "text": "\\cdots \\subset W_0 \\subset W_1 \\subset W_2 \\subset \\cdots" }, { "math_id": 15, "text": "F^\\bullet" }, { "math_id": 16, "text": "\\mathbb{N}" }, { "math_id": 17, "text": "H_\\mathbb{C}" }, { "math_id": 18, "text": "H_\\mathbb{C} = F^0 \\supset F^1 \\supset F^2 \\supset \\cdots " }, { "math_id": 19, "text": "\\text{Gr}^{W_\\bullet}H_\\mathbb{Q} = \\frac{W_kH_\\mathbb{Q}}{W_{k-1}H_\\mathbb{Q}}" }, { "math_id": 20, "text": "k" }, { "math_id": 21, "text": "H^{p,q}" }, { "math_id": 22, "text": "q > 0" }, { "math_id": 23, "text": "f:(H_\\mathbb{Z},W_\\bullet, F^\\bullet) \\to (H_\\mathbb{Z}',W_\\bullet', F'^\\bullet)" }, { "math_id": 24, "text": "f(W_l) \\subset W'_l" }, { "math_id": 25, "text": "\\mathbb{C}" }, { "math_id": 26, "text": "f_\\mathbb{C}(F^p) \\subset F'^p" }, { "math_id": 27, "text": "h^{p,q}(H_\\mathbb{Z}) = \\dim_\\mathbb{C}\\text{Gr}_{F^\\bullet}^p\\text{Gr}_{p+q}^{W_\\bullet}H_\\mathbb{C} " }, { "math_id": 28, "text": "\\text{Gr}_{p+q}^{W_\\bullet}H_\\mathbb{C} " }, { "math_id": 29, "text": "(p+q) " }, { "math_id": 30, "text": "\\text{Gr}_p^{F^\\bullet} = \\frac{F^p}{F^{p+1}} " }, { "math_id": 31, "text": "(p,q) " }, { "math_id": 32, "text": "\\text{Ext}" }, { "math_id": 33, "text": "1" }, { "math_id": 34, "text": "(H_\\mathbb{Z},W_\\bullet, F^\\bullet), (H_\\mathbb{Z}',W_\\bullet', F'^\\bullet)" }, { "math_id": 35, "text": "\\operatorname{Ext}_{MHS}^p((H_\\mathbb{Z},W_\\bullet, F^\\bullet), (H_\\mathbb{Z}',W_\\bullet', F'^\\bullet)) = 0" }, { "math_id": 36, "text": "p \\geq 2" }, { "math_id": 37, "text": "A^\\bullet" }, { "math_id": 38, "text": "W_\\bullet, F^\\bullet" }, { "math_id": 39, "text": "\\begin{align}\nd(W_iA^\\bullet) &\\subset W_iA^\\bullet \\\\\nd(F^iA^\\bullet) &\\subset F^iA^\\bullet\n\\end{align}" }, { "math_id": 40, "text": "(\\mathbb{H}^k(X,A^\\bullet), W_\\bullet, F^\\bullet)" }, { "math_id": 41, "text": "(A^\\bullet, W_\\bullet, F^\\bullet)" }, { "math_id": 42, "text": "U \\subset X" }, { "math_id": 43, "text": "D = X - U" }, { "math_id": 44, "text": "\\Omega_X^\\bullet(\\log D)" }, { "math_id": 45, "text": "\\begin{align}\nW_m\\Omega^i_X(\\log D) &= \\begin{cases}\n\\Omega_X^i(\\log D) & \\text{ if } i \\leq m \\\\\n\\Omega_X^{i-m}\\wedge \\Omega_X^m(\\log D) & \\text{ if }0 \\leq m \\leq i \\\\\n0 & \\text{ if } m < 0\n\\end{cases} \\\\[6pt]\nF^p\\Omega^i_X(\\log D) &= \\begin{cases}\n\\Omega_X^i(\\log D) & \\text{ if } p \\leq i \\\\\n0 & \\text{ otherwise}\n\\end{cases}\n\\end{align}" }, { "math_id": 46, "text": "H^n(U,\\mathbb{C})" }, { "math_id": 47, "text": "U" }, { "math_id": 48, "text": "X" }, { "math_id": 49, "text": "U \\hookrightarrow X" }, { "math_id": 50, "text": "U \\subset X, X'" }, { "math_id": 51, "text": "D = X - U, \\text{ } D' = X' - U" }, { "math_id": 52, "text": "(\\mathbb{H}^k(X,\\Omega_X^\\bullet(\\log D)), 
W_\\bullet, F^\\bullet)\n\\cong(\\mathbb{H}^k(X',\\Omega_{X'}^\\bullet(\\log D')), W_\\bullet, F^\\bullet)" }, { "math_id": 53, "text": "0" }, { "math_id": 54, "text": "C" }, { "math_id": 55, "text": "\\{p_1 ,\\ldots, p_k\\}" }, { "math_id": 56, "text": "k \\geq 1" }, { "math_id": 57, "text": "\\Omega_C^\\bullet(\\log D)" }, { "math_id": 58, "text": "\\mathcal{O}_C \\xrightarrow{d} \\Omega_C^1(\\log D)" }, { "math_id": 59, "text": "\\Gamma(\\mathcal{O}_{\\mathbb{P}^1}) \\xrightarrow{d} \\Gamma(\\Omega_{\\mathbb{P}^1}(\\log D))" }, { "math_id": 60, "text": "\\mathbb{C} \\cdot \\frac{dx}{x - p_1}\\oplus \\cdots \\oplus \\mathbb{C} \\frac{dx}{x-p_{k-1}}" }, { "math_id": 61, "text": "\\mathbb{H}^1(\\Omega_C^1(\\log D))" }, { "math_id": 62, "text": "2" }, { "math_id": 63, "text": "\\mathbb{H}^0(\\Omega_C^1(\\log D))" }, { "math_id": 64, "text": "X " }, { "math_id": 65, "text": "n " }, { "math_id": 66, "text": "\\cdots \\to H^m_c(U;\\mathbb{Z}) \\to H^m(X;\\mathbb{Z}) \\to H^m(Y;\\mathbb{Z}) \\to H^{m+1}_c(U;\\mathbb{Z}) \\to \\cdots " }, { "math_id": 67, "text": "\\mathbf{R}j_!\\mathbb{Z}_U \\to \\mathbb{Z}_X \\to i_*\\mathbb{Z}_Y \\xrightarrow{[+1]} " }, { "math_id": 68, "text": "\\cdots \\to H^{BM}_{2n-m}(Y;\\mathbb{Z})(-n) \\to H^m(X;\\mathbb{Z}) \\to H^m(U;\\mathbb{Z})\n\\to H^{BM}_{2n-m-1}(Y;\\mathbb{Z})(-n) \\to \\cdots " }, { "math_id": 69, "text": "i_*i^!\\mathbb{Z}_X \\to \\mathbb{Z}_X \\to \\mathbf{R}j_*\\mathbb{Z}_U \\xrightarrow{[+1]} " }, { "math_id": 70, "text": "H^{BM}_k(X) " }, { "math_id": 71, "text": "(n) " }, { "math_id": 72, "text": "\\mathbb{Z}(1)^{\\otimes n} " }, { "math_id": 73, "text": "-2n" }, { "math_id": 74, "text": "i^!D_X = D_Y " }, { "math_id": 75, "text": "D_X \\cong \\mathbb{Z}_X[2n] " }, { "math_id": 76, "text": "X " }, { "math_id": 77, "text": "D_X \\cong \\mathbb{Z}_X[2n](n) " }, { "math_id": 78, "text": "H^m(X) " }, { "math_id": 79, "text": "H^{BM}_{2n-m}(Y)\\times H^m(Y) \\to \\mathbb{Z} " }, { "math_id": 80, "text": "\\mathbb{T}" }, { "math_id": 81, "text": "\\mathbb{P}^1-\\{0,\\infty \\}" }, { "math_id": 82, "text": "\\begin{align}\nH^0(\\mathbb{T})\\oplus H^1(\\mathbb{T}) & \\cong \\mathbb{Z} \\oplus \\mathbb{Z}\n\\end{align}" }, { "math_id": 83, "text": "\\begin{matrix}\n&H_2^{BM}(Y)(-1) \\to H^0(\\mathbb{P}^1) \\to H^0(\\mathbb{G}_m) \\to \\text{ } \\\\\n&H_1^{BM}(Y)(-1) \\to H^1(\\mathbb{P}^1) \\to H^1(\\mathbb{G}_m) \\to \\text{ } \\\\\n&H_0^{BM}(Y)(-1) \\to H^2(\\mathbb{P}^1) \\to H^2(\\mathbb{G}_m) \\to 0\n\\end{matrix} " }, { "math_id": 84, "text": "H^1(\\mathbb{P}^1) = 0 " }, { "math_id": 85, "text": "H^2(\\mathbb{G}_m) = 0 " }, { "math_id": 86, "text": "0 \\to H^1(\\mathbb{G}_m) \\to H_0^{BM}(Y)(-1) \\to H^2(\\mathbb{P}^1) \\to 0 " }, { "math_id": 87, "text": "H^1(\\mathbb{G}_m) \\cong \\mathbb{Z}(-1) " }, { "math_id": 88, "text": "i:C \\hookrightarrow X" }, { "math_id": 89, "text": "\\mathcal{O}_X(1)" }, { "math_id": 90, "text": "4" }, { "math_id": 91, "text": "\\to H^{k-2}(C) \\xrightarrow{\\gamma_k} H^k(X) \\xrightarrow{i^*} H^k(U) \\xrightarrow{R} H^{k-1}(C) \\to" }, { "math_id": 92, "text": "\\gamma_k" }, { "math_id": 93, "text": "(p,q)" }, { "math_id": 94, "text": "(p+1,q+1)" }, { "math_id": 95, "text": "\\gamma_3:H^{1,0}(C) \\to H^{2,1}(X) = 0" }, { "math_id": 96, "text": "\\gamma_3:H^{0,1}(C) \\to H^{1,2}(X) = 0" }, { "math_id": 97, "text": "H^2(U)" }, { "math_id": 98, "text": "H^{1,0}(C) \\oplus H^{0,1}(C)" }, { "math_id": 99, "text": "H^2(X) = H^2_\\text{prim}(X)\\oplus \\mathbb{C}\\cdot \\mathbb{L}" }, { "math_id": 100, "text": "22" 
}, { "math_id": 101, "text": "\\mathbb{L}" }, { "math_id": 102, "text": "\\gamma_2:H^0(C) \\to H^2(X)" }, { "math_id": 103, "text": "(0,0)" }, { "math_id": 104, "text": "H^0(C)" }, { "math_id": 105, "text": "(1,1)" }, { "math_id": 106, "text": "H^2(X)" }, { "math_id": 107, "text": "H^2_\\text{prim}(X)" }, { "math_id": 108, "text": "\\begin{align}\n\\text{Gr}_2^{W_\\bullet}H^2(U) &= H^2_\\text{prim}(X)\\\\\n\\text{Gr}_1^{W_\\bullet}H^2(U) &= H^1(C) \\\\\n\\text{Gr}_k^{W_\\bullet}H^2(U) &= 0 & k \\neq 1,2\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=64743403
647470
Chebyshev nodes
Roots of the Chebyshev polynomials of the first kind In numerical analysis, Chebyshev nodes are a set of specific real algebraic numbers, used as nodes for polynomial interpolation. They are the projection of equispaced points on the unit circle onto the real interval formula_0 the diameter of the circle. The "Chebyshev nodes of the first kind", also called the Chebyshev zeros, are the zeros of the Chebyshev polynomials of the first kind. The "Chebyshev nodes of the second kind", also called the Chebyshev extrema, are the extrema of the Chebyshev polynomials of the first kind, which are also the zeros of the Chebyshev polynomials of the second kind. Both of these sets of numbers are commonly referred to as "Chebyshev nodes" in literature. Polynomial interpolants constructed from these nodes minimize the effect of Runge's phenomenon. Definition. For a given positive integer formula_1 the Chebyshev nodes of the first kind in the open interval formula_2 are formula_3 These are the roots of the Chebyshev polynomials of the first kind with degree formula_1. For nodes over an arbitrary interval formula_4 an affine transformation can be used: formula_5 Similarly, for a given positive integer formula_1 the Chebyshev nodes of the second kind in the closed interval formula_6 are formula_7 These are the roots of the Chebyshev polynomials of the second kind with degree formula_1. For nodes over an arbitrary interval formula_8 an affine transformation can be used as above. The Chebyshev nodes of the second kind are also referred to as Chebyshev-Lobatto points or Chebyshev extreme points. Note that the Chebyshev nodes of the second kind include the end points of the interval while the Chebyshev nodes of the first kind do not include the end points. These formulas generate Chebyshev nodes which are sorted from greatest to least on the real interval. Both kinds of nodes are always symmetric about the midpoint of the interval. Hence, for odd formula_1, both kinds of nodes will include the midpoint. Geometrically, for both kinds of nodes, we first place formula_1 points on the upper half of the unit circle with equal spacing between them. Then the points are projected down to the formula_9-axis. The projected points on the formula_9-axis are called Chebyshev nodes. Approximation. The Chebyshev nodes are important in approximation theory because they form a particularly good set of nodes for polynomial interpolation. Given a function "f" on the interval formula_10 and formula_1 points formula_11 in that interval, the interpolation polynomial is that unique polynomial formula_12 of degree at most formula_13 which has value formula_14 at each point formula_15. The interpolation error at formula_9 is formula_16 for some formula_17 (depending on x) in [−1, 1]. So it is logical to try to minimize formula_18 This product is a "monic" polynomial of degree n. It may be shown that the maximum absolute value (maximum norm) of any such polynomial is bounded from below by 2^(1−"n"). This bound is attained by the scaled Chebyshev polynomials 2^(1−"n") "T""n", which are also monic. (Recall that |"T""n"("x")| ≤ 1 for "x" ∈ [−1, 1].) Therefore, when the interpolation nodes "x""i" are the roots of "T""n", the error satisfies formula_19 For an arbitrary interval ["a", "b"] a change of variable shows that formula_20 Even order modified Chebyshev nodes. Many applications for Chebyshev nodes, such as the design of equally terminated passive Chebyshev filters, cannot use Chebyshev nodes directly, due to the lack of a root at 0.
However, the Chebyshev nodes may be modified into a usable form by translating the roots down such that the lowest roots are moved to zero, thereby creating two roots at zero of the modified Chebyshev nodes. The even order modification translation is: formula_21 The sign of the formula_22 function is chosen to be the same as the sign of formula_23. For example, the Chebyshev nodes for a 4th order Chebyshev function are {0.92388,0.382683,-0.382683,-0.92388}, and formula_24 is formula_25, or 0.146446. Running all the nodes through the translation yields formula_26 as {0.910180, 0, 0, -0.910180}. The set of modified even order Chebyshev nodes now contains two nodes at zero, and is suitable for use in designing even order Chebyshev filters with equally terminated passive element networks. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
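The node formulas and the even-order modification can be checked with a few lines of code. The sketch below is illustrative only; the helper names are mine, and the printed values should reproduce the 4th-order numbers quoted above.

```python
import math

def chebyshev_nodes_first_kind(n):
    """Roots of the degree-n Chebyshev polynomial of the first kind,
    sorted from greatest to least."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def modified_even_order_nodes(nodes):
    """Translate even-order nodes so the pair closest to zero moves to zero:
    X_k' = sqrt((X_k**2 - X_mid**2) / (1 - X_mid**2)), keeping the sign of X_k."""
    x_mid_sq = min(x * x for x in nodes)      # square of the node(s) nearest zero
    out = []
    for x in nodes:
        val = math.sqrt(max(x * x - x_mid_sq, 0.0) / (1 - x_mid_sq))
        out.append(val if x > 0 else -val if val > 0 else 0.0)
    return out

nodes = chebyshev_nodes_first_kind(4)
print([round(x, 6) for x in nodes])   # [0.92388, 0.382683, -0.382683, -0.92388]
print([round(x, 6) for x in modified_even_order_nodes(nodes)])   # [0.91018, 0.0, 0.0, -0.91018]
```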
[ { "math_id": 0, "text": "[-1, 1]," }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "(-1,1)" }, { "math_id": 3, "text": "x_k = \\cos\\left(\\frac{2k+1}{2n}\\pi\\right), \\quad k = 0, \\ldots, n-1." }, { "math_id": 4, "text": "(a,b)" }, { "math_id": 5, "text": "x_k = \\frac{(a + b)}{2} + \\frac{(b - a)}{2} \\cos\\left(\\frac{2k+1}{2n}\\pi\\right), \\quad k = 0, \\ldots, n-1." }, { "math_id": 6, "text": "[-1,1]" }, { "math_id": 7, "text": "x_k = \\cos\\left(\\frac{k}{n-1}\\pi\\right), \\quad k = 0, \\ldots, n-1." }, { "math_id": 8, "text": "[a,b]" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "[-1,+1]" }, { "math_id": 11, "text": "x_1, x_2, \\ldots , x_n," }, { "math_id": 12, "text": "P_{n-1}" }, { "math_id": 13, "text": "n - 1" }, { "math_id": 14, "text": "f(x_i)" }, { "math_id": 15, "text": "x_i" }, { "math_id": 16, "text": "f(x) - P_{n-1}(x) = \\frac{f^{(n)}(\\xi)}{n!} \\prod_{i=1}^n (x-x_i) " }, { "math_id": 17, "text": "\\xi" }, { "math_id": 18, "text": "\\max_{x \\in [-1,1]} \\biggl| \\prod_{i=1}^n (x-x_i) \\biggr|. " }, { "math_id": 19, "text": "\\left|f(x) - P_{n-1}(x)\\right| \\le \\frac{1}{2^{n - 1}n!} \\max_{\\xi \\in [-1,1]} \\left| f^{(n)} (\\xi) \\right|." }, { "math_id": 20, "text": "\\left|f(x) - P_{n-1}(x)\\right| \\le \\frac{1}{2^{n - 1}n!} \\left(\\frac{b-a}{2}\\right)^n \\max_{\\xi \\in [a,b]} \\left|f^{(n)} (\\xi)\\right|." }, { "math_id": 21, "text": "X_kEven = \\sqrt{\\frac{X_k^2-X_{n/2}^2}{1-X_{n/2}^2}} \\text{ For } n>2" }, { "math_id": 22, "text": "\\sqrt{ }" }, { "math_id": 23, "text": "X_k" }, { "math_id": 24, "text": "X_{n/2}^2" }, { "math_id": 25, "text": "0.382683^2" }, { "math_id": 26, "text": "X_kEven" } ]
https://en.wikipedia.org/wiki?curid=647470
6474767
Freundlich equation
Empirical adsorption isotherm The Freundlich equation or Freundlich adsorption isotherm, an adsorption isotherm, is an empirical relationship between the quantity of a gas adsorbed onto a solid surface and the gas pressure. The same relationship is also applicable for the concentration of a solute adsorbed onto the surface of a solid and the concentration of the solute in the liquid phase. In 1909, Herbert Freundlich gave an expression representing the isothermal variation, with gas pressure, of the quantity of gas adsorbed by unit mass of solid adsorbent. This equation is known as the Freundlich adsorption isotherm or Freundlich adsorption equation. As this relationship is entirely empirical, in the case where adsorption behavior can be properly fit by isotherms with a theoretical basis, it is usually appropriate to use such isotherms instead (see for example the Langmuir and BET adsorption theories). The Freundlich equation is also derived (non-empirically) by attributing the change in the equilibrium constant of the binding process to the heterogeneity of the surface and the variation in the heat of adsorption. Freundlich adsorption isotherm. The Freundlich adsorption isotherm is mathematically expressed as x/m = K·c_eq^(1/n), referred to below as equation 1. In Freundlich's notation (used for his experiments dealing with the adsorption of organic acids on coal in aqueous solutions), formula_0 signifies the ratio between the adsorbed mass or adsorbate formula_1 and the mass of the adsorbent formula_2, which in Freundlich's studies was coal. In the figure above, the x-axis represents formula_3, which denotes the equilibrium concentration of the adsorbate within the solvent. Freundlich's numerical analysis of the three organic acids yielded values for the parameters formula_4 and formula_5 according to equation 1. Freundlich's experimental data can also be used in a contemporary computer based fit. These values are added to appreciate the numerical work done in 1907. △ K and △ n values are the error bars of the computer based fit. The K and n values themselves are used to calculate the dotted lines in the figure. Equation 1 can also be written as formula_6 For experiments in the gas phase, this notation can also be found: formula_7 x = mass of adsorbate m = mass of adsorbent p = equilibrium pressure of the gaseous adsorbate in case of experiments made in the gas phase (gas/solid interaction with gaseous species/adsorbed species) K and n are constants for a given adsorbate and adsorbent at a given temperature (hence the term "isotherm", needed to avoid significant gas pressure fluctuations due to uncontrolled temperature variations in the case of adsorption experiments of a gas onto a solid phase). K = distribution coefficient n = correction factor At high pressure, 1/"n" → 0, hence the extent of adsorption becomes independent of pressure. The Freundlich equation is unique; consequently, if the data fit the equation, it is only likely, but not proved, that the surface is heterogeneous. The heterogeneity of the surface can be confirmed with calorimetry. Homogeneous surfaces (or heterogeneous surfaces that exhibit homogeneous adsorption (single site)) have a constant ΔH of adsorption. On the other hand, heterogeneous adsorption (multi-site) has a variable ΔH of adsorption depending on the percent of sites occupied. When the adsorbate pressure in the gas phase (or the concentration in solution) is low, high-energy sites will be occupied first.
As the pressure in the gas phase (or the concentration in solution) increases, the low-energy sites will then be occupied resulting in a weaker ΔH of adsorption. Limitation of Freundlich adsorption isotherm. Experimentally it was determined that extent of gas adsorption varies directly with pressure, and then it directly varies with pressure raised to the power 1/"n" until saturation pressure "P"s is reached. Beyond that point, the rate of adsorption saturates even after applying higher pressure. Thus, the Freundlich adsorption isotherm fails at higher pressure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
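Because equation 1 becomes a straight line on a log-log plot, K and 1/n are commonly estimated by a linear fit of log(x/m) against log(c_eq). The sketch below does this on made-up data; the numbers are purely illustrative and are not Freundlich's measurements.

```python
import numpy as np

# Illustrative (made-up) equilibrium concentrations and loadings x/m.
c_eq = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
x_m = np.array([1.1, 1.5, 2.1, 2.9, 4.0])

# log(x/m) = log K + (1/n) log c_eq  ->  a straight line in log-log coordinates.
slope, intercept = np.polyfit(np.log10(c_eq), np.log10(x_m), 1)
K = 10 ** intercept
n = 1 / slope
print(f"K ~ {K:.3f}, n ~ {n:.3f}")
```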
[ { "math_id": 0, "text": "x/m" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": " m " }, { "math_id": 3, "text": "c_{\\mathrm{\\,eq}}" }, { "math_id": 4, "text": " K " }, { "math_id": 5, "text": " n " }, { "math_id": 6, "text": "\\log \\frac{x}{m} = \\log K + \\frac{1}{n} \\log c_{eq} " }, { "math_id": 7, "text": "\\log \\frac{x}{m} = \\log K + \\frac{1}{n} \\log p " } ]
https://en.wikipedia.org/wiki?curid=6474767
64761242
Zolotarev polynomials
In mathematics, Zolotarev polynomials are polynomials used in approximation theory. They are sometimes used as an alternative to the Chebyshev polynomials where accuracy of approximation near the origin is of less importance. Zolotarev polynomials differ from the Chebyshev polynomials in that two of the coefficients are fixed in advance rather than allowed to take on any value. The Chebyshev polynomials of the first kind are a special case of Zolotarev polynomials. These polynomials were introduced by Russian mathematician Yegor Ivanovich Zolotarev in 1868. Definition and properties. Zolotarev polynomials of degree formula_0 in formula_1 are of the form formula_2 where formula_3 is a prescribed value for formula_4 and the formula_5 are otherwise chosen such that the deviation of formula_6 from zero is minimum in the interval formula_7. A subset of Zolotarev polynomials can be expressed in terms of Chebyshev polynomials of the first kind, formula_8. For formula_9 then formula_10 For values of formula_3 greater than the maximum of this range, Zolotarev polynomials can be expressed in terms of elliptic functions. For formula_11, the Zolotarev polynomial is identical to the equivalent Chebyshev polynomial. For negative values of formula_3, the polynomial can be found from the polynomial of the positive value, formula_12 The Zolotarev polynomial can be expanded into a sum of Chebyshev polynomials using the relationship formula_13 In terms of Jacobi elliptic functions. The original solution to the approximation problem given by Zolotarev was in terms of Jacobi elliptic functions. Zolotarev gave the general solution where the number of zeroes to the left of the peak value (formula_14) in the interval formula_7 is not equal to the number of zeroes to the right of this peak (formula_15). The degree of the polynomial is formula_16. For many applications, formula_17 is used and then only formula_0 need be considered. The general Zolotarev polynomials are defined as formula_18 where formula_19 formula_20 formula_21 is the Jacobi eta function formula_22 is the incomplete elliptic integral of the first kind formula_23 is the quarter-wave complete elliptic integral of the first kind. That is, formula_24 formula_25 is the Jacobi elliptic modulus formula_26 is the Jacobi elliptic sine. The variation of the function within the interval [−1,1] is equiripple except for one peak which is larger than the rest. The position and width of this peak can be set independently. The position of the peak is given by formula_27 where formula_28 is the Jacobi elliptic cosine formula_29 is the Jacobi delta amplitude formula_30 is the Jacobi zeta function formula_31 is as defined above. The height of the peak is given by formula_32 where formula_33 is the incomplete elliptic integral of the third kind formula_34 formula_35 is the position on the left limb of the peak which is the same height as the equiripple peaks. Jacobi eta function. The Jacobi eta function can be defined in terms of a Jacobi auxiliary theta function, formula_36 where, formula_37 formula_38 formula_39 Applications. The polynomials were introduced by Yegor Ivanovich Zolotarev in 1868 as a means of uniformly approximating polynomials of degree formula_40 on the interval [−1,1]. Pafnuty Chebyshev had shown in 1858 that formula_40 could be approximated in this interval with a polynomial of degree at most formula_0 with an error of formula_41. 
In 1868, Zolotarev showed that formula_42 could be approximated with a polynomial of degree at most formula_43, two degrees lower. The error in Zolotarev's method is given by formula_44 The procedure was further developed by Naum Achieser in 1956. Zolotarev polynomials are used in the design of Achieser-Zolotarev filters. They were first used in this role in 1970 by Ralph Levy in the design of microwave waveguide filters. Achieser-Zolotarev filters are similar to Chebyshev filters in that they have an equal ripple attenuation through the passband, except that the attenuation exceeds the preset ripple for the peak closest to the origin. The use of Zolotarev polynomials to synthesise the radiation patterns of linear antenna arrays was first suggested by D.A. McNamara in 1985. The work was based on the filter application, with beam angle used as the variable instead of frequency. The Zolotarev beam pattern has equal-level sidelobes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
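For the restricted parameter range where the Chebyshev form applies, the polynomial can be evaluated directly from T_n. The sketch below covers only this special case (not the general elliptic-function form); the function name is mine, and NumPy's Chebyshev class is used for T_n.

```python
import numpy as np

def zolotarev_via_chebyshev(n, sigma, x):
    """Z_n(x, sigma) = (1 + sigma)**n * T_n((x - sigma) / (1 + sigma)),
    valid for 0 <= sigma <= (1/n) * tan(pi / (2*n))**2."""
    assert 0 <= sigma <= (1 / n) * np.tan(np.pi / (2 * n)) ** 2
    T_n = np.polynomial.chebyshev.Chebyshev.basis(n)   # Chebyshev polynomial of degree n
    return (1 + sigma) ** n * T_n((x - sigma) / (1 + sigma))

x = np.linspace(-1, 1, 5)
print(zolotarev_via_chebyshev(4, 0.0, x))    # sigma = 0 reduces to the Chebyshev polynomial T_4
print(zolotarev_via_chebyshev(4, 0.04, x))   # a small nonzero sigma within the allowed range
```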
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": " Z_n(x,\\sigma) = x^n -\\sigma x^{n-1} + \\cdots + a_k x^k + \\cdots + a_0 \\ ," }, { "math_id": 3, "text": "\\sigma" }, { "math_id": 4, "text": "a_{n-1}" }, { "math_id": 5, "text": "a_k \\in \\mathbb R" }, { "math_id": 6, "text": "Z_n(x)" }, { "math_id": 7, "text": "[-1,1]" }, { "math_id": 8, "text": "T_n(x)" }, { "math_id": 9, "text": " 0 \\le \\sigma \\le \\dfrac {1}{n} \\tan^2 \\dfrac {\\pi}{2n}" }, { "math_id": 10, "text": " Z_n(x,\\sigma) = (1 + \\sigma)^n T_n \\left ( \\frac {x - \\sigma}{1 + \\sigma} \\right ) \\ ." }, { "math_id": 11, "text": "\\sigma=0" }, { "math_id": 12, "text": " Z_n(x,-\\sigma) = (-1)^n Z_n(-x,\\sigma) \\ ." }, { "math_id": 13, "text": " Z_n(x) = \\sum^n_{k=0} a_k T_k (x) \\ ." }, { "math_id": 14, "text": "q" }, { "math_id": 15, "text": "p" }, { "math_id": 16, "text": "n=p+q" }, { "math_id": 17, "text": "p=q" }, { "math_id": 18, "text": "Z_n(x|\\kappa) = \\frac {(-1)^p}{2} \\left [ \\left ( \\dfrac {H(u-v)}{H(u+v)} \\right )^n + \\left ( \\dfrac {H(u+v)}{H(u-v)} \\right )^n \\right ]" }, { "math_id": 19, "text": " u = F \\left ( \\left . \\operatorname{sn} \\left ( \\left. v \\right |\\kappa \\right ) \\sqrt { \\dfrac {1+x}{x + 2 \\operatorname{sn}^2 \\left ( \\left . v \\right | \\kappa \\right ) -1}} \\right | \\kappa \\right ) " }, { "math_id": 20, "text": " v = \\dfrac {p}{n} K(\\kappa)" }, { "math_id": 21, "text": " H(\\varphi)" }, { "math_id": 22, "text": " F(\\varphi|\\kappa)" }, { "math_id": 23, "text": " K(\\kappa)" }, { "math_id": 24, "text": " K(\\kappa)=F \\left ( \\left . \\frac{\\pi}{2} \\right| \\kappa \\right)" }, { "math_id": 25, "text": " \\kappa" }, { "math_id": 26, "text": " \\operatorname{sn} (\\varphi|\\kappa)" }, { "math_id": 27, "text": " x_\\text {max} = 1 - 2 \\operatorname {sn}^2 (v|\\kappa) + 2 \\dfrac {\\operatorname {sn} (v|\\kappa) \\operatorname {cn} (v|\\kappa)}{\\operatorname {dn} (v|\\kappa)} Z(v|\\kappa)" }, { "math_id": 28, "text": " \\operatorname{cn} (\\varphi|\\kappa)" }, { "math_id": 29, "text": " \\operatorname{dn} (\\varphi|\\kappa)" }, { "math_id": 30, "text": " Z(\\varphi|\\kappa)" }, { "math_id": 31, "text": "v" }, { "math_id": 32, "text": "Z_n(x_\\text {max}|\\kappa) = \\cosh 2n \\bigl ( \\sigma_\\text {max} Z(v|\\kappa) - \\varPi (\\sigma_\\text {max},v|\\kappa) \\bigr )" }, { "math_id": 33, "text": " \\varPi (\\phi_1,\\phi_2|\\kappa) " }, { "math_id": 34, "text": " \\sigma_\\text {max} = F \\left ( \\left . \\sin^{-1} \\left ( \\dfrac {1}{\\kappa \\operatorname {sn} (v|\\kappa)} \\sqrt \\dfrac {x_\\text {max} - x_\\mathrm L}{x_\\text {max} + 1} \\right ) \\right | \\kappa \\right ) " }, { "math_id": 35, "text": " x_\\mathrm L " }, { "math_id": 36, "text": " H(\\varphi|\\kappa) = \\theta_1 (a|b)" }, { "math_id": 37, "text": " a = \\frac {\\pi \\varphi}{2K'(\\kappa)} " }, { "math_id": 38, "text": " b = \\exp \\left ( - \\frac {\\pi K'(\\kappa)}{K(\\kappa)} \\right ) " }, { "math_id": 39, "text": " K'(\\kappa) = K(\\sqrt{1 - \\kappa^2}) \\ ." }, { "math_id": 40, "text": "x^{n+1}" }, { "math_id": 41, "text": "2^{-n}" }, { "math_id": 42, "text": "x^{n+1} - \\sigma x^n" }, { "math_id": 43, "text": "n-1" }, { "math_id": 44, "text": " 2^{-n} \\left ( \\dfrac {1 + \\sigma}{1+n} \\right )^{n+1} \\ ." }, { "math_id": 45, "text": "x^n" } ]
https://en.wikipedia.org/wiki?curid=64761242
64772763
Peter Schneider (mathematician)
German mathematician Peter Bernd Schneider (born 9 January 1953 in Karlsruhe) is a German mathematician, specializing in the "p"-adic aspects of algebraic number theory, arithmetic algebraic geometry, and representation theory. Education and career. Peter Schneider studied mathematics in Karlsruhe and Erlangen. After his "Diplom" in 1977 from the University of Erlangen-Nuremberg, he was an assistant from 1977 to 1983 at the University of Regensburg. There he received his PhD in 1980 under advisor Jürgen Neukirch, with the dissertation "Die Galoiscohomologie formula_0-adischer Darstellungen über Zahlkörpern" (The Galois cohomology of formula_0-adic representations of number fields). Schneider habilitated in 1982 at the University of Regensburg. He was a postdoc at Harvard University for the academic year 1983–1984 and a C2-professor at Heidelberg University for the academic year 1984–1985. He was a C4-professor at the University of Cologne from 1985 to 1994 and has been a C4-professor at the University of Münster since 1994. His research includes Iwasawa theory, special values of formula_1-functions, and formula_0-adic representations (in the latter subject he has collaborated extensively with Jeremy Teitelbaum). In 1992 Schneider, together with Christopher Deninger, Michael Rapoport and Thomas Zink, received the Gottfried Wilhelm Leibniz Prize for their work in using arithmetic-algebraic geometry to solve Diophantine equations. In 2006 he was an invited speaker at the International Congress of Mathematicians in Madrid, with the talk "Continuous representation theory of p-adic Lie groups". In 2016 he was elected a member of the German National Academy of Sciences Leopoldina and the Academia Europaea.
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "L" } ]
https://en.wikipedia.org/wiki?curid=64772763
647729
Ambiguous grammar
Type of a context-free grammar In computer science, an ambiguous grammar is a context-free grammar for which there exists a string that can have more than one leftmost derivation or parse tree. Every non-empty context-free language admits an ambiguous grammar by introducing e.g. a duplicate rule. A language that only admits ambiguous grammars is called an inherently ambiguous language. Deterministic context-free grammars are always unambiguous, and are an important subclass of unambiguous grammars; there are non-deterministic unambiguous grammars, however. For computer programming languages, the reference grammar is often ambiguous, due to issues such as the dangling else problem. If present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so the overall phrase grammar is unambiguous. Some parsing algorithms (such as Earley or GLR parsers) can generate sets of parse trees (or "parse forests") from strings that are syntactically ambiguous. Examples. Trivial language. The simplest example is the following ambiguous grammar (with start symbol A) for the trivial language that consists of only the empty string: A → A | ε …meaning that the nonterminal A can be derived to either itself again, or to the empty string. Thus the empty string has leftmost derivations of length 1, 2, 3, and indeed of any length, depending on how many times the rule A → A is used. This language also has an unambiguous grammar, consisting of a single production rule: A → ε …meaning that the unique production can produce only the empty string, which is the unique string in the language. In the same way, any grammar for a non-empty language can be made ambiguous by adding duplicates. Unary string. The regular language of unary strings of a given character, say codice_0 (the regular expression codice_1), has the unambiguous grammar: A → aA | ε …but also has the ambiguous grammar: A → aA | Aa | ε These correspond to producing a right-associative tree (for the unambiguous grammar) or allowing both left- and right- association. This is elaborated below. Addition and subtraction. The context free grammar A → A + A | A − A | a is ambiguous since there are two leftmost derivations for the string a + a + a: As another example, the grammar is ambiguous since there are two parse trees for the string a + a − a: The language that it generates, however, is not inherently ambiguous; the following is a non-ambiguous grammar generating the same language: A → A + a | A − a | a Dangling else. A common example of ambiguity in computer programming languages is the dangling else problem. In many languages, the codice_2 in an If–then(–else) statement is optional, which results in nested conditionals having multiple ways of being recognized in terms of the context-free grammar. Concretely, in many languages one may write conditionals in two valid forms: the if-then form, and the if-then-else form – in effect, making the else clause optional: In a grammar containing the rules Statement → if Condition then Statement | if Condition then Statement else Statement | Condition → ... some ambiguous phrase structures can appear. The expression if a then if b then s else s2 can be parsed as either if a then begin if b then s end else s2 or as if a then begin if b then s else s2 end depending on whether the codice_2 is associated with the first codice_4 or second codice_4. This is resolved in various ways in different languages. 
Sometimes the grammar is modified so that it is unambiguous, such as by requiring an codice_6 statement or making codice_2 mandatory. In other cases the grammar is left ambiguous, but the ambiguity is resolved by making the overall phrase grammar context-sensitive, such as by associating an codice_2 with the nearest codice_4. In this latter case the overall phrase grammar is unambiguous, but the underlying context-free grammar is ambiguous. An unambiguous grammar with multiple derivations. The existence of multiple derivations of the same string does not suffice to indicate that the grammar is ambiguous; only multiple "leftmost" derivations (or, equivalently, multiple parse trees) indicate ambiguity. For example, the simple grammar S → A + A, A → 0 | 1 is an unambiguous grammar for the language { 0+0, 0+1, 1+0, 1+1 }. While each of these four strings has only one leftmost derivation, it has two different derivations, for example S ⇒ A + A ⇒ 0 + A ⇒ 0 + 0 and S ⇒ A + A ⇒ A + 0 ⇒ 0 + 0. Only the former derivation is a leftmost one. Recognizing ambiguous grammars. The decision problem of whether an arbitrary grammar is ambiguous is undecidable because it can be shown that it is equivalent to the Post correspondence problem. Nonetheless, there are tools implementing semi-decision procedures for detecting ambiguity of context-free grammars. The efficiency of parsing a context-free grammar is determined by the automaton that accepts it. Deterministic context-free grammars are accepted by deterministic pushdown automata and can be parsed in linear time, for example by an LR parser. They are a strict subset of the context-free grammars, which are accepted by pushdown automata and can be parsed in polynomial time, for example by the CYK algorithm. Unambiguous context-free grammars can be nondeterministic. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the unambiguous context-free grammar S → 0S0 | 1S1 | ε. An arbitrary string of this language cannot be parsed without reading all its symbols first, which means that a pushdown automaton has to try alternative state transitions to accommodate the different possible lengths of a semi-parsed string. Nevertheless, removing grammar ambiguity may produce a deterministic context-free grammar and thus allow for more efficient parsing. Compiler generators such as YACC include features for resolving some kinds of ambiguity, such as by using precedence and associativity constraints. Inherently ambiguous languages. While some context-free languages (the set of strings that can be generated by a grammar) have both ambiguous and unambiguous grammars, there exist context-free languages for which no unambiguous context-free grammar can exist. Such languages are called inherently ambiguous. There are no inherently ambiguous regular languages. The existence of inherently ambiguous context-free languages was proven with Parikh's theorem in 1961 by Rohit Parikh in an MIT research report. The language formula_0 is inherently ambiguous. Ogden's lemma can be used to prove that certain context-free languages, such as formula_1, are inherently ambiguous. See this page for a proof. The union of formula_2 with formula_3 is inherently ambiguous. This set is context-free, since the union of two context-free languages is always context-free. However, it can be proven that no context-free grammar for this union language can unambiguously parse strings of the form formula_4.
More examples, and a general review of techniques for proving inherent ambiguity of context-free languages, are given by Bassino and Nicaud (2011). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
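Ambiguity of a grammar shows up concretely as more than one parse tree for some string, which a small brute-force counter can witness for the example grammars given earlier in this article. The sketch below is illustrative only: the function names, the grammar encoding and the restriction to grammars without ε-productions or unit-production cycles are assumptions of this sketch, not part of any standard tool. It reports two parse trees for a + a + a under A → A + A | A − A | a and exactly one under the unambiguous grammar A → A + a | A − a | a.

```python
from functools import lru_cache

def count_parse_trees(grammar, start, s):
    """Count the distinct parse trees deriving the string s from `start`.

    `grammar` maps each nonterminal to a list of production bodies (tuples of
    symbols); symbols that are not keys of `grammar` are terminals.  Assumes
    no epsilon productions and no unit-production cycles, which holds for the
    toy grammars below.
    """
    @lru_cache(maxsize=None)
    def count(sym, i, j):
        if sym not in grammar:                      # terminal symbol
            return 1 if j - i == 1 and s[i] == sym else 0
        return sum(ways(body, i, j) for body in grammar[sym])

    @lru_cache(maxsize=None)
    def ways(body, i, j):
        if not body:
            return 1 if i == j else 0
        head, rest = body[0], body[1:]
        total = 0
        # every remaining symbol must cover at least one character
        for k in range(i + 1, j - len(rest) + 1):
            total += count(head, i, k) * ways(rest, k, j)
        return total

    return count(start, 0, len(s))

ambiguous   = {"A": [("A", "+", "A"), ("A", "-", "A"), ("a",)]}
unambiguous = {"A": [("A", "+", "a"), ("A", "-", "a"), ("a",)]}

print(count_parse_trees(ambiguous, "A", "a+a+a"))    # 2 -> ambiguous for this string
print(count_parse_trees(unambiguous, "A", "a+a+a"))  # 1
```

Enumerating such counts over all strings up to a length bound gives the kind of semi-decision procedure mentioned above: it can confirm ambiguity when a witness string is found, but it can never certify unambiguity.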
[ { "math_id": 0, "text": "\\{x | x=a^n b^m a^{n^{\\prime}} b^m \\text { or } x=a^n b^m a^n b^{m^{\\prime}}, \\text { where } n, n', m, m' \\geq 1\\}" }, { "math_id": 1, "text": "\\{a^nb^mc^m | m, n \\geq 1\\} \\cup \\{a^mb^mc^n | m, n \\geq 1\\}" }, { "math_id": 2, "text": "\\{a^n b^n c^m d^m \\mid n, m > 0\\}" }, { "math_id": 3, "text": "\\{a^n b^m c^m d^n \\mid n, m > 0\\}" }, { "math_id": 4, "text": "a^n b^n c^n d^n, (n > 0)" } ]
https://en.wikipedia.org/wiki?curid=647729
64772926
DRE-i with enhanced privacy
E2E verifiable e-voting system Direct Recording Electronic with Integrity and Enforced Privacy (DRE-ip) is an End-to-End (E2E) verifiable e-voting system without involving any tallying authorities, proposed by Siamak Shahandashti and Feng Hao in 2016. It improves a previous DRE-i system by using a real-time computation strategy and providing enhanced privacy. A touch-screen based prototype of the system was trialed in the Gateshead Civic Centre polling station on 2 May 2019 during the 2019 United Kingdom local elections with positive voter feedback. A proposal that includes DRE-ip as a solution for large-scale elections was ranked 3rd place in the 2016 Economist Cybersecurity Challenge jointly organized by The Economist and Kaspersky Lab. Protocol. The DRE-ip protocol is applicable to both onsite polling station voting and remote Internet voting implementations. In the specification below, it is described for polling station voting. The protocol consists of three stages: setup, voting and tallying. Setup. Let formula_0 and formula_1 be two large primes, where formula_2. formula_3 is a subgroup of formula_4 of prime order formula_1. Let formula_5 and formula_6 be two random generators of formula_3, whose discrete logarithm relationship is unknown. This can be realized by choosing a non-identity element in formula_3 as formula_5 and computing formula_6 based on applying a one-way hash function with the inclusion of election specific information such as the date, election title and questions as the input. All modulo operations are performed with respect to the modulus formula_0. Alternatively, the protocol can be implemented using an elliptic curve, while the protocol specification remains unchanged. Voting. For simplicity, the voting process is described for a single-candidate (Yes/No) election held in a polling station using a touch-screen DRE machine. There are standard ways to extend a single candidate election to support multiple candidates, e.g., providing a Yes/No selection for each of the candidates or using different encoded values for different candidates as described by Baudron et al. After being authenticated at a polling station, a voter obtains an authentication credential, which can be a random passcode or a smartcard. The authentication credential allows the voter to log onto a DRE machine in a private voting booth and cast a vote, but the machine does not know the voter's real identity. A voter casts a vote on a DRE machine in two steps. First, he is presented with "Yes" and "No" options for the displayed candidate on the screen. Once the voter makes a choice on the touch screen, the DRE prints the first part of the receipt, containing formula_7 where formula_8 is a unique ballot index number, formula_9 is a number chosen uniformly at random from formula_10, and formula_11 is either 1 or 0 (corresponding to "Yes" and "No" respectively). The cipher text also comes with a zero knowledge proof to prove that formula_12 and formula_13 are well-formed. This zero knowledge proof can be realized by using a technique due to Ronald Cramer, Ivan Damgård and Berry Schoenmakers (also called the CDS technique). The interactive CDS technique can be made non-interactive by applying Fiat-Shamir heuristics. In the second step, the voter has the option to either confirm or cancel the selection. 
In case of "confirm", the DRE updates the aggregated values formula_14 and formula_15 in memory as below, deletes individual values formula_9 and formula_11, and marks the ballot as "confirmed" on the receipt. formula_16. In case of "cancel", the DRE reveals formula_9 and formula_11 on the receipt, marks the ballot as "cancelled" and prompts the voter to choose again. The voter can check if the printed formula_11 matches his previous selection and raise a dispute if it does not. The voter can cancel as many ballots as he wishes but can only cast one confirmed ballot. The cancelling option allows the voter to verify if the data printed on the receipt during the first step corresponds to the correct encryption of the voter's choice, hence ensuring the vote is "cast as intended". This follows the same approach of voter-initiated auditing as proposed by Joshua Benaloh. However, in DRE-ip, voter-initiated auditing is realized without requiring the voter to understand cryptography (the voter merely needs to check whether the printed plaintext formula_11 is correct). After voting is finished, the voter leaves the voting booth with one receipt for the confirmed ballot and zero or more receipts for the cancelled ballots. The same data printed on the receipts are also published on a mirrored public election website (also known as a public bulletin board) with a digital signature to prove the data authenticity. To ensure the vote is "recorded as cast", the voter just needs to check if the same receipt has been published on the election website. Tallying. Once the election has finished, the DRE publishes the final values formula_14 and formula_15 on the election website, in addition to all the receipts. Anyone will be able to verify the tallying integrity by checking the published audit data, in particular, whether the following two equations hold: formula_17 and formula_18. This ensures that all votes are "tallied as recorded", which together with the earlier assurance on "cast as intended" and "recorded as cast" guarantees that the entire voting process is "end-to-end verifiable". An "end-to-end verifiable" voting system is also said to be "software independent", a phrase coined by Ron Rivest. The DRE-ip system differs from other E2E verifiable voting systems in that it does not require tallying authorities, hence the election management is much simpler. Real-world trial. A touch-screen based prototype of DRE-ip was implemented and trialed in a polling station in Gateshead on 2 May 2019 during the 2019 United Kingdom local elections. During the trial, voters first voted as normal using paper ballots. Upon exiting the polling station, they were invited to participate in a voluntary trial of using a DRE-ip e-voting system for a dummy election. On average, it took each voter only 33 seconds to cast a vote on the DRE-ip system. As part of the trial, voters were asked to compare their voting experiences of using paper ballots and the DRE-ip e-voting system, and indicate which system they would prefer. Among the participating voters, 11 chose "strongly prefer paper", 9 chose "prefer paper", 16 chose "neutral", 23 chose "prefer e-voting", and 32 chose "strongly prefer e-voting". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
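A toy sketch of the DRE-ip bookkeeping and of the two public tally checks formula_17 and formula_18 follows. It is not a faithful implementation of the protocol: the group is deliberately tiny, the ballot index numbers and the zero-knowledge well-formedness proofs are omitted, and the parameter values and variable names are assumptions of this illustration only.

```python
import random

# Toy parameters: p = 2q + 1 with q prime, and g1, g2 generators of the
# order-q subgroup of quadratic residues mod p.  A real deployment uses a
# large prime-order group (or an elliptic curve) with g1, g2 chosen so that
# their discrete-log relation is unknown.
p, q = 23, 11
g1, g2 = 4, 9                     # 2^2 and 3^2 mod 23, both of order 11

votes = [1, 0, 1, 1, 0]           # v_i of five confirmed ballots (1 = Yes)
receipts, s, t = [], 0, 0
for v in votes:
    r = random.randrange(1, q)    # per-ballot randomness r_i in [1, q-1]
    R = pow(g2, r, p)             # R_i = g2^(r_i)
    Z = (pow(g1, r, p) * pow(g1, v, p)) % p   # Z_i = g1^(r_i) * g1^(v_i)
    receipts.append((R, Z))
    s = (s + r) % q               # running sum of r_i, reduced mod the group order
    t += v                        # running sum of v_i

# Public verification from the bulletin-board data alone:
prod_R, prod_Z = 1, 1
for R, Z in receipts:
    prod_R = (prod_R * R) % p
    prod_Z = (prod_Z * Z) % p

assert prod_R == pow(g2, s, p)                        # product of R_i equals g2^s
assert prod_Z == (pow(g1, s, p) * pow(g1, t, p)) % p  # product of Z_i equals g1^s * g1^t
print("tally verifies; Yes votes =", t)
```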
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "q\\,|\\, p-1" }, { "math_id": 3, "text": "G_q" }, { "math_id": 4, "text": "Z_p^*" }, { "math_id": 5, "text": "g_1" }, { "math_id": 6, "text": "g_2" }, { "math_id": 7, "text": "i, R_i = g_2^{r_i}, Z_i = g_1^{r_i} g_1^{v_i}" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "r_i" }, { "math_id": 10, "text": "[1, q-1]" }, { "math_id": 11, "text": "v_i" }, { "math_id": 12, "text": "R_i" }, { "math_id": 13, "text": "Z_i" }, { "math_id": 14, "text": "t" }, { "math_id": 15, "text": "s" }, { "math_id": 16, "text": "t = \\sum v_i, s = \\sum r_i" }, { "math_id": 17, "text": "\\prod R_i = g_2^s" }, { "math_id": 18, "text": "\\prod Z_i = g_1^s g_1^t" } ]
https://en.wikipedia.org/wiki?curid=64772926
64773022
Hongqi E-HS9
The Hongqi E-HS9 () is an electric full-size luxury SUV made by Hongqi. Overview. Originally previewed by the Hongqi E115 Concept during the 2019 International Motor Show Germany (IAA) and the 2019 Guangzhou Auto Show, the production Hongqi E-HS9 was first shown at the 2020 Beijing Auto Show. The production Hongqi E-HS9 is a 5-door, 7-seat vehicle, and costs $80,000 to $110,000. The E-HS9 features a dual-color design and comes with 22-inch wheels. It has dimensions of 5209 mm/2010 mm/1731 mm, with a 3110 mm wheelbase. The weight is , and the drag coefficient formula_0 is 0.345. Technology. The Hongqi E-HS9 is equipped with an intelligent sensor steering wheel and six smart screens, capable of functions such as AR real-scene navigation and remote vehicle control by mobile phone, including unlocking, temperature regulation, smart voice control, and vehicle locating. The Hongqi E-HS9 is equipped with the L3+ autonomous driving system and OTA updates. Performance. The E-HS9 is available in two different performance variants. The lower-spec model features one electric motor for each axle rated at each, with combined. The top-trim model features a motor for the rear axle, with a combined power of . The acceleration of the seven-passenger SUV from is within 5 seconds. According to Hongqi, the E-HS9 can travel approximately on a charge. Battery and charging. The E-HS9 has a 92.5-kilowatt-hour battery unit with a 108 kW CCS plug. The car supports wireless charging technology, or non-contact charging, which can fully charge the vehicle in 8.4 hours. The E-HS9 battery is specially designed with a fully covered side battery protection structure. In terms of endurance, the NEDC range of the Hongqi E-HS9 can reach up to . The vehicle can autonomously park itself and adjust its air suspension height for the best wireless charging alignment, with charging efficiency of up to 91%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c_\\mathrm{d}" } ]
https://en.wikipedia.org/wiki?curid=64773022
647758
Kummer surface
Irreducible nodal surface In algebraic geometry, a Kummer quartic surface, first studied by Ernst Kummer (1864), is an irreducible nodal surface of degree 4 in formula_0 with the maximal possible number of 16 double points. Any such surface is the Kummer variety of the Jacobian variety of a smooth hyperelliptic curve of genus 2; i.e. a quotient of the Jacobian by the Kummer involution "x" ↦ −"x". The Kummer involution has 16 fixed points: the 16 2-torsion points of the Jacobian, and they are the 16 singular points of the quartic surface. Resolving the 16 double points of the quotient of a (possibly nonalgebraic) torus by the Kummer involution gives a K3 surface with 16 disjoint rational curves; these K3 surfaces are also sometimes called Kummer surfaces. Other surfaces closely related to Kummer surfaces include Weddle surfaces, wave surfaces, and tetrahedroids. Geometry. Singular quartic surfaces and the double plane model. Let formula_1 be a quartic surface with an ordinary double point "p", near which "K" looks like a quadratic cone. Any projective line through "p" then meets "K" with multiplicity two at "p", and will therefore meet the quartic "K" in just two other points. Identifying the lines in formula_0 through the point "p" with formula_2, we get a double cover from the blow up of "K" at "p" to formula_2; this double cover is given by sending "q" ≠ "p" ↦ formula_3, and any line in the tangent cone of "p" in "K" to itself. The ramification locus of the double cover is a plane curve "C" of degree 6, and all the nodes of "K" which are not "p" map to nodes of "C". By the genus degree formula, the maximal possible number of nodes on a sextic curve is obtained when the curve is a union of formula_4 lines, in which case the pairwise intersections of the lines give 15 nodes. Hence the maximal number of nodes on a quartic is 16, and in this case they are all simple nodes (to show that formula_5 is simple, project from another node). A quartic which attains these 16 nodes is called a Kummer quartic, and we will concentrate on them below. Since formula_5 is a simple node, the tangent cone to this point is mapped to a conic under the double cover. This conic is in fact tangent to the six lines (stated here without proof). Conversely, given a configuration of a conic and six lines which are tangent to it in the plane, we may define the double cover of the plane ramified over the union of these 6 lines. This double cover may be mapped to formula_0, under a map which blows down the double cover of the special conic, and is an isomorphism elsewhere (again without proof). The double plane and Kummer varieties of Jacobians. Starting from a smooth curve formula_6 of genus 2, we may identify the Jacobian formula_7 with formula_8 under the map formula_9. We now observe two facts: Since formula_6 is a hyperelliptic curve, the map from the symmetric product formula_10 to formula_11, defined by formula_12, is the blow down of the graph of the hyperelliptic involution to the canonical divisor class. Moreover, the canonical map formula_13 is a double cover. Hence we get a double cover formula_14. This double cover is the one which already appeared above: the 6 lines are the images of the odd symmetric theta divisors on formula_7, while the conic is the image of the blown-up 0. The conic is isomorphic to the canonical system via the isomorphism formula_15, and each of the six lines is naturally isomorphic to the dual canonical system formula_16 via the identification of theta divisors and translates of the curve formula_6.
There is a 1-1 correspondence between pairs of odd symmetric theta divisors and 2-torsion points on the Jacobian given by the fact that formula_17, where formula_18 are Weierstrass points (which are the odd theta characteristics in genus 2). Hence the branch points of the canonical map formula_19 appear on each of these copies of the canonical system as the intersection points of the lines and the tangency points of the lines and the conic. Finally, since we know that every Kummer quartic is a Kummer variety of a Jacobian of a hyperelliptic curve, we show how to reconstruct the Kummer quartic surface directly from the Jacobian of a genus 2 curve: the Jacobian of formula_6 maps to the complete linear system formula_20 (see the article on Abelian varieties). This map factors through the Kummer variety as a degree 4 map which has 16 nodes at the images of the 2-torsion points on formula_7. Level 2 structure. Kummer's formula_29 configuration. There are several crucial points which relate the geometric, algebraic, and combinatorial aspects of the configuration of the nodes of the Kummer quartic: Hence we have a configuration of formula_28 conics in formula_0, where each contains 6 nodes, and such that the intersection of each two is along 2 nodes. This configuration is called the formula_29 configuration or the Kummer configuration. Weil pairing. The 2-torsion points on an Abelian variety admit a symplectic bilinear form called the Weil pairing. In the case of Jacobians of curves of genus two, every nontrivial 2-torsion point is uniquely expressed as a difference between two of the six Weierstrass points of the curve. The Weil pairing is given in this case by formula_30. One can recover a lot of the group theoretic invariants of the group formula_31 via the geometry of the formula_29 configuration. Group theory, algebra and geometry. Below is a list of group theoretic invariants and their geometric incarnation in the formula_29 configuration. References. "This article incorporates material from the Citizendium article "", which is licensed under the but not under the ."
[ { "math_id": 0, "text": "\\mathbb{P}^3" }, { "math_id": 1, "text": "K\\subset\\mathbb{P}^3 " }, { "math_id": 2, "text": "\\mathbb{P}^2" }, { "math_id": 3, "text": "\\scriptstyle\\overline{pq}" }, { "math_id": 4, "text": "6" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "C" }, { "math_id": 7, "text": "Jac(C)" }, { "math_id": 8, "text": "Pic^2(C)" }, { "math_id": 9, "text": "x\\mapsto x+K_C" }, { "math_id": 10, "text": "Sym^2 C" }, { "math_id": 11, "text": "Pic^2 C" }, { "math_id": 12, "text": "\\{p,q\\}\\mapsto p+q" }, { "math_id": 13, "text": "C\\to|K_C|^*" }, { "math_id": 14, "text": "Kum(C)\\to Sym^2|K_C|^*" }, { "math_id": 15, "text": "T_0 Jac(C)\\cong |K_C|^*" }, { "math_id": 16, "text": "|K_C|^*" }, { "math_id": 17, "text": "(\\Theta+w_1)\\cap(\\Theta+w_2)=\\{w_1-w_2,0\\}" }, { "math_id": 18, "text": "w_1,w_2" }, { "math_id": 19, "text": "C\\mapsto |K_C|^*" }, { "math_id": 20, "text": "|O_{Jac(C)}(2\\Theta_C)|\\cong\\mathbb{P}^{2^2-1}" }, { "math_id": 21, "text": "\\{q-w|q\\in C\\}" }, { "math_id": 22, "text": "w'-w" }, { "math_id": 23, "text": "w'" }, { "math_id": 24, "text": "w,w'" }, { "math_id": 25, "text": "0" }, { "math_id": 26, "text": "w-w'" }, { "math_id": 27, "text": "|2\\Theta_C|" }, { "math_id": 28, "text": "16" }, { "math_id": 29, "text": "16_6" }, { "math_id": 30, "text": "\\langle p_1-p_2,p_3-p_4\\rangle=\\#\\{p_1,p_2\\}\\cap\\{p_3,p_4\\}" }, { "math_id": 31, "text": "Sp_4(2)" } ]
https://en.wikipedia.org/wiki?curid=647758
6477751
Literacy in India
Literacy in India is a key to socio-economic progress. The 2011 census indicated a 2001–2011 literacy growth of 97.2%, which is slower than the growth seen during the previous decade. An old analytical 1990 study estimated that it would take until 2060 for India to achieve universal literacy at the then-current rate of progress. The census of India pegged the average literacy rate to be 73% in 2011, while the National Statistical Commission surveyed literacy to be 77.7% in 2017–18. The literacy rate in urban areas was 87.7%, higher than in rural areas with 73.5%. There is a wide gender disparity in the literacy rate in India, and the effective literacy rate (age 7 and above) was 84.7% for men and 70.3% for women. The low female literacy rate has a dramatically negative impact on family planning and population stabilisation efforts in India. Studies have indicated that female literacy is a strong predictor of the use of contraception among married Indian couples, even when women do not otherwise have economic independence. The census provided a positive indication that growth in female literacy rates (11.8%) was substantially faster than in male literacy rates (6.9%) in the 2001–2011 decadal period, which means the gender gap appears to be narrowing. "Literacy involves a continuum of learning enabling individuals to achieve their goals, to develop their knowledge and potential, and to participate fully in their community and wider society." The National Literacy Mission defines literacy as acquiring the skills of reading, writing and arithmetic and the ability to apply them to one's day-to-day life. The achievement of functional literacy implies (i) self-reliance in the 3 Rs, (ii) awareness of the causes of deprivation and the ability to move towards amelioration of their condition by participating in the process of development, (iii) acquiring skills to improve economic status and general well-being, and (iv) imbibing values such as national integration, conservation of the environment, women's equality, and observance of small family norms. Literacy rate in India. The working definition of literacy in the Indian census since 1991 is as follows: Literacy rate: also called the "effective literacy rate"; the total percentage of the population of an area at a particular time aged seven years or above who can read and write with understanding. Here the denominator is the population aged seven years or more. formula_0 Crude literacy rate is the total percentage of the people of an area at a particular time who can read and write with understanding, taking the total population of the area (including those below seven years of age) as the denominator. formula_1 Source: the report on 'Household Social Consumption: Education in India' as part of the 75th round of the National Sample Survey, from July 2017 to June 2018. Other than Assam, no other state from the Northeast was included in the survey. Regional literacy comparison. The table below shows the adult and youth literacy rates for India and some neighboring countries as compiled by UNESCO in 2015. The adult literacy rate is based on the 15+ years age group, while the youth literacy rate is for the 15–24 years age group (i.e. youth is a subset of adults). Literacy rate disparity. One of the main factors contributing to this relatively low literacy rate is the usefulness of education and the availability of schools in the vicinity in rural areas. There was a shortage of classrooms to accommodate all the students in 2006–2007.
In addition, there is no proper sanitation in most schools. The study of 188 government-run primary schools in central and northern India revealed that 59% of the schools had no drinking water facility and 89% no toilets. In 600,000 villages and multiplying urban slum habitats, 'free and compulsory education' is the basic literacy instruction dispensed by barely qualified 'para teachers'. The average pupil–teacher ratio for all India is 42:1, implying a teacher shortage. Such inadequacies resulted in a non-standardized school system where literacy rates may differ. Furthermore, the expenditure allocated to education was never above 4.3% of the GDP from 1951 to 2002, despite the target of 6% set by the Kothari Commission. This further complicates the literacy problem in India. Severe caste disparities also exist. Discrimination against lower castes has resulted in high dropout rates and low enrollment rates. The National Sample Survey Organisation and the National Family Health Survey collected data in India on the percentage of children completing primary school, which was reported to be only 36.8% and 37.7% respectively. On 21 February 2005, the Prime Minister of India said that he was pained to note that "only 47 out of 100 children enrolled in class I reach class VIII, putting the dropout rate at 52.78 percent." It is estimated that at least 35 million, and possibly as many as 60 million, children aged 6–14 years are not in school. The large proportion of illiterate females is another reason for the low literacy rate in India. Inequality based on gender differences resulted in female literacy rates being lower, at 65.46%, than that of their male counterparts, at 82.14%. Due to strong stereotyping of female and male roles, sons are thought to be more useful and hence are educated. Females are pulled in to help out on agricultural farms at home, as they are increasingly replacing the males in such activities, which require no formal education. Fewer than 2% of girls who engaged in agricultural work attended school. History and progress. Pre-colonial period. Prior to the colonial era, education in India typically occurred under the supervision of a guru in traditional schools called gurukulas. The gurukulas were supported by public donations and were one of the earliest forms of public school offices. According to the historian Dharampal, based on his analysis of British documents from the early 1800s, pre-colonial education in India was widespread and fairly accessible: while attendance was much lower for girls than boys, children of all castes (including Shudra and "other castes") and social strata attended the formal, out-of-home education. Dharampal notes that senior British officials, such as Thomas Munro – who reported that the Hindu temple or mosque of each village had a school attached to it and the children of all communities attended these schools – surveyed the number and types of indigenous Indian educational institutions still operating in the early nineteenth century, the numbers and status of students attending, and the instruction given. In 1821, one such official, G. L. Prendergast of the Bombay Presidency Governor's Council, stated: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; William Adam, missionary and later journalist, reported in 1830 that there were around one hundred thousand schools in Bengal and Bihar. British period.
In the colonial era, the community-funded gurukul system and temple-based charity education began to decline as the centrally funded institutions promoted by the British colonial administration gradually took over. Between 1881 and 1947, the number of English-language primary schools grew from 82,916 to 134,866 and the number of students attending those institutions grew from 2,061,541 to 10,525,943. Literacy rates among the Indian public, as recorded, rose from an estimated 3.2 per cent in 1872 to 16.1 per cent in 1941. In 1944, the British colonial administration presented a plan, called the Sargent Scheme, for the educational reconstruction of India, with a goal of producing 100% literacy in the country within 40 years, i.e. by 1984. Although the 40-year time-frame was derided at the time by leaders of the Indian independence movement as being too long a period to achieve universal literacy, India had only just crossed the 74% level by the 2011 census. The British Indian censuses identify significant differences in literacy rates by sex, religion, caste and state of residence, an example of which may be seen in the table below. Post-independence. The provision of universal and compulsory education for all children in the age group of 6–14 was a cherished national ideal and had been given overriding priority by incorporation as a Directive Policy in Article 45 of the Constitution, but it is still to be achieved more than half a century after the Constitution was adopted in 1949. Parliament passed the Constitution 86th Amendment Act, 2002, to make elementary education a Fundamental Right for children in the age group of 6–14 years. In order to provide more funds for education, an education cess of 2 percent has been imposed on all direct and indirect central taxes through the Finance (No. 2) Act, 2004. In 2000–01, there were 60,840 pre-primary and pre-basic schools, and 664,041 primary and junior basic schools. Total enrolment at the primary level increased from 19,200,000 in 1950–51 to 109,800,000 in 2001–02. The number of high schools in 2000–01 was higher than the number of primary schools at the time of independence. The literacy rate grew from 18.33 percent in 1951 to 74.04 percent in 2011. During the same period, the population grew from 361 million to 1,210 million. Growth and variation. Every census since 1880 had indicated rising literacy in the country, but the population growth rate had been high enough that the absolute number of illiterate people rose with every decade. The 2001–2011 decade is the second census period (after the 1991–2001 census period) in which the absolute number of illiterate Indians declined (by 31,196,847 people), indicating that the literacy growth rate is now outstripping the population growth rate. India's literacy rate is at 75%. Kerala has achieved a literacy rate of 93%. Bihar is the least literate state in India, with a literacy rate of 61.8%. Several other social indicators of the two states are correlated with these rates, such as life expectancy at birth (71.61 for males and 75 for females in Kerala, 65.66 for males and 64.79 for females in Bihar), infant mortality per 1,000 live births (10 in Kerala, 61 in Bihar), birth rate per 1,000 people (16.9 in Kerala, 30.9 in Bihar) and death rate per 1,000 people (6.4 in Kerala, 7.9 in Bihar).
Six Indian states account for about 60% of all illiterates in India: Uttar Pradesh, Bihar, Madhya Pradesh, Rajasthan, and Andhra Pradesh (including Telangana). Slightly less than half of all Indian illiterates (48.12%) are in the six states of Uttar Pradesh, Bihar, Rajasthan, Madhya Pradesh, Jharkhand and Chhattisgarh. State literacy programmes. Several states in India have executed successful programs to boost literacy rates. Over time, a set of factors has emerged as being key to success: the official will to succeed, deliberate steps to engage the community in administering the program, adequate funding for infrastructure and teachers, and provisioning additional services which are considered valuable by the community (such as free school lunches). Bihar. Bihar has significantly raised its literacy rate as per the 2011 census. The literacy rate in 1951 was only 13.49%; it was 21.95% in 1961, 23.17% in 1971 and 32.32% in 1981. The literacy rate then rose from 39% in 1991 to 47% in 2001 and to 63.8% in 2011. The Government of Bihar has launched several programs to boost literacy, and its Department of Adult Education won a UNESCO award in 1981. Extensive impoverishment, entrenched hierarchical social divisions and the lack of correlation between educational attainment and job opportunities are often cited in studies of the hurdles literacy programs face in Bihar. Often, children receiving an education in Bihar face significant challenges due to the region's socio-cultural influences and economic factors. Children from "lower castes" are frequently denied school attendance and harassed when they do attend. In areas where there is discrimination, poor funding and impoverished families mean that children often cannot afford textbooks and stationery. When children do get educated, the general lack of economic progress in the state means that government jobs are the only alternative to farming labor, yet these jobs, in practice, require bribes to secure – which poorer families cannot afford. This leads to educated youths working on the farms, much as uneducated ones do, and leads parents to question the investment of sending children to school in the first place. Bihar's government schools have also faced teacher absenteeism, leading the state government to threaten to withhold the salaries of teachers who failed to conduct classes on a regular basis. To incentivize students to attend, the government announced a Rupee 1 per school-day grant to poor children who show up at school. Tripura. Tripura has the third highest literacy rate in India. According to the 2011 census, the literacy level was 93.91 percent in Kerala and 91.58 percent in Mizoram, among the most literate states in the country. The national literacy rate, according to the 2011 census, was 74.04 percent. The Tripura success story is attributed to the involvement of local government bodies, including gram panchayats, NGOs and local clubs, under the close supervision of the State Literacy Mission Authority (SLMA) headed by the chief minister. Tripura attained 87.75 percent literacy in the 2011 census, rising from the 12th position in the 2001 census to the 4th position in the 2011 census. The Tripura Chief Minister said that efforts were underway to make the remaining 5.35 percent of people literate and achieve complete success in a state of about 3.8 million people. The programs were not implemented just to make the state literate but as long-term education programs to ensure all citizens have a certain basic minimum level of education.
Tripura has 45 blocks and 23 subdivisions that are served by 68 government-run schools and 30–40 private schools. Among the projects implemented by the state government to increase literacy in the state, the holistic education system, implemented with equal interest in Agartala, remote areas and the tribal autonomous areas, makes sure that people in Tripura do not just become literate but educated, officials emphasized. One pointer to the government's interest in education is the near-total absence of child labor in Tripura. Kerala. Kerala topped the Education Development Index (EDI) among 21 major states in India in the year 2006–2007. More than 94% of the rural population has access to a primary school within 1 km, while 98% of the population benefits from a school within a distance of 2 km. An upper primary school within a distance of 3 km is available to more than 96% of the people, and 98% have the facility for secondary education within 8 km. The access of rural students to higher educational institutions in cities is facilitated by widely subsidized transport fares. Kerala's educational system has been developed by institutions owned or aided by the government. In the educational system prevailing in the state, schooling lasts 10 years, subdivided into lower primary, upper primary and high school. After 10 years of secondary schooling, students typically enroll in Higher Secondary Schooling in one of the three major streams: liberal arts, commerce or science. Upon completing the required coursework, students can enroll in general or professional undergraduate programs. Kerala launched a "campaign for total literacy" in Ernakulam district in the late 1980s, with a "fusion between the district administration headed by its collector on one side and, on the other side, voluntary groups, social activists and others". On 4 February 1990, the Government of Kerala endeavoured to replicate the initiative on a statewide level, launching the Kerala State Literacy Campaign. First, households were surveyed with door-to-door, multistage survey visits to form an accurate picture of the literacy landscape and areas that needed special focus. Then, "Kala Jāthas" (cultural troupes) and "Sāksharata Pada Yātras" (Literacy Foot Marches) were organized to generate awareness of the campaign and create a receptive social atmosphere for the program. An integrated management system was created involving state officials, prominent social figures, local officials and senior voluntary workers to oversee the execution of the campaign. Himachal. Himachal Pradesh underwent a "Schooling Revolution" in the 1961–2001 period that has been called "even more impressive than Kerala's." Kerala has led the nation in literacy rates since the 19th century and has seen sustained initiatives for over 150 years, whereas Himachal Pradesh's literacy rate in 1961 was below the national average in every age group. Over the three decades from 1961 to 1991, female literacy in the 15–19 years age group went from 11% to 86%. School attendance for both boys and girls in the 6–14-year age group stood at over 97% each, when measured in the 1998–99 school year. Mizoram. Mizoram is the second most literate state in India (91.58 percent), with Serchhip and Aizawl districts being the two most literate districts in India (literacy rates of 98.76% and 98.50%), both in Mizoram. Mizoram's literacy rate rose rapidly after independence: from 31.14% in 1951 to 88.80% in 2001.
As in Himachal Pradesh, Mizoram has a social structure that is relatively free of hierarchy, and strong official intent to produce total literacy. The government identified illiterates and organized an administrative structure that engaged officials and community leaders and was staffed by "animators" who were responsible for teaching five illiterates each. Mizoram established 360 continuing education centers to handle continued education beyond the initial literacy teaching and to provide an educational safety net for school drop-outs. Tamil Nadu. Tamil Nadu was one of the pioneers of the scheme, which started providing cooked meals to children in corporation schools in Madras city in 1923. The programme was introduced on a large scale in the 1960s under the chief ministership of K. Kamaraj. The first major thrust came in 1982 when the Chief Minister of Tamil Nadu, Dr. M. G. Ramachandran, decided to universalize the scheme for all children up to class 10. Tamil Nadu's midday meal programme is among the best-known in the country. Starting in 1982, Tamil Nadu took an approach to promote literacy based on free lunches for schoolchildren, "ignoring cynics who said it was an electoral gimmick and economists who said it made little fiscal sense." The then chief minister of Tamil Nadu, MGR, launched the program, which resembled a similar initiative in 19th century Japan, because "he had experienced as a child what it was like to go hungry to school with the family having no money to buy food". Eventually, the programme covered all children under the age of 15, as well as pregnant women for the first four months of their pregnancy. Tamil Nadu's literacy rate rose from 54.4% in 1981 to 80.3% in 2011. In 2001, the Supreme Court of India instructed all state governments to implement free school lunches in all government-funded schools, but implementation has been patchy due to corruption and social issues. Despite these hurdles, 120 million children receive free lunches in Indian schools every day, making it the largest school meal programme in the world. Rajasthan. Although the decadal rise from 2001 to 2011 was only 6.7% (60.4% in 2001 to 67.7% in 2011), Rajasthan had the biggest percentage decadal (1991–2001) increase in literacy of all Indian states, from about 38% to about 61%, a leapfrog that has been termed "spectacular" by some observers. Aggressive state government action, in the form of the District Primary Education Programme, the Shiksha Karmi initiative and the Lok Jumbish programme, is credited with the rapid improvement. Virtually every village in Rajasthan now has primary school coverage. When statehood was granted to Rajasthan in 1956, it was the least literate state in India, with a literacy rate of 18%. Literacy promotion. The right to education is a fundamental right, and UNESCO aimed at education for all by 2015. India, along with the Arab states and sub-Saharan Africa, has a literacy level below the threshold level of 75%, but efforts are ongoing to achieve that level. The campaign to achieve at least the threshold literacy level represents the largest ever civil and military mobilization in the country. International Literacy Day is celebrated each year on 8 September with the aim of highlighting the importance of literacy to individuals, communities and societies. Government efforts. Financial regulators in India such as the RBI, SEBI, IRDAI and PFRDA have created a joint charter called the National Strategy For Financial Education (NSFE), detailing initiatives taken by them for financial literacy in India.
Also, other market participants like banks, stock exchanges, broking houses, mutual funds, and insurance companies are actively involved in it. The National Centre For Financial Education (NCFE) in consultation with relevant financial sector regulators and stakeholders has prepared the revised NSFE(2020–2025) National Literacy Mission. The "National Literacy Mission", launched in 1988, aimed at attaining a literacy rate of 75 percent by 2007. Its charter is to impart functional literacy to non-literates in the age group of 35–75 years. The "Total Literacy Campaign" is their principal strategy for the eradication of illiteracy. The "Continuing Education Scheme" provides a learning continuum to the efforts of the Total Literacy and Post Literacy programs. Sarva Shiksha Abhiyan. The "Sarva Shiksha Abhiyan" (Hindi for "Total Literacy Campaign") was launched in 2001 to ensure that all children in the 6–14-year age-group attend school and complete eight years of schooling by 2010. An important component of the scheme is the "Education Guarantee Scheme and Alternative and Innovative Education", meant primarily for children in areas with no formal school within a one-kilometer () radius. The centrally sponsored "District Primary Education Programme", launched in 1994, had opened more than 160,000 new schools by 2005, including almost 84,000 alternative schools. Non-governmental efforts. The bulk of Indian illiterates live in the country's rural areas, where social and economic barriers play an important role in keeping the lowest strata of society illiterate. Government programs alone, however well-intentioned, may not be able to dismantle barriers built over centuries. Major social reformation efforts are sometimes required to bring about a change in the rural scenario. Specific mention is to be made regarding the role of the People's Science Movements (PSMs) and Bharat Gyan Vigyan Samiti (BGVS) in the Literacy Mission in India during the early 1990s. Several non-governmental organisations such as Pratham, ITC, Rotary Club, Lions Club have worked to improve the literacy rate in India. Manthan Sampoorna Vikas Kendra. Manthan SVK is a holistic education programme initiated by Divya Jyoti Jagriti Sansthan under the guidance of Shri Ashutosh Maharajji. This initiative, started in 2008, has since then reached and spread education to over 5000 underprivileged children across India, with its centers spread in Delhi – NCR, Punjab and Bihar. The main aim of Manthan is to provide not just academic but also mental, physical and emotional education. Manthan has also been working for adult literacy through its Adult Literacy Centres for illiterate women. Vocational education is also given attention to, with Sewing and Stitching Centres for women. The motto of Manthan being "Saakshar Bharat, Sashakt Bharat", it has been providing quality education selflessly. Mamidipudi Venkatarangaiya Foundation. Shantha Sinha won a Magsaysay Award in 2003 in recognition of "Her guiding the people of Andhra Pradesh to end the scourge of child labor and send all of their children to school." As head of an extension programme at the University of Hyderabad in 1987, she organized a three-month-long camp to prepare children rescued from bonded labor to attend school. Later, in 1991, she guided her family's "Mamidipudi Venkatarangaiya Foundation" to take up this idea as part of its overriding mission in Andhra Pradesh. Her original transition camps grew into full-fledged residential "bridge schools." 
The foundation's aim is to create a social climate hostile to child labor, child marriage and other practices that deny children the right to a normal childhood. Today the MV Foundation's bridge schools and programs extend to 4,300 villages. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{Effective literacy rate}=\\frac{\\text{number of literate persons aged 7 or above}}{\\text{population aged 7 and above}}\\times 100" }, { "math_id": 1, "text": "\\text{Crude literacy rate}=\\frac{\\text{number of literate persons}}{\\text{total population}}\\times 100" } ]
https://en.wikipedia.org/wiki?curid=6477751
64783551
Vadym Slyusar
Soviet and Ukrainian scientist Vadym Slyusar (born 15 October 1964, vil. Kolotii, Reshetylivka Raion, Poltava region, Ukraine) is a Soviet and Ukrainian scientist, Professor, Doctor of Technical Sciences, Honored Scientist and Technician of Ukraine, and founder of the tensor-matrix theory of digital antenna arrays (DAAs), N-OFDM and other theories in the fields of radar systems, smart antennas for wireless communications and digital beamforming. Scientific results. N-OFDM theory. In 1992 Vadym Slyusar patented the first optimal demodulation method for N-OFDM signals after a Fast Fourier transform (FFT). The history of N-OFDM signal theory started from this patent. In this regard, W. Kozek and A. F. Molisch wrote in 1998, about N-OFDM signals with sub-carrier spacing formula_0, that "it is not possible to recover the information from the received signal, even in the case of an ideal channel." But in 2001 Vadym Slyusar proposed non-orthogonal frequency digital modulation (N-OFDM) as an alternative to OFDM for communication systems. Slyusar's next publication about this method, in July 2002, has priority over the conference paper of I. Darwazeh and M. R. D. Rodrigues (September 2003) regarding SEFDM. The description of the method of optimal processing of N-OFDM signals without an FFT of the ADC samples was submitted for publication by V. Slyusar in October 2003. Slyusar's N-OFDM theory inspired numerous investigations by other scientists in this area. Tensor-matrix theory of digital antenna arrays. In 1996 V. Slyusar proposed the column-wise Khatri–Rao product to estimate four coordinates of signal sources at a digital antenna array. The alternative concept of the matrix product, which uses row-wise splitting of matrices with a given quantity of rows (the Face-splitting product), was proposed by V. Slyusar in 1996 as well. After these results the tensor-matrix theory of digital antenna arrays and new matrix operations (such as the Block Face-splitting product, the Generalized Face-splitting product, the Matrix Derivative of the Face-splitting product, etc.) were developed; these operations are also used in artificial intelligence and machine learning systems to minimize convolution and tensor sketch operations, in popular Natural Language Processing models, and in hypergraph models of similarity. The Face-splitting product and its properties are also used for multidimensional smoothing with P-splines and in the generalized linear array model in statistics for two- and multidimensional approximations of data. Theory of odd-order I/Q demodulators. The theory of odd-order I/Q demodulators, which was proposed by V. Slyusar in 2014, started from his investigations of the tandem scheme of two-stage signal processing for the design of an I/Q demodulator and of the multistage I/Q demodulator concept in 2012. As a result, Slyusar "presents a new class of I/Q demodulators with odd order derived from the even order I/Q demodulator which is characterized by linear phase-frequency relation for wideband signals". Results in other fields of research. V. Slyusar produced numerous theoretical works realized in several experimental radar stations with DAAs, which were successfully tested. He investigated electrically small antennas and new constructions of such antennas, developed the theory of metamaterials, and proposed new ideas for the implementation of augmented reality and artificial intelligence in combat vehicles. V. Slyusar holds 68 patents and has 850 publications in the areas of digital antenna arrays for radars and wireless communications. Life data. 1981–1985 – student at the Orenburg Air Defense Higher Military School. During this time Slyusar's scientific career began; he published his first scientific report in 1985. June 1992 – defended the dissertation for a candidate degree (Techn. Sci.) at the Council of the Military Academy of Air Defense of the Land Forces (Kyiv). A significant stage in the recognition of Vadym Slyusar's scientific results was the defense of his dissertation for a doctoral degree (Techn. Sci.) in 2000. Professor – since 2005; Honored Scientist and Technician of Ukraine – 2008. Since 1996 – works at the Central Scientific Research Institute of Armament and Military Equipment of the Armed Forces of Ukraine (Kyiv). Military rank: Colonel. Since 2003 – participates in Ukraine–NATO cooperation as head of national delegations, a point of contact, and a national representative within expert groups of the NATO Conference of National Armaments Directors, and as a technical member of Research Task Groups (RTG) of the NATO Science and Technology Organisation (STO). Since 2009 – member of the editorial board of Izvestiya Vysshikh Uchebnykh Zavedenii. Radioelektronika. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha < 1" } ]
https://en.wikipedia.org/wiki?curid=64783551
64797434
Parameter word
In the mathematical study of combinatorics on words, a parameter word is a string over a given alphabet having some number of wildcard characters. The set of strings matching a given parameter word is called a parameter set or combinatorial cube. Parameter words can be composed, to produce smaller subcubes of a given combinatorial cube. They have applications in Ramsey theory and in computer science in the detection of duplicate code. Definitions and notation. Formally, a formula_0-parameter word of length formula_1, over a given alphabet formula_2, is a sequence of formula_1 characters, some of which may be drawn from formula_2 and the others of which are formula_0 distinct wildcard characters formula_3. Each wildcard character is required to appear at least once, but may appear multiple times, and the wildcard characters must appear in the order given by their indexes: the first wildcard character in the word must be formula_4, the next one that is different from formula_4 must be formula_5, etc. As a special case, a word over the given alphabet, without any wildcard characters, is said to be a 0-parameter word. For 1-parameter words, the subscripts may be omitted, as there is no ambiguity between different wildcard characters. The set of all formula_0-parameter words over formula_2, of length formula_1, is denoted formula_6. A formula_0-parameter word represents a set of formula_7 strings (0-parameter words), obtained by substituting a symbol of formula_2 for each wildcard character. This set of strings is called a parameter set of combinatorial cube, and formula_0 is called its dimension. A one-dimensional combinatorial cube may be called a combinatorial line. In a combinatorial cube, each copy of a particular wildcard character must have the same replacement. A generalization of parameter words allows different copies of the same wildcard character to be replaced by different characters from the alphabet, in a controlled way. If formula_2 is an alphabet and formula_8 is a group with an action on formula_2, then a formula_8-labeled parameter word is a formula_0-parameter word together with an assignment of a group element to each wildcard character in the word. The first occurrence of each wildcard character must be assigned the identity element of the group. Then, the strings represented by a labeled parameter word are obtained by choosing a character of formula_2 for each wildcard character, and substituting the result of combining that character with the group element labeling each copy of that character. The set of all formula_8-labeled formula_0-parameter words over formula_2, of length formula_1, is denoted formula_9. Example. In the game of tic-tac-toe, the cells of the game board can be given two integer coordinates formula_10 from the alphabet formula_11. Concatenating these two coordinates produces a string representing each cell, one of the nine strings formula_12 or formula_13. There are seven one-parameter words of length two over this alphabet, the words formula_14 and formula_15. The corresponding combinatorial lines form seven of the eight lines of three cells in a row of the tic-tac-toe board; for instance, the one-parameter word formula_16 corresponds to the combinatorial line formula_17, and the one-parameter word formula_15 corresponds to the combinatorial line formula_18. However, one of the eight winning lines of the tic-tac-toe game is missing from this set of combinatorial lines: the antidiagonal line formula_19. 
It is possible to obtain this line as a combinatorial line (without including any other combinations of cells that would be invalid for tic-tac-toe) by using a group with two elements, and an action in which the non-identity element swaps the alphabet letters formula_20 and formula_21 while leaving the element formula_22 in place. There are eight labeled one-parameter words of length two for this action, seven of which are obtained from the unlabeled one-parameter words by using the identity label for all wildcards. These seven have the same combinatorial lines as before. The eighth labeled word consists of the word formula_15 labeled by the identity element for its first formula_23 and the reversing non-identity element for the second formula_23; its combinatorial line is the final winning line of the tic-tac-toe board, formula_19. Composition. For three given integer parameters formula_24, it is possible to combine two parameter words, formula_25 and formula_26, to produce another parameter word formula_27. To do so, simply replace each copy of the formula_28th wildcard symbol in formula_29 by the formula_28th character in formula_30. This will necessarily produce a word of length formula_1 that uses each of the wildcard symbols in formula_30 at least once, in ascending order, so it produces a valid formula_0-parameter word of length formula_1. This notion of composition can also be extended to composition of labeled parameter words (both using the same alphabet and group action), by applying the group action to the non-wildcard substituted characters and composing the group labels for the wildcard substituted characters. A subset of a combinatorial cube is a smaller combinatorial cube if it can be obtained by a composition in this way. Combinatorial enumeration. The number of parameter words in formula_6 for an alphabet of size formula_31 is an formula_31-Stirling number of the second kind formula_32. These numbers count the number of partitions of the integers in the range formula_33 into formula_34 non-empty subsets such that the first formula_31 integers belong to distinct subsets. Partitions of this type can be placed into a bijective equivalence with the parameter words, by creating a word with a character for each of the formula_1 integers in the range formula_35, setting this character value to be either an integer in formula_36 belonging to the same subset of the partition, or a wildcard character for each subset of the partition that does not contain an integer in formula_36. The formula_31-Stirling numbers obey a simple recurrence relation by which they may easily be calculated. Applications. In Ramsey theory, parameter words and combinatorial cubes may be used to formulate the Graham–Rothschild theorem, according to which, for every finite alphabet and group action, and every combination of integer values formula_37, formula_0, and formula_31, there exists a sufficiently large number formula_1 such that if each formula_0-dimensional combinatorial cube over strings of length formula_1 is assigned one of formula_31 colors, then there exists a formula_37-dimensional combinatorial cube all of whose formula_0-dimensional subcubes have the same color. This result is a key foundation for structural Ramsey theory, and is used to define Graham's number, an enormous number used to estimate the value of formula_1 for a certain combination of values. 
In computer science, in the problem of searching for duplicate code, the source code for a given routine or module may be transformed into a parameter word by converting it into a sequence of tokens, and for each variable or subroutine name, replacing each copy of the same name with the same wildcard character. If code is duplicated, the resulting parameter words will remain equal even if some of the variables or subroutines have been renamed. More sophisticated searching algorithms can find long duplicate code sections that form substrings of larger source code repositories, by allowing the wildcard characters to be substituted for each other. An important special case of parameter words, well-studied in the combinatorics of words, is given by the partial words. These are strings with wildcard characters that may be substituted independently of each other, without requiring that some of the substituted characters be equal or controlled by a group action. In the language of parameter words, a partial word may be described as a parameter word in which each wildcard symbol appears exactly once. However, because there is no repetition of wildcard symbols, partial words may be written more simply by omitting the subscripts on the wildcard symbols. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "*_1,*_2,\\ldots, *_k" }, { "math_id": 4, "text": "*_1" }, { "math_id": 5, "text": "*_2" }, { "math_id": 6, "text": "A\\tbinom{n}{k}" }, { "math_id": 7, "text": "|A|^k" }, { "math_id": 8, "text": "G" }, { "math_id": 9, "text": "[A,G]\\tbinom{n}{k}" }, { "math_id": 10, "text": "(x,y)" }, { "math_id": 11, "text": "\\{1,2,3\\}" }, { "math_id": 12, "text": "11, 12, 13, 21, 22, 23, 31, 32," }, { "math_id": 13, "text": "33" }, { "math_id": 14, "text": "1*, 2*, 3*, *1, *2, *3," }, { "math_id": 15, "text": "**" }, { "math_id": 16, "text": "2*" }, { "math_id": 17, "text": "\\{21, 22, 23\\}" }, { "math_id": 18, "text": "\\{11, 22, 33\\}" }, { "math_id": 19, "text": "\\{13, 22, 31\\}" }, { "math_id": 20, "text": "1" }, { "math_id": 21, "text": "3" }, { "math_id": 22, "text": "2" }, { "math_id": 23, "text": "*" }, { "math_id": 24, "text": "n\\ge m\\ge k" }, { "math_id": 25, "text": "f\\in A\\tbinom{n}{m}" }, { "math_id": 26, "text": "g\\in A\\tbinom{m}{k}" }, { "math_id": 27, "text": "f\\circ g\\in A\\tbinom{n}{k}" }, { "math_id": 28, "text": "i" }, { "math_id": 29, "text": "f" }, { "math_id": 30, "text": "g" }, { "math_id": 31, "text": "r" }, { "math_id": 32, "text": "\\textstyle\\left\\{ {r+n \\atop r+k} \\right\\}_r" }, { "math_id": 33, "text": "[1,r+n]" }, { "math_id": 34, "text": "r+k" }, { "math_id": 35, "text": "[r+1,n+r]" }, { "math_id": 36, "text": "[1,r]" }, { "math_id": 37, "text": "m" } ]
https://en.wikipedia.org/wiki?curid=64797434
64799250
Algorithmic Combinatorics on Partial Words
Algorithmic Combinatorics on Partial Words is a book in the area of combinatorics on words, and more specifically on partial words. It was written by Francine Blanchet-Sadri, and published in 2008 by Chapman &amp; Hall/CRC in their Discrete Mathematics and its Applications book series. Topics. A partial word is a string whose characters may either belong to a given alphabet or be a wildcard character. Such a word can represent a set of strings over the alphabet without wildcards, by allowing each wildcard character to be replaced by any single character of the alphabet, independently of the replacements of the other wildcard characters. Two partial words are compatible when they agree on their non-wildcard characters, or equivalently when there is a string that they both match; one partial word formula_0 contains another partial word formula_1 if they are compatible and the non-wildcard positions of formula_0 contain those of formula_1; equivalently, the strings matched by formula_0 are a subset of those matched by formula_1. The book has 12 chapters, which can be grouped into five larger parts. The first part consists of two introductory chapters defining partial words, compatibility and containment, and related concepts. The second part generalizes to partial words some standard results on repetitions in strings, and the third part studies the problem of characterizing and recognizing primitive partial words, the partial words that have no repetition. Part four concerns codes defined from sets of partial words, in the sense that no two distinct concatenations of partial words from the set can be compatible with each other. A final part includes three chapters on advanced topics including the construction of repetitions of given numbers of copies of partial words that are compatible with each other, enumeration of the possible patterns of repetitions of partial words, and sets of partial words with the property that every infinite string contains a substring matching the set. Each chapter includes a set of exercises, and the end of the book provides hints to some of these exercises. Audience and reception. Although "Algorithmic Combinatorics on Partial Words" is primarily aimed at the graduate level, reviewer Miklós Bóna writes that it is for the most part "remarkably easy to read" and suggests that it could also be read by advanced undergraduates. However, Bóna criticizes the book as being too focused on the combinatorics on words as an end in itself, with no discussion of how to translate mathematical structures of other types into partial words so that the methods of this book can be applied to them. Because of this lack of generality and application, he suggests that the audience for the book is likely to consist only of other researchers specializing in this area. Similarly, although Patrice Séébold notes that this area can be motivated by applications to gene comparison, he criticizes the book as being largely a catalog of its author's own research results in partial words, without the broader thematic overview or identification of the fundamental topics and theorems that one would expect of a textbook, and suggests that a textbook that accomplishes these goals is still waiting to be written. 
However, reviewer Jan Kratochvíl is more positive, calling this "the first reference book on the theory of partial words", praising its pacing from introductory material to more advanced topics, and writing that it well supports its underlying thesis that many of the main results in the combinatorics of words without wildcards can be extended to partial words. He summarizes it as "an excellent textbook as well as a reference book for interested researchers". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=64799250
647994
E8 (mathematics)
248-dimensional exceptional simple Lie group In mathematics, E8 is any of several closely related exceptional simple Lie groups, linear algebraic groups or Lie algebras of dimension 248; the same notation is used for the corresponding root lattice, which has rank 8. The designation E8 comes from the Cartan–Killing classification of the complex simple Lie algebras, which fall into four infinite series labeled A"n", B"n", C"n", D"n", and five exceptional cases labeled G2, F4, E6, E7, and E8. The E8 algebra is the largest and most complicated of these exceptional cases. Basic description. The Lie group E8 has dimension 248. Its rank, which is the dimension of its maximal torus, is eight. Therefore, the vectors of the root system are in eight-dimensional Euclidean space: they are described explicitly later in this article. The Weyl group of E8, which is the group of symmetries of the maximal torus that are induced by conjugations in the whole group, has order 214 35 52 7 = . The compact group E8 is unique among simple compact Lie groups in that its non-trivial representation of smallest dimension is the adjoint representation (of dimension 248) acting on the Lie algebra E8 itself; it is also the unique one that has the following four properties: trivial center, compact, simply connected, and simply laced (all roots have the same length). There is a Lie algebra E"k" for every integer "k" ≥ 3. The largest value of "k" for which E"k" is finite-dimensional is "k" = 8, that is, E"k" is infinite-dimensional for any "k" &gt; 8. Real and complex forms. There is a unique complex Lie algebra of type E8, corresponding to a complex group of complex dimension 248. The complex Lie group E8 of complex dimension 248 can be considered as a simple real Lie group of real dimension 496. This is simply connected, has maximal compact subgroup the compact form (see below) of E8, and has an outer automorphism group of order 2 generated by complex conjugation. As well as the complex Lie group of type E8, there are three real forms of the Lie algebra, three real forms of the group with trivial center (two of which have non-algebraic double covers, giving two further real forms), all of real dimension 248, as follows: For a complete list of real forms of simple Lie algebras, see the list of simple Lie groups. E8 as an algebraic group. By means of a Chevalley basis for the Lie algebra, one can define E8 as a linear algebraic group over the integers and, consequently, over any commutative ring and in particular over any field: this defines the so-called split (sometimes also known as "untwisted") form of E8. Over an algebraically closed field, this is the only form; however, over other fields, there are often many other forms, or "twists" of E8, which are classified in the general framework of Galois cohomology (over a perfect field "k") by the set H1("k",Aut(E8)), which, because the Dynkin diagram of E8 (see below) has no automorphisms, coincides with H1("k",E8). Over R, the real connected component of the identity of these algebraically twisted forms of E8 coincide with the three real Lie groups mentioned above, but with a subtlety concerning the fundamental group: all forms of E8 are simply connected in the sense of algebraic geometry, meaning that they admit no non-trivial algebraic coverings; the non-compact and simply connected real Lie group forms of E8 are therefore not algebraic and admit no faithful finite-dimensional representations. 
Over finite fields, the Lang–Steinberg theorem implies that H1("k",E8)=0, meaning that E8 has no twisted forms: see below. The characters of finite dimensional representations of the real and complex Lie algebras and Lie groups are all given by the Weyl character formula. The dimensions of the smallest irreducible representations are (sequence in the OEIS): 1, 248, 3875, 27000, 30380, 147250, 779247, 1763125, 2450240, 4096000, 4881384, 6696000, 26411008, 70680000, 76271625, 79143000, 146325270, 203205000, 281545875, 301694976, 344452500, 820260000, 1094951000, 2172667860, 2275896000, 2642777280, 2903770000, 3929713760, 4076399250, 4825673125, 6899079264, 8634368000 (twice), 12692520960... The 248-dimensional representation is the adjoint representation. There are two non-isomorphic irreducible representations of dimension 8634368000 (it is not unique; however, the next integer with this property is 175898504162692612600853299200000 (sequence in the OEIS)). The fundamental representations are those with dimensions 3875, 6696000, 6899079264, 146325270, 2450240, 30380, 248 and 147250 (corresponding to the eight nodes in the Dynkin diagram in the order chosen for the Cartan matrix below, i.e., the nodes are read in the seven-node chain first, with the last node being connected to the third). The coefficients of the character formulas for infinite dimensional irreducible representations of E8 depend on some large square matrices consisting of polynomials, the Lusztig–Vogan polynomials, an analogue of Kazhdan–Lusztig polynomials introduced for reductive groups in general by George Lusztig and David Kazhdan (1983). The values at 1 of the Lusztig–Vogan polynomials give the coefficients of the matrices relating the standard representations (whose characters are easy to describe) with the irreducible representations. These matrices were computed after four years of collaboration by a group of 18 mathematicians and computer scientists, led by Jeffrey Adams, with much of the programming done by Fokko du Cloux. The most difficult case (for exceptional groups) is the split real form of E8 (see above), where the largest matrix is of size 453060×453060. The Lusztig–Vogan polynomials for all other exceptional simple groups have been known for some time; the calculation for the split form of "E"8 is far longer than any other case. The announcement of the result in March 2007 received extraordinary attention from the media (see the external links), to the surprise of the mathematicians working on it. The representations of the E8 groups over finite fields are given by Deligne–Lusztig theory. Constructions. One can construct the (compact form of the) E8 group as the automorphism group of the corresponding e8 Lie algebra. This algebra has a 120-dimensional subalgebra so(16) generated by "J""ij" as well as 128 new generators "Q""a" that transform as a Weyl–Majorana spinor of spin(16). These statements determine the commutators formula_0 as well as formula_1 while the remaining commutators (not anticommutators!) between the spinor generators are defined as formula_2 It is then possible to check that the Jacobi identity is satisfied. Geometry. The compact real form of E8 is the isometry group of the 128-dimensional exceptional compact Riemannian symmetric space EVIII (in Cartan's classification). 
It is known informally as the "octooctonionic projective plane" because it can be built using an algebra that is the tensor product of the octonions with themselves, and is also known as a Rosenfeld projective plane, though it does not obey the usual axioms of a projective plane. This can be seen systematically using a construction known as the "magic square", due to Hans Freudenthal and Jacques Tits . E8 root system. A root system of rank "r" is a particular finite configuration of vectors, called "roots", which span an "r"-dimensional Euclidean space and satisfy certain geometrical properties. In particular, the root system must be invariant under reflection through the hyperplane perpendicular to any root. The E8 root system is a rank 8 root system containing 240 root vectors spanning R8. It is irreducible in the sense that it cannot be built from root systems of smaller rank. All the root vectors in E8 have the same length. It is convenient for a number of purposes to normalize them to have length √2. These 240 vectors are the vertices of a semi-regular polytope discovered by Thorold Gosset in 1900, sometimes known as the 421 polytope. Construction. In the so-called "even coordinate system", E8 is given as the set of all vectors in R8 with length squared equal to 2 such that coordinates are either all integers or all half-integers and the sum of the coordinates is even. Explicitly, there are 112 roots with integer entries obtained from formula_3 by taking an arbitrary combination of signs and an arbitrary permutation of coordinates, and 128 roots with half-integer entries obtained from formula_4 by taking an even number of minus signs (or, equivalently, requiring that the sum of all the eight coordinates be even). There are 240 roots in all. The 112 roots with integer entries form a D8 root system. The E8 root system also contains a copy of A8 (which has 72 roots) as well as E6 and E7 (in fact, the latter two are usually "defined" as subsets of E8). In the "odd coordinate system", E8 is given by taking the roots in the even coordinate system and changing the sign of any one coordinate. The roots with integer entries are the same while those with half-integer entries have an odd number of minus signs rather than an even number. Dynkin diagram. The Dynkin diagram for E8 is given by . This diagram gives a concise visual summary of the root structure. Each node of this diagram represents a simple root. A line joining two simple roots indicates that they are at an angle of 120° to each other. Two simple roots that are not joined by a line are orthogonal. Cartan matrix. The Cartan matrix of a rank "r" root system is an "r × r" matrix whose entries are derived from the simple roots. Specifically, the entries of the Cartan matrix are given by formula_5 where ( , ) is the Euclidean inner product and "αi" are the simple roots. The entries are independent of the choice of simple roots (up to ordering). The Cartan matrix for E8 is given by formula_6 The determinant of this matrix is equal to 1. Simple roots. A set of simple roots for a root system Φ is a set of roots that form a basis for the Euclidean space spanned by Φ with the special property that each root has components with respect to this basis that are either all nonnegative or all nonpositive. 
Given the E8 Cartan matrix (above) and a Dynkin diagram node ordering of: One choice of simple roots is given by the rows of the following matrix: formula_7 With this numbering of nodes in the Dynkin diagram, the highest root in the root system has Coxeter labels (2, 3, 4, 5, 6, 4, 2, 3). Using this representation of the simple roots, the lowest root is given by formula_8 The only simple root that can be added to the lowest root to obtain another root is the one corresponding to node 1 in this labeling of the Dynkin diagram — as is to be expected from the affine Dynkin diagram for formula_9. The Hasse diagram to the right enumerates the 120 roots of positive height relative to any particular choice of simple roots consistent with this node numbering. Note that the Hasse diagram does not represent the full Lie algebra, or even the full root system. The 120 roots of negative height relative to the same set of simple roots can be adequately represented by a second copy of the Hasse diagram with the arrows reversed; but it is less straightforward to connect these two diagrams via a basis for the eight-dimensional Cartan subalgebra. In the notation of the exposition of Chevalley generators and Serre relations: Insofar as an arrow represents the Lie bracket by the generator formula_10 associated with a simple root, each root in the height -1 layer of the reversed Hasse diagram must correspond to some formula_11 and can have only one upward arrow, connected to a node in the height 0 layer representing the element of the Cartan subalgebra given by formula_12. But the upward arrows from the height 0 layer must then represent formula_13, where formula_14 is (the transpose of) the Cartan matrix. One could draw multiple upward arrows from each formula_15 associated with all formula_10 for which formula_16 is nonzero; but this neither captures the numerical entries in the Cartan matrix nor reflects the fact that each formula_10 only has nonzero Lie bracket with one degree of freedom in the Cartan subalgebra (just not the same degree of freedom as formula_17). More fundamentally, this organization implies that the "span" of the generators designated as "the" Cartan subalgebra is somehow inherently special, when in most applications, any mutually commuting set of eight of the 248 Lie algebra generators (of which there are many!) — or any eight linearly independent, mutually commuting Lie derivations on any manifold with E8 structure — would have served just as well. Once a Cartan subalgebra has been selected (or defined "a priori", as in the case of a lattice), a basis of "Cartan generators" (the formula_17 among the Chevalley generators) and a root system are a useful way to describe structure "relative to this subalgebra". But the root system map is not the Lie algebra (let alone group!) territory. Given a set of Chevalley generators, most degrees of freedom in a Lie algebra and their sparse Lie brackets with formula_18 can be represented schematically as circles and arrows, but this simply breaks down on the chosen Cartan subalgebra. Such are the hazards of schematic visual representations of mathematical structures. Weyl group. The Weyl group of E8 is of order 696729600, and can be described as O(2): it is of the form 2."G".2 (that is, a stem extension by the cyclic group of order 2 of an extension of the cyclic group of order 2 by a group "G") where "G" is the unique simple group of order 174182400 (which can be described as PSΩ8+(2)). E8 root lattice. 
The integral span of the E8 root system forms a lattice in R8 naturally called the E8 root lattice. This lattice is rather remarkable in that it is the only (nontrivial) even, unimodular lattice with rank less than 16. Simple subalgebras of E8. The Lie algebra E8 contains as subalgebras all the exceptional Lie algebras as well as many other important Lie algebras in mathematics and physics. The height of the Lie algebra on the diagram approximately corresponds to the rank of the algebra. A line from an algebra down to a lower algebra indicates that the lower algebra is a subalgebra of the higher algebra. Chevalley groups of type E8. showed that the points of the (split) algebraic group E8 (see above) over a finite field with "q" elements form a finite Chevalley group, generally written E8("q"), which is simple for any "q", and constitutes one of the infinite families addressed by the classification of finite simple groups. Its number of elements is given by the formula (sequence in the OEIS): formula_19 The first term in this sequence, the order of E8(2), namely ≈ , is already larger than the size of the Monster group. This group E8(2) is the last one described (but without its character table) in the ATLAS of Finite Groups. The Schur multiplier of E8("q") is trivial, and its outer automorphism group is that of field automorphisms (i.e., cyclic of order "f" if "q" = "pf", where "p" is prime). described the unipotent representations of finite groups of type "E"8. Subgroups, subalgebras, and extensions. The smaller exceptional groups E7 and E6 sit inside E8. In the compact group, both E6×SU(3)/(Z / 3Z) and E7×SU(2)/(+1,−1) are maximal subgroups of E8. The 248-dimensional adjoint representation of E8 may be considered in terms of its restricted representation to the first of these subgroups. It transforms under E6×SU(3) as a sum of tensor product representations, which may be labelled as a pair of dimensions as (78,1) + (1,8) + (27,3) + (27,3. (Since the maximal subgroup is actually the quotient of this group product by a finite group, these notations may strictly be taken as indicating the infinitesimal (Lie algebra) representations.) Since the adjoint representation can be described by the roots together with the generators in the Cartan subalgebra, we may choose a particular E6 root system within E8 and decompose the sum representation relative to this E6. In this description, The 248-dimensional adjoint representation of E8, when similarly restricted to the second maximal subgroup, transforms under E7×SU(2) as: (133,1) + (1,3) + (56,2). We may again see the decomposition by looking at the roots together with the generators in the Cartan subalgebra. In this description, The connection between these two descriptions is given by the graded exceptional Lie algebra constructions of J. Tits and B. N. Allison. Any 27-dimensional representation of E6 can be equipped with a non-associative (but strictly power-associative) Jordan product operation to form an Albert algebra (an important exceptional case in algebraic constructions). The Kantor–Koecher–Tits construction applied to this Albert algebra recovers the 78-dimensional formula_20 as the reduced structure algebra of the Albert algebra. This formula_20, together with the 27 and 27 representations and the grade operator (the element of the Cartan subalgebra with weight -1 on the 27, +1 on the 27, and 0 on the 78), forms an formula_22 3-graded Lie algebra. 
A complete exposition of this construction may be found in standard texts on Jordan algebras such as Jacobson 1968 or McCrimmon 2004. Starting this 3-graded Lie algebra construction with any particular 27-dimensional representation, embedded within formula_24, of any particular E6 subgroup of E8 produces the corresponding formula_22 subalgebra. The particular formula_22 in the E7×SU(2) decomposition given above corresponds to choosing the 27 consisting of all roots with (1,0,0), (&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,-&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,-&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2), or (0,-1,-1) in the last three dimensions (in order), with the grade operator having weight (-1,&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2) in these dimensions; or equivalently to choosing the "27" consisting of all roots with (-1,0,0), (−&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2), or (0,1,1) in the last three dimensions (in order), with the grade operator having weight (1,-&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2,-&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2) in these dimensions. Note that there is nothing special about this choice of dimensions — the formula_25 within which the root system is embedded is not the set of eight independent but non-orthogonal axes corresponding to the simple roots, and any three dimensions will do — and there are also constructions using other equivalent groupings of roots. What matters is that the kernel of the Lie bracket with the generator chosen as the "grade operator" be an formula_20 subalgebra (plus a central formula_26 associated with the grade operator itself and the remaining generator of the Cartan subalgebra), not formula_27, formula_28, formula_29, "etc." The distinguished formula_23 in the E7×SU(2) decomposition above is then given by the subalgebra of formula_21 that commutes with the grade operator (which lies in the Cartan subalgebra of this formula_21). Of the four remaining roots in the formula_21, two are of grade &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 and two are of grade -&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2. In the convention where the 27 of E6 used to construct the formula_22 has grade -1 and the 27 has grade +1, the other two 27's have grade +&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 and the other two 27's have grade -&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, as is apparent from permuting the values of the last three roots in the description above. Grouping these 4×(1+27)=112 generators to form the grade +&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 and -&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 subspaces of formula_24 (relative to the original choice of grade operator within formula_22), each subspace may be given a quite particular non-associative (nor even power-associative) product operation, resulting in two copies of Brown's 56-dimensional structurable algebra. Allison's 5-graded Lie algebra construction based on this structurable algebra recovers the original formula_24. (Allison's 5-grading differs from the above by a factor -2.) 
Grouping these generators differently, based on their weights relative to the Cartan generator of the formula_23 "orthogonal" to formula_22, gives two 56-dimensional subspaces that each carry the lowest-dimensional non-trivial irreducible representation of E7. Either of these may be combined with the Cartan generator to form a 57-dimensional Heisenberg algebra, and adjoining this to formula_22 produces the (non-simple) Lie algebra E7 1/2 described by Landsberg and Manivel. From the perspective in which the 27-dimensional grade -1 subspace of formula_24 (relative to a choice of grade operator) plays the role of "vector" representation of E6 and the 27 with roots opposite it plays the role of "covector" representation, it is natural to look for "spinor" representations in the grade +&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 and -&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 subspaces, or in some other combination of the (27,3) and (27,3) representations of E6×SU(3), and to attempt to relate these to geometrical spinors in the Clifford algebra sense as employed in quantum field theory. Variations on this idea are common in the physics literature. See Distler and Garibaldi 2009 for discussion of the mathematical obstacles to constructing a "chiral" gauge theory based on E8. The structure of formula_24 relative to its formula_20 subalgebra, together with the conventional scaling of elements of the Cartan subalgebra, invites extensions by geometric analogy but does not necessarily imply a relationship to low-dimensional geometry or low-energy physics. The same may be said of connections to Jordan and Heisenberg algebras, whose historical origins are intertwined with the development of quantum mechanics. Not every visual representation evocative of a tobacco pipe will hold tobacco. The finite quasisimple groups that can embed in (the compact form of) E8 were found by . The Dempwolff group is a subgroup of (the compact form of) E8. It is contained in the Thompson sporadic group, which acts on the underlying vector space of the Lie group E8 but does not preserve the Lie bracket. The Thompson group fixes a lattice and does preserve the Lie bracket of this lattice mod 3, giving an embedding of the Thompson group into E8(F3). The embeddings of the maximal subgroups of E8 up to dimension 248 are shown to the right. Applications. The E8 Lie group has applications in theoretical physics and especially in string theory and supergravity. E8×E8 is the gauge group of one of the two types of heterotic string and is one of two anomaly-free gauge groups that can be coupled to the "N" = 1 supergravity in ten dimensions. E8 is the U-duality group of supergravity on an eight-torus (in its split form). One way to incorporate the standard model of particle physics into heterotic string theory is the symmetry breaking of E8 to its maximal subalgebra SU(3)×E6. In 1982, Michael Freedman used the E8 lattice to construct an example of a topological 4-manifold, the E8 manifold, which has no smooth structure. Antony Garrett Lisi's incomplete "An Exceptionally Simple Theory of Everything" attempts to describe all known fundamental interactions in physics as part of the E8 Lie algebra. R. Coldea, D. A. Tennant, and E. M. Wheeler et al. (2010) reported an experiment where the electron spins of a cobalt-niobium crystal exhibited, under certain conditions, two of the eight peaks related to E8 that were predicted by . History. 
Wilhelm Killing (1888a, 1888b, 1889, 1890) discovered the complex Lie algebra E8 during his classification of simple compact Lie algebras, though he did not prove its existence, which was first shown by Élie Cartan. Cartan determined that a complex simple Lie algebra of type E8 admits three real forms. Each of them gives rise to a simple Lie group of dimension 248, exactly one of which (as for any complex simple Lie algebra) is compact. introduced algebraic groups and Lie algebras of type E8 over other fields: for example, in the case of finite fields they lead to an infinite family of finite simple groups of Lie type. E8 continues to be an area of active basic research by Atlas of Lie Groups and Representations, which aims to determine the unitary representations of all the Lie groups. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\left[J_{ij}, J_{k\\ell}\\right] = \\delta_{jk}J_{i\\ell} - \\delta_{j\\ell}J_{ik} - \\delta_{ik}J_{j\\ell} + \\delta_{i\\ell}J_{jk}" }, { "math_id": 1, "text": "\\left[J_{ij}, Q_a\\right] = \\frac 14 \\left(\\gamma_i\\gamma_j-\\gamma_j\\gamma_i\\right)_{ab} Q_b," }, { "math_id": 2, "text": "\\left[Q_a, Q_b\\right] = \\gamma^{[i}_{ac}\\gamma^{j]}_{cb} J_{ij}." }, { "math_id": 3, "text": "\\left(\\pm 1,\\pm 1,0,0,0,0,0,0\\right)\\," }, { "math_id": 4, "text": "\\left(\\pm\\tfrac12,\\pm\\tfrac12,\\pm\\tfrac12,\\pm\\tfrac12,\\pm\\tfrac12,\\pm\\tfrac12,\\pm\\tfrac12,\\pm\\tfrac12\\right) \\," }, { "math_id": 5, "text": "A_{ij} = 2\\frac{\\left(\\alpha_i, \\alpha_j\\right)}{\\left(\\alpha_i, \\alpha_i\\right)}" }, { "math_id": 6, "text": "\\left [\n\\begin{array}{rr}\n 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n-1 & 2 & -1& 0 & 0 & 0 & 0 & 0 \\\\\n 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & -1 & 2 & -1 & 0 & -1 \\\\\n 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & -1 & 2 & 0 \\\\\n 0 & 0 & 0 & 0 & -1 & 0 & 0 & 2\n\\end{array}\\right ]." }, { "math_id": 7, "text": "\\left [\\begin{array}{rr}\n1&-1&0&0&0&0&0&0 \\\\\n0&1&-1&0&0&0&0&0 \\\\\n0&0&1&-1&0&0&0&0 \\\\\n0&0&0&1&-1&0&0&0 \\\\\n0&0&0&0&1&-1&0&0 \\\\\n0&0&0&0&0&1&1&0 \\\\\n-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}&-\\frac{1}{2}\\\\\n0&0&0&0&0&1&-1&0 \\\\\n\\end{array}\\right ]." }, { "math_id": 8, "text": "\\left [\\begin{array}{rr}\n-1&0&0&0&0&0&0&1 \\\\\n\\end{array}\\right ]." }, { "math_id": 9, "text": "{\\tilde{\\mathrm{E}}}_{8}" }, { "math_id": 10, "text": "e_i" }, { "math_id": 11, "text": "f_i" }, { "math_id": 12, "text": "h_i = [e_i, f_i]" }, { "math_id": 13, "text": "[e_i, h_j] = -a_{ji} e_i" }, { "math_id": 14, "text": "a_{ji}" }, { "math_id": 15, "text": "h_j" }, { "math_id": 16, "text": "[e_i, h_j]" }, { "math_id": 17, "text": "h_i" }, { "math_id": 18, "text": "{e_i}" }, { "math_id": 19, "text": "q^{120}\\left(q^{30} - 1\\right)\\left(q^{24} - 1\\right)\\left(q^{20} - 1\\right)\\left(q^{18} - 1\\right)\\left(q^{14} - 1\\right)\\left(q^{12} - 1\\right)\\left(q^8 - 1\\right)\\left(q^2 - 1\\right)" }, { "math_id": 20, "text": "\\mathfrak{e}_6" }, { "math_id": 21, "text": "\\mathfrak{su}(3)" }, { "math_id": 22, "text": "\\mathfrak{e}_7" }, { "math_id": 23, "text": "\\mathfrak{su}(2)" }, { "math_id": 24, "text": "\\mathfrak{e}_8" }, { "math_id": 25, "text": "\\mathbb R^{8}" }, { "math_id": 26, "text": "\\mathbb{R}^{2}" }, { "math_id": 27, "text": "\\mathfrak{a}_6 \\oplus \\mathbb{R}^{2}" }, { "math_id": 28, "text": "\\mathfrak{a}_7 \\oplus \\mathbb{R}" }, { "math_id": 29, "text": "\\mathfrak{e}_7 \\oplus \\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=647994
64799753
Mackey–Glass equations
Nonlinear time delay differential equation In mathematics and mathematical biology, the Mackey–Glass equations, named after Michael Mackey and Leon Glass, refer to a family of delay differential equations whose behaviour manages to mimic both healthy and pathological behaviour in certain biological contexts, controlled by the equation's parameters. Originally, they were used to model the variation in the relative quantity of mature cells in the blood. The equations are defined as: and where formula_0 represents the density of cells over time, and formula_1 are parameters of the equations. Equation (2), in particular, is notable in dynamical systems since it can result in chaotic attractors with various dimensions. Introduction. There exist an enormous number of physiological systems that involve or rely on the periodic behaviour of certain subcomponents of the system. For example, many homeostatic processes rely on negative feedback to control the concentration of substances in the blood; breathing, for instance, is promoted by the detection, by the brain, of high CO2 concentration in the blood. One way to model such systems mathematically is with the following simple ordinary differential equation: formula_4 where formula_5 is the rate at which a "substance" is produced, and formula_6 controls how the current level of the substance "discourages" the continuation of its production. The solutions of this equation can be found via an integrating factor, and have the form: formula_7 where formula_8 is any initial condition for the initial value problem. However, the above model assumes that variations in the substance concentration is detected immediately, which often not the case in physiological systems. In order to ease this problem, proposed changing the production rate to a function formula_9 of the concentration at an earlier point formula_10 in time, in hope that this would better reflect the fact that there is a significant delay before the bone marrow produces and releases mature cells in the blood, after detecting low cell concentration in the blood. By taking the production rate formula_5 as being: formula_11 we obtain Equations (1) and (2), respectively. The values used by were formula_12, formula_13 and formula_14, with initial condition formula_15. The value of formula_16 is not relevant for the purpose of analyzing the dynamics of Equation (2), since the change of variable formula_17 reduces the equation to: formula_18 This is why, in this context, plots often place formula_19 in the formula_20-axis. Dynamical behaviour. It is of interest to study the behaviour of the equation solutions when formula_21 is varied, since it represents the time taken by the physiological system to react to the concentration variation of a substance. An increase in this delay can be caused by a pathology, which in turn can result in chaotic solutions for the Mackey–Glass equations, especially Equation (2). When formula_2, we obtain a very regular periodic solution, which can be seen as characterizing "healthy" behaviour; on the other hand, when formula_3 the solution gets much more erratic. The Mackey–Glass attractor can be visualized by plotting the pairs formula_22. This is somewhat justified because delay differential equations can (sometimes) be reduced to a system of ordinary differential equations, and also because they are approximately infinite dimensional maps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P(t)" }, { "math_id": 1, "text": "\\beta_0, \\theta, n, \\tau, \\gamma" }, { "math_id": 2, "text": "\\tau = 6" }, { "math_id": 3, "text": "\\tau = 20" }, { "math_id": 4, "text": "\n y'(t) = k - c y(t)\n" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "y(t) = \\frac{k}{c} + f(y_0) e^{- c t}" }, { "math_id": 8, "text": "y_0" }, { "math_id": 9, "text": "k(y(t - \\tau))" }, { "math_id": 10, "text": "t - \\tau" }, { "math_id": 11, "text": "\n\\frac{\\beta_0 \\theta^n}{\\theta^n + P(t - \\tau)^n} ~~ \\text{ or } ~~ \\frac{\\beta_0 \\theta^n P(t - \\tau)}{\\theta^n + P(t - \\tau)^n}\n" }, { "math_id": 12, "text": "\\gamma = 0.1" }, { "math_id": 13, "text": "\\beta_0 = 0.2" }, { "math_id": 14, "text": "n = 10" }, { "math_id": 15, "text": "P(0) = 0.1" }, { "math_id": 16, "text": "\\theta" }, { "math_id": 17, "text": "P(t) = \\theta \\cdot Q(t)" }, { "math_id": 18, "text": "\n Q'(t) = \\frac{\\beta_0 Q(t - \\tau)}{1 + Q(t - \\tau)^n} - \\gamma Q(t).\n" }, { "math_id": 19, "text": "Q(t) = P(t) / \\theta" }, { "math_id": 20, "text": "y" }, { "math_id": 21, "text": "\\tau" }, { "math_id": 22, "text": "(P(t), P(t - \\tau))" } ]
https://en.wikipedia.org/wiki?curid=64799753
648004
Hilbert–Pólya conjecture
Mathematical conjecture about the Riemann zeta function. In mathematics, the Hilbert–Pólya conjecture states that the non-trivial zeros of the Riemann zeta function correspond to eigenvalues of a self-adjoint operator. It is a possible approach to the Riemann hypothesis, by means of spectral theory. History. In a letter to Andrew Odlyzko, dated January 3, 1982, George Pólya said that while he was in Göttingen around 1912 to 1914 he was asked by Edmund Landau for a physical reason that the Riemann hypothesis should be true, and suggested that this would be the case if the imaginary parts "t" of the zeros formula_0 of the Riemann zeta function corresponded to eigenvalues of a self-adjoint operator. The earliest published statement of the conjecture seems to be in . David Hilbert did not work in the central areas of analytic number theory, but his name has become known for the Hilbert–Pólya conjecture due to a story told by Ernst Hellinger, a student of Hilbert, to André Weil. Hellinger said that Hilbert announced in his seminar in the early 1900s that he expected the Riemann Hypothesis would be a consequence of Fredholm's work on integral equations with a symmetric kernel. 1950s and the Selberg trace formula. At the time of Pólya's conversation with Landau, there was little basis for such speculation. However Selberg in the early 1950s proved a duality between the length spectrum of a Riemann surface and the eigenvalues of its Laplacian. This so-called Selberg trace formula bore a striking resemblance to the explicit formulae, which gave credibility to the Hilbert–Pólya conjecture. 1970s and random matrices. Hugh Montgomery investigated and found that the statistical distribution of the zeros on the critical line has a certain property, now called Montgomery's pair correlation conjecture. The zeros tend not to cluster too closely together, but to repel. Visiting at the Institute for Advanced Study in 1972, he showed this result to Freeman Dyson, one of the founders of the theory of random matrices. Dyson saw that the statistical distribution found by Montgomery appeared to be the same as the pair correlation distribution for the eigenvalues of a random Hermitian matrix. These distributions are of importance in physics — the eigenstates of a Hamiltonian, for example the energy levels of an atomic nucleus, satisfy such statistics. Subsequent work has strongly borne out the connection between the distribution of the zeros of the Riemann zeta function and the eigenvalues of a random Hermitian matrix drawn from the Gaussian unitary ensemble, and both are now believed to obey the same statistics. Thus the Hilbert–Pólya conjecture now has a more solid basis, though it has not yet led to a proof of the Riemann hypothesis. Later developments. In 1998, Alain Connes formulated a trace formula that is actually equivalent to the Riemann hypothesis. This strengthened the analogy with the Selberg trace formula to the point where it gives precise statements. He gives a geometric interpretation of the explicit formula of number theory as a trace formula on noncommutative geometry of Adele classes. Possible connection with quantum mechanics. A possible connection of Hilbert–Pólya operator with quantum mechanics was given by Pólya. The Hilbert–Pólya conjecture operator is of the form formula_1 where formula_2 is the Hamiltonian of a particle of mass formula_3 that is moving under the influence of a potential formula_4. 
The Riemann conjecture is equivalent to the assertion that the Hamiltonian is Hermitian, or equivalently that formula_5 is real. Using perturbation theory to first order, the energy of the "n"th eigenstate is related to the expectation value of the potential: formula_6 where formula_7 and formula_8 are the eigenvalues and eigenstates of the free particle Hamiltonian. This equation can be taken to be a Fredholm integral equation of first kind, with the energies formula_9. Such integral equations may be solved by means of the resolvent kernel, so that the potential may be written as formula_10 where formula_11 is the resolvent kernel, formula_12 is a real constant and formula_13 where formula_14 is the Dirac delta function, and the formula_15 are the "non-trivial" roots of the zeta function formula_16. Michael Berry and Jonathan Keating have speculated that the Hamiltonian "H" is actually some quantization of the classical Hamiltonian "xp", where "p" is the canonical momentum associated with "x" The simplest Hermitian operator corresponding to "xp" is formula_17 This refinement of the Hilbert–Pólya conjecture is known as the "Berry conjecture" (or the "Berry–Keating conjecture"). As of 2008, it is still quite far from being concrete, as it is not clear on which space this operator should act in order to get the correct dynamics, nor how to regularize it in order to get the expected logarithmic corrections. Berry and Keating have conjectured that since this operator is invariant under dilations perhaps the boundary condition "f"("nx") = "f"("x") for integer "n" may help to get the correct asymptotic results valid for large "n" formula_18 A paper was published in March 2017, written by Carl M. Bender, Dorje C. Brody, and Markus P. Müller, which builds on Berry's approach to the problem. There the operator formula_19 was introduced, which they claim satisfies a certain modified versions of the conditions of the Hilbert–Pólya conjecture. Jean Bellissard has criticized this paper, and the authors have responded with clarifications. Moreover, Frederick Moxley has approached the problem with a Schrödinger equation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\tfrac12 + it " }, { "math_id": 1, "text": "\\tfrac{1}{2}+iH" }, { "math_id": 2, "text": "H" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "V(x)" }, { "math_id": 5, "text": "V" }, { "math_id": 6, "text": " E_{n}=E_{n}^{0}+ \\left. \\left \\langle \\varphi^{0}_n \\right | V \\left | \\varphi^{0}_n \\right. \\right \\rangle " }, { "math_id": 7, "text": "E^{0}_n" }, { "math_id": 8, "text": "\\varphi^{0}_n" }, { "math_id": 9, "text": "E_n" }, { "math_id": 10, "text": " V(x)=A\\int_{-\\infty}^{\\infty} \\left (g(k)+\\overline{g(k)}-E_{k}^{0} \\right )\\,R(x,k)\\,dk " }, { "math_id": 11, "text": "R(x,k)" }, { "math_id": 12, "text": "A" }, { "math_id": 13, "text": " g(k)=i \\sum_{n=0}^{\\infty} \\left(\\frac{1}{2}-\\rho_n \\right)\\delta(k-n) " }, { "math_id": 14, "text": "\\delta(k-n)" }, { "math_id": 15, "text": "\\rho_n" }, { "math_id": 16, "text": "\\zeta (\\rho_n)=0 " }, { "math_id": 17, "text": "\\hat{H} = \\tfrac1{2} (\\hat{x}\\hat{p}+\\hat{p}\\hat{x}) = - i \\left( x \\frac{\\mathrm{d}}{\\mathrm{d} x} + \\frac1{2} \\right)." }, { "math_id": 18, "text": " \\frac{1}{2} + i \\frac{ 2\\pi n}{\\log n}. " }, { "math_id": 19, "text": "\\hat{H} = \\frac{1}{1-e^{-i\\hat{p}}} \\left (\\hat{x}\\hat{p}+\\hat{p}\\hat{x} \\right ) \\left (1-e^{-i\\hat{p}} \\right )" }, { "math_id": 20, "text": " \\exp(i\\gamma) " } ]
https://en.wikipedia.org/wiki?curid=648004
648008
Asymptotic freedom
Property of gauge theories in particle physics In quantum field theory, asymptotic freedom is a property of some gauge theories that causes interactions between particles to become asymptotically weaker as the energy scale increases and the corresponding length scale decreases. (Alternatively, and perhaps contrarily, in applying an S-matrix, asymptotically free refers to free particles states in the distant past or the distant future.) Asymptotic freedom is a feature of quantum chromodynamics (QCD), the quantum field theory of the strong interaction between quarks and gluons, the fundamental constituents of nuclear matter. Quarks interact weakly at high energies, allowing perturbative calculations. At low energies, the interaction becomes strong, leading to the confinement of quarks and gluons within composite hadrons. The asymptotic freedom of QCD was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. For this work all three shared the 2004 Nobel Prize in Physics. Discovery. Asymptotic freedom in QCD was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. The same phenomenon had previously been observed (in quantum electrodynamics with a charged vector field, by V.S. Vanyashin and M.V. Terent'ev in 1965; and Yang–Mills theory by Iosif Khriplovich in 1969 and Gerard 't Hooft in 1972), but its physical significance was not realized until the work of Gross, Wilczek and Politzer, which was recognized by the 2004 Nobel Prize in Physics. Experiments at the Stanford Linear Accelerator showed that inside protons, quarks behaved as if they were free. This was a great surprise, as many believed quarks to be tightly bound by the strong interaction, and so they should rapidly dissipate their motion by strong interaction radiation when they got violently accelerated, much like how electrons emit electromagnetic radiation when accelerated. The discovery was instrumental in "rehabilitating" quantum field theory. Prior to 1973, many theorists suspected that field theory was fundamentally inconsistent because the interactions become infinitely strong at short distances. This phenomenon is usually called a Landau pole, and it defines the smallest length scale that a theory can describe. This problem was discovered in field theories of interacting scalars and spinors, including quantum electrodynamics (QED), and Lehmann positivity led many to suspect that it is unavoidable. Asymptotically free theories become weak at short distances, there is no Landau pole, and these quantum field theories are believed to be completely consistent down to any length scale. Electroweak theory within the Standard Model is not asymptotically free. So a Landau pole exists in the Standard Model. With the Landau pole a problem arises when Higgs boson is being considered. Quantum triviality can be used to bound or predict parameters such as the Higgs boson mass. This leads to a predictable Higgs mass in asymptotic safety scenarios. In other scenarios, interactions are weak so that any inconsistency arises at distances shorter than the Planck length. Screening and antiscreening. The variation in a physical coupling constant under changes of scale can be understood qualitatively as coming from the action of the field on virtual particles carrying the relevant charge. 
The Landau pole behavior of QED (related to quantum triviality) is a consequence of "screening" by virtual charged particle–antiparticle pairs, such as electron–positron pairs, in the vacuum. In the vicinity of a charge, the vacuum becomes "polarized": virtual particles of opposing charge are attracted to the charge, and virtual particles of like charge are repelled. The net effect is to partially cancel out the field at any finite distance. Getting closer and closer to the central charge, one sees less and less of the effect of the vacuum, and the effective charge increases. In QCD the same thing happens with virtual quark-antiquark pairs; they tend to screen the color charge. However, QCD has an additional wrinkle: its force-carrying particles, the gluons, themselves carry color charge, and in a different manner. Each gluon carries both a color charge and an anti-color magnetic moment. The net effect of polarization of virtual gluons in the vacuum is not to screen the field but to "augment" it and change its color. This is sometimes called "antiscreening". Getting closer to a quark diminishes the antiscreening effect of the surrounding virtual gluons, so the contribution of this effect would be to weaken the effective charge with decreasing distance. Since the virtual quarks and the virtual gluons contribute opposite effects, which effect wins out depends on the number of different kinds, or flavors, of quark. For standard QCD with three colors, as long as there are no more than 16 flavors of quark (not counting the antiquarks separately), antiscreening prevails and the theory is asymptotically free. In fact, there are only 6 known quark flavors. Calculating asymptotic freedom. Asymptotic freedom can be derived by calculating the beta-function describing the variation of the theory's coupling constant under the renormalization group. For sufficiently short distances or large exchanges of momentum (which probe short-distance behavior, roughly because of the inverse relationship between a quantum's momentum and De Broglie wavelength), an asymptotically free theory is amenable to perturbation theory calculations using Feynman diagrams. Such situations are therefore more theoretically tractable than the long-distance, strong-coupling behavior also often present in such theories, which is thought to produce confinement. Calculating the beta-function is a matter of evaluating Feynman diagrams contributing to the interaction of a quark emitting or absorbing a gluon. Essentially, the beta-function describes how the coupling constants vary as one scales the system formula_0. The calculation can be done using rescaling in position space or momentum space (momentum shell integration). In non-abelian gauge theories such as QCD, the existence of asymptotic freedom depends on the gauge group and number of flavors of interacting particles. To lowest nontrivial order, the beta-function in an SU(N) gauge theory with formula_1 kinds of quark-like particle is formula_2 where formula_3 is the theory's equivalent of the fine-structure constant, formula_4 in the units favored by particle physicists. If this function is negative, the theory is asymptotically free. For SU(3), one has formula_5 and the requirement that formula_6 gives formula_7 Thus for SU(3), the color charge gauge group of QCD, the theory is asymptotically free if there are 16 or fewer flavors of quarks. 
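To make the flavour-counting argument above concrete, the following Python sketch (added for illustration and not part of the original article; the function name is arbitrary) evaluates the bracket in the lowest-order beta function for SU(3) and prints where its sign flips:

```python
# Illustrative check of one-loop asymptotic freedom: beta_1 is proportional to (-11N/6 + n_f/3).
# The theory is asymptotically free when this coefficient is negative.

def one_loop_coefficient(N: int, n_f: int) -> float:
    """Coefficient multiplying alpha^2/pi in the lowest-order beta function."""
    return -11.0 * N / 6.0 + n_f / 3.0

N = 3  # SU(3), the colour gauge group of QCD
for n_f in range(10, 20):
    c = one_loop_coefficient(N, n_f)
    status = "asymptotically free" if c < 0 else "not asymptotically free"
    print(f"n_f = {n_f:2d}: coefficient = {c:+.3f} -> {status}")
# The sign flips between n_f = 16 and n_f = 17, matching the bound n_f < 33/2.
```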
Besides QCD, asymptotic freedom can also be seen in other systems like the nonlinear formula_8-model in 2 dimensions, which has a structure similar to the SU(N) invariant Yang–Mills theory in 4 dimensions. Finally, one can find theories that are asymptotically free and reduce to the full Standard Model of electromagnetic, weak and strong forces at low enough energies. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "x \\rightarrow bx" }, { "math_id": 1, "text": "n_f" }, { "math_id": 2, "text": "\\beta_1(\\alpha) = { \\alpha^2 \\over \\pi} \\left( -{11N \\over 6} + {n_f \\over 3} \\right) " }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "g^2/(4 \\pi)" }, { "math_id": 5, "text": "N = 3," }, { "math_id": 6, "text": "\\beta_1 < 0 " }, { "math_id": 7, "text": "n_f < {33 \\over 2}." }, { "math_id": 8, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=648008
648042
ADE classification
In mathematics, the ADE classification (originally "A-D-E" classifications) is a situation where certain kinds of objects are in correspondence with simply laced Dynkin diagrams. The question of giving a common origin to these classifications, rather than a posteriori verification of a parallelism, was posed in . The complete list of simply laced Dynkin diagrams comprises formula_0 Here "simply laced" means that there are no multiple edges, which corresponds to all simple roots in the root system forming angles of formula_1 (no edge between the vertices) or formula_2 (single edge between the vertices). These are two of the four families of Dynkin diagrams (omitting formula_3 and formula_4), and three of the five exceptional Dynkin diagrams (omitting formula_5 and formula_6). This list is non-redundant if one takes formula_7 for formula_8 If one extends the families to include redundant terms, one obtains the exceptional isomorphisms formula_9 and corresponding isomorphisms of classified objects. The "A", "D", "E" nomenclature also yields the simply laced finite Coxeter groups, by the same diagrams: in this case the Dynkin diagrams exactly coincide with the Coxeter diagrams, as there are no multiple edges. Lie algebras. In terms of complex semisimple Lie algebras: In terms of compact Lie algebras and corresponding simply laced Lie groups: Binary polyhedral groups. The same classification applies to discrete subgroups of formula_19, the binary polyhedral groups; properly, binary polyhedral groups correspond to the simply laced "affine" Dynkin diagrams formula_20 and the representations of these groups can be understood in terms of these diagrams. This connection is known as the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;McKay correspondence after John McKay. The connection to Platonic solids is described in . The correspondence uses the construction of McKay graph. Note that the ADE correspondence is "not" the correspondence of Platonic solids to their reflection group of symmetries: for instance, in the ADE correspondence the tetrahedron, cube/octahedron, and dodecahedron/icosahedron correspond to formula_21 while the reflection groups of the tetrahedron, cube/octahedron, and dodecahedron/icosahedron are instead representations of the Coxeter groups formula_22 and formula_23 The orbifold of formula_24 constructed using each discrete subgroup leads to an ADE-type singularity at the origin, termed a du Val singularity. The McKay correspondence can be extended to multiply laced Dynkin diagrams, by using a "pair" of binary polyhedral groups. This is known as the Slodowy correspondence, named after Peter Slodowy – see . Labeled graphs. The ADE graphs and the extended (affine) ADE graphs can also be characterized in terms of labellings with certain properties, which can be stated in terms of the discrete Laplace operators or Cartan matrices. Proofs in terms of Cartan matrices may be found in . The affine ADE graphs are the only graphs that admit a positive labeling (labeling of the nodes by positive real numbers) with the following property: Twice any label is the sum of the labels on adjacent vertices. 
That is, they are the only positive functions with eigenvalue 1 for the discrete Laplacian (sum of adjacent vertices minus value of vertex) – the positive solutions to the homogeneous equation: formula_25 Equivalently, the positive functions in the kernel of formula_26 The resulting numbering is unique up to scale, and if normalized such that the smallest number is 1, consists of small integers – 1 through 6, depending on the graph. The ordinary ADE graphs are the only graphs that admit a positive labeling with the following property: Twice any label minus two is the sum of the labels on adjacent vertices. In terms of the Laplacian, the positive solutions to the inhomogeneous equation: formula_27 The resulting numbering is unique (scale is specified by the "2") and consists of integers; for E8 they range from 58 to 270, and have been observed as early as . Other classifications. The elementary catastrophes are also classified by the ADE classification. The ADE diagrams are exactly the quivers of finite type, via Gabriel's theorem. There is also a link with generalized quadrangles, as the three non-degenerate GQs with three points on each line correspond to the three exceptional root systems "E"6, "E"7 and "E"8. The classes "A" and "D" correspond to degenerate cases where the line set is empty or we have all lines passing through a fixed point, respectively. It was suggested that symmetries of small droplet clusters may be subject to an ADE classification. The minimal models of two-dimensional conformal field theory have an ADE classification. Four-dimensional formula_28 superconformal gauge quiver theories with unitary gauge groups have an ADE classification. Extension of the classification. Arnold has subsequently proposed many further extensions in this classification scheme, with the idea of revisiting and generalizing the Coxeter classification and the Dynkin classification under the single umbrella of root systems. He tried to introduce informal concepts of complexification and symplectization based on analogies with Picard–Lefschetz theory, which he interprets as the complexified version of Morse theory, and then to extend them to other areas of mathematics. He also tries to identify hierarchies and dictionaries between mathematical objects and theories, where, for example, diffeomorphisms correspond to the A type of the Dynkin classification, volume-preserving diffeomorphisms correspond to B type and symplectomorphisms correspond to C type. In the same spirit he revisits analogies between different mathematical objects, where, for example, the Lie bracket in the scope of diffeomorphisms becomes analogous to (and at the same time includes as a special case) the Poisson bracket of symplectomorphisms. Trinities. Arnold extended this further under the rubric of "mathematical trinities". McKay has extended his correspondence along parallel and sometimes overlapping lines. Arnold terms these "trinities" to evoke religion, and suggests that (currently) these parallels rely more on faith than on rigorous proof, though some parallels are elaborated. Further trinities have been suggested by other authors. Arnold's trinities begin with R/C/H (the real numbers, complex numbers, and quaternions), which he remarks "everyone knows", and proceeds to imagine the other trinities as "complexifications" and "quaternionifications" of classical (real) mathematics, by analogy with finding symplectic analogs of classic Riemannian geometry, which he had previously proposed in the 1970s. 
In addition to examples from differential topology (such as characteristic classes), Arnold considers the three Platonic symmetries (tetrahedral, octahedral, icosahedral) as corresponding to the reals, complexes, and quaternions, which then connects with McKay's more algebraic correspondences, below. McKay's correspondences are easier to describe. Firstly, the extended Dynkin diagrams formula_29 (corresponding to tetrahedral, octahedral, and icosahedral symmetry) have symmetry groups formula_30 respectively, and the associated foldings are the diagrams formula_31 (note that in less careful writing, the extended (tilde) qualifier is often omitted). More significantly, McKay suggests a correspondence between the nodes of the formula_32 diagram and certain conjugacy classes of the monster group, which is known as "McKay's E8 observation;" see also monstrous moonshine. McKay further relates the nodes of formula_33 to conjugacy classes in 2."B" (an order 2 extension of the baby monster group), and the nodes of formula_34 to conjugacy classes in 3."Fi"24' (an order 3 extension of the Fischer group) – note that these are the three largest sporadic groups, and that the order of the extension corresponds to the symmetries of the diagram. Turning from large simple groups to small ones, the corresponding Platonic groups formula_35 have connections with the projective special linear groups PSL(2,5), PSL(2,7), and PSL(2,11) (orders 60, 168, and 660), which is deemed a "McKay correspondence". These groups are the only (simple) values for "p" such that PSL(2,"p") acts non-trivially on "p" points, a fact dating back to Évariste Galois in the 1830s. In fact, the groups decompose as products of sets (not as products of groups) as: formula_36 formula_37 and formula_38 These groups also are related to various geometries, which dates to Felix Klein in the 1870s; see icosahedral symmetry: related geometries for historical discussion and for more recent exposition. Associated geometries (tilings on Riemann surfaces) in which the action on "p" points can be seen are as follows: PSL(2,5) is the symmetries of the icosahedron (genus 0) with the compound of five tetrahedra as a 5-element set, PSL(2,7) of the Klein quartic (genus 3) with an embedded (complementary) Fano plane as a 7-element set (order 2 biplane), and PSL(2,11) the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;buckminsterfullerene surface (genus 70) with embedded Paley biplane as an 11-element set (order 3 biplane). Of these, the icosahedron dates to antiquity, the Klein quartic to Klein in the 1870s, and the buckyball surface to Pablo Martin and David Singerman in 2008. Algebro-geometrically, McKay also associates E6, E7, E8 respectively with: the 27 lines on a cubic surface, the 28 bitangents of a plane quartic curve, and the 120 tritangent planes of a canonic sextic curve of genus 4. The first of these is well-known, while the second is connected as follows: projecting the cubic from any point not on a line yields a double cover of the plane, branched along a quartic curve, with the 27 lines mapping to 27 of the 28 bitangents, and the 28th line is the image of the exceptional curve of the blowup. Note that the fundamental representations of E6, E7, E8 have dimensions 27, 56 (28·2), and 248 (120+128), while the number of roots is 27+45 = 72, 56+70 = 126, and 112+128 = 240. This should also fit into the scheme of relating E8,7,6 with the largest three of the sporadic simple groups, Monster, Baby and Fischer 24', cf. 
monstrous moonshine. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
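As a concrete check of the "labeled graphs" characterization given earlier in this article (twice any label minus two equals the sum of the labels on adjacent vertices), the following numpy sketch, added here purely for illustration, sets up the ordinary E8 diagram as the tree with arms of 1, 2 and 4 nodes attached to a branch node and solves the corresponding linear system, recovering the values 58 through 270 quoted above:

```python
# Illustrative: solve (2I - A) x = 2*1 for the ordinary E8 diagram,
# i.e. "twice any label minus two is the sum of the labels on adjacent vertices".
import numpy as np

# E8 as the tree T(2,3,5): branch node 0 with arms of 1, 2 and 4 further nodes.
edges = [(0, 1),                          # arm with 1 node
         (0, 2), (2, 3),                  # arm with 2 nodes
         (0, 4), (4, 5), (5, 6), (6, 7)]  # arm with 4 nodes
n = 8
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

labels = np.linalg.solve(2 * np.eye(n) - A, 2 * np.ones(n))
print(np.sort(labels).round().astype(int))
# -> [ 58  92 114 136 168 182 220 270]; the smallest and largest labels are 58 and 270.
```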
[ { "math_id": 0, "text": "A_n, \\, D_n, \\, E_6, \\, E_7, \\, E_8." }, { "math_id": 1, "text": "\\pi/2 = 90^\\circ" }, { "math_id": 2, "text": "2\\pi/3 = 120^\\circ" }, { "math_id": 3, "text": "B_n" }, { "math_id": 4, "text": "C_n" }, { "math_id": 5, "text": "F_4" }, { "math_id": 6, "text": "G_2" }, { "math_id": 7, "text": "n \\geq 4" }, { "math_id": 8, "text": "D_n." }, { "math_id": 9, "text": "D_3 \\cong A_3, E_4 \\cong A_4, E_5 \\cong D_5," }, { "math_id": 10, "text": "A_n" }, { "math_id": 11, "text": "\\mathfrak{sl}_{n+1}(\\mathbf{C})," }, { "math_id": 12, "text": "D_n" }, { "math_id": 13, "text": "\\mathfrak{so}_{2n}(\\mathbf{C})," }, { "math_id": 14, "text": "E_6, E_7, E_8" }, { "math_id": 15, "text": "\\mathfrak{su}_{n+1}," }, { "math_id": 16, "text": "SU(n+1);" }, { "math_id": 17, "text": "\\mathfrak{so}_{2n}(\\mathbf{R})," }, { "math_id": 18, "text": "PSO(2n)" }, { "math_id": 19, "text": "SU(2)" }, { "math_id": 20, "text": "\\tilde A_n, \\tilde D_n, \\tilde E_k," }, { "math_id": 21, "text": "E_6, E_7, E_8," }, { "math_id": 22, "text": "A_3, BC_3," }, { "math_id": 23, "text": "H_3." }, { "math_id": 24, "text": "\\mathbf{C}^2" }, { "math_id": 25, "text": "\\Delta \\phi = \\phi.\\ " }, { "math_id": 26, "text": "\\Delta - I." }, { "math_id": 27, "text": "\\Delta \\phi = \\phi - 2.\\ " }, { "math_id": 28, "text": "\\mathcal{N}=2" }, { "math_id": 29, "text": "\\tilde E_6, \\tilde E_7, \\tilde E_8" }, { "math_id": 30, "text": "S_3, S_2, S_1," }, { "math_id": 31, "text": "\\tilde G_2, \\tilde F_4, \\tilde E_8" }, { "math_id": 32, "text": "\\tilde E_8" }, { "math_id": 33, "text": "\\tilde E_7" }, { "math_id": 34, "text": "\\tilde E_6" }, { "math_id": 35, "text": "A_4, S_4, A_5" }, { "math_id": 36, "text": "A_4 \\times Z_5," }, { "math_id": 37, "text": "S_4 \\times Z_7," }, { "math_id": 38, "text": "A_5 \\times Z_{11}." } ]
https://en.wikipedia.org/wiki?curid=648042
648062
G2 manifold
Seven-dimensional Riemannian manifold In differential geometry, a "G"2 manifold or Joyce manifold is a seven-dimensional Riemannian manifold with holonomy group contained in "G"2. The group formula_0 is one of the five exceptional simple Lie groups. It can be described as the automorphism group of the octonions, or equivalently, as the proper subgroup of the special orthogonal group SO(7) that preserves a spinor in the eight-dimensional spinor representation, or, lastly, as the subgroup of the general linear group GL(7) which preserves the non-degenerate 3-form formula_1, the associative form. The Hodge dual, formula_2, is then a parallel 4-form, the coassociative form. These forms are calibrations in the sense of Reese Harvey and H. Blaine Lawson, and thus define special classes of 3- and 4-dimensional submanifolds. Properties. All formula_0-manifolds are 7-dimensional, Ricci-flat, orientable spin manifolds. In addition, any compact manifold with holonomy equal to formula_0 has finite fundamental group, non-zero first Pontryagin class, and non-zero third and fourth Betti numbers. History. The fact that formula_0 might possibly be the holonomy group of certain Riemannian 7-manifolds was first suggested by the 1955 classification theorem of Marcel Berger, and this remained consistent with the simplified proof later given by Jim Simons in 1962. Although not a single example of such a manifold had yet been discovered, Edmond Bonan nonetheless made a useful contribution by showing that, if such a manifold did in fact exist, it would carry both a parallel 3-form and a parallel 4-form, and that it would necessarily be Ricci-flat. The first local examples of 7-manifolds with holonomy formula_0 were finally constructed around 1984 by Robert Bryant, and his full proof of their existence appeared in the Annals in 1987. Next, complete (but still noncompact) 7-manifolds with holonomy formula_0 were constructed by Bryant and Simon Salamon in 1989. The first compact 7-manifolds with holonomy formula_0 were constructed by Dominic Joyce in 1994. Compact formula_0 manifolds are therefore sometimes known as "Joyce manifolds", especially in the physics literature. In 2013, it was shown by M. Firat Arikan, Hyunjoo Cho, and Sema Salur that any manifold with a spin structure, and, hence, a formula_0-structure, admits a compatible almost contact metric structure, and an explicit compatible almost contact structure was constructed for manifolds with formula_0-structure. In the same paper, it was shown that certain classes of formula_0-manifolds admit a contact structure. In 2015, a new construction of compact formula_0 manifolds, due to Alessio Corti, Mark Haskins, Johannes Nordström, and Tommaso Pacini, combined a gluing idea suggested by Simon Donaldson with new algebro-geometric and analytic techniques for constructing Calabi–Yau manifolds with cylindrical ends, resulting in tens of thousands of diffeomorphism types of new examples. Connections to physics. These manifolds are important in string theory. They break the original supersymmetry to 1/8 of the original amount. For example, M-theory compactified on a formula_0 manifold leads to a realistic four-dimensional (11-7=4) theory with N=1 supersymmetry. The resulting low energy effective supergravity contains a single supergravity supermultiplet, a number of chiral supermultiplets equal to the third Betti number of the formula_0 manifold and a number of U(1) vector supermultiplets equal to the second Betti number. 
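For concreteness, the associative 3-form can be written down in flat coordinates on R^7. The expression below is not taken from the article above; it is one convention found in the literature, and sign and index-ordering conventions vary between authors.

```latex
% One common coordinate expression for the associative 3-form on R^7
% (conventions differ between authors); its stabilizer in GL(7,R) is G_2.
\varphi_0 \;=\; dx_{123} + dx_{145} + dx_{167} + dx_{246} - dx_{257} - dx_{347} - dx_{356},
\qquad dx_{ijk} := dx_i \wedge dx_j \wedge dx_k ,
```

with the coassociative 4-form given by its Hodge dual, as stated above.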
Recently it was shown that almost contact structures (constructed by Sema Salur et al.) play an important role in formula_0 geometry". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G_2" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\\psi=*\\phi" } ]
https://en.wikipedia.org/wiki?curid=648062
6481151
OFFSystem
The Owner-Free File System ("OFF System", or "OFFS" for short) is a peer-to-peer distributed file system in which all shared files are represented by randomized multi-used data blocks. Instead of anonymizing the network, the data blocks are anonymized and therefore, only data garbage is ever exchanged and stored and no forwarding via intermediate nodes is required. "OFFS" claims to have been created with the expressed intention "to cut off some gangrene-infested bits of the copyright industry." History. "OFFS" development started within the hacktivism group The Big Hack in 2003 by the hackers Cheater512, CaptainMorgan, Aqlo and WhiteRaven. In 2004, a rudimentary version was finished, written in PHP, which was distributed as two demo CDs. Following these, SpectralMorning re-implemented the functionality in 2004 in C++, which led to the current "mainline" "OFFS" client. On August 14, 2006, CaptainMorgan posted a letter of "closing" addressed to the "Copyright Industry Associations of America", such as the RIAA and MPAA, stating that they have created "OFFS" with the purpose of ending "all of your problems with consumer copyright infringement." In 2008, the network consisted of around 50 nodes. On April 11, 2008, a beta test was held with a network size of over 100 nodes. Since SpectralMorning stopped work on "OFFS" in late 2008, only minor bug fix releases were made to mainline "OFF". Starting from 2007, an alternative, but compatible client was developed, called BlocksNet. Written in Ruby and well-maintained, it saw major improvements over recent time. It has been under development until 2011. The client OFFLoad is a fork from mainline "OFFS", which seemingly adds no features. Reasons for the fork are unclear. Another distantly related program is Monolith, which uses a similar principle to "OFFS". It was created after "OFFS" and features no multi-use of blocks and no networking. Functional Principle. The "OFF System" is a kind of anonymous, fully decentralized P2P file sharing program and network. In contrast to other anonymous file sharing networks, which derive their anonymity from forwarding their data blocks via intermediate network nodes, "OFFS" derives its anonymity from anonymizing the data files. Thus, the system refers to itself as a "brightnet" to contrast its method of operation with that of private file sharing systems known as darknets and with traditional, forwarding anonymous P2P programs. Store Procedure. In order to store a file into the local OFFS storage, resp. "block cache", choose the tuple size formula_0 (default 3), split the source file formula_1 into blocks formula_2 of size 128 KiB (pad with random data to fit) and for each, do the following: Finally, store the "descriptor list" in its own block (or blocks, if the list is larger than 128 KiB) and insert these blocks formula_8 into the block cache and generate an "OFFS URL" for referencing the source file and output it to the user or into the local "OFFS URL" database. Retrieve Procedure. To retrieve, obtain the descriptor block or blocks and for each contained set of size formula_0, do the following: Anonymity. "OFFS" derives its anonymity from the following: Efficiency. Because "OFFS" anonymizes the data blocks being exchanged instead of the network, no forwarding via intermediate nodes is required. Therefore, this method has a higher degree of efficiency than traditional, forwarding-based anonymous P2P systems. 
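The store and retrieve procedures described above amount to XOR arithmetic on fixed-size blocks. The sketch below is a simplified illustration, not taken from any OFF client: descriptor blocks, URLs, cryptographic hashing and block reuse from the shared cache are all omitted, and the names are invented. It shows the round trip for tuple size t = 3 and also makes the roughly 200% storage overhead concrete, since each 128 KiB source block is represented by three stored blocks:

```python
# Illustrative sketch of the OFF store/retrieve idea (t = 3, 128 KiB blocks).
# Simplified: no descriptor blocks, URLs or networking; random blocks are generated
# locally instead of being reused from the shared cache.
import os

BLOCK = 128 * 1024
T = 3  # tuple size

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def store(data: bytes):
    """Split into padded blocks; each source block becomes t anonymized blocks."""
    cache, descriptors = {}, []
    for i in range(0, len(data), BLOCK):
        s_i = data[i:i + BLOCK].ljust(BLOCK, b"\0")   # pad (random padding in real OFF)
        randoms = [os.urandom(BLOCK) for _ in range(T - 1)]
        o_i = s_i
        for r in randoms:
            o_i = xor(o_i, r)
        ids = []
        for blk in [o_i] + randoms:
            blk_id = hash(blk)          # real OFF uses a cryptographic hash
            cache[blk_id] = blk
            ids.append(blk_id)
        descriptors.append(ids)         # the "descriptor list" for this source block
    return cache, descriptors

def retrieve(cache, descriptors, length: int) -> bytes:
    out = b""
    for ids in descriptors:
        s_i = cache[ids[0]]
        for blk_id in ids[1:]:
            s_i = xor(s_i, cache[blk_id])
        out += s_i
    return out[:length]

payload = os.urandom(300 * 1024)
cache, desc = store(payload)
assert retrieve(cache, desc, len(payload)) == payload
```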
The forwarding method requires that a data block is uploaded and downloaded several times before it reaches its destination, typically between 5 and 15 times, which, according to the resulting formula formula_12, is equivalent to an overhead of 900 to 2900%, while the overhead of "OFFS" without optimizations is about 200%. (formula_1 is the source file size, formula_13 the inbound tunnel length and formula_14 the outbound tunnel length, plus 1 for the hop between the "Outbound Endpoint" and the "Inbound Gateway".) Efficiency can be further increased: External links. OFF implementations:
[ { "math_id": 0, "text": "t" }, { "math_id": 1, "text": "s" }, { "math_id": 2, "text": "s_i" }, { "math_id": 3, "text": "t-1" }, { "math_id": 4, "text": "o_i = s_i \\oplus r_1 \\oplus r_2 \\oplus ... \\oplus r_{t-1}" }, { "math_id": 5, "text": "o_i" }, { "math_id": 6, "text": "\\oplus" }, { "math_id": 7, "text": "\\{o_i, r_1, r_2 ... r_{t-1}\\}" }, { "math_id": 8, "text": "d_i" }, { "math_id": 9, "text": "b_1, b_2 ... b_t" }, { "math_id": 10, "text": "o_i, r_1, r_2 ... r_{t-1}" }, { "math_id": 11, "text": "s_i = b_1 \\oplus b_2 \\oplus ... \\oplus b_t" }, { "math_id": 12, "text": "s * (hi + ho + 1) * 2 - s" }, { "math_id": 13, "text": "hi" }, { "math_id": 14, "text": "ho" }, { "math_id": 15, "text": "s * (t-1) * \\frac {e}{100}" }, { "math_id": 16, "text": "e" } ]
https://en.wikipedia.org/wiki?curid=6481151
648166
Transfer (group theory)
In the mathematical field of group theory, the transfer defines, given a group "G" and a subgroup "H" of finite index, a group homomorphism from "G" to the abelianization of "H". It can be used in conjunction with the Sylow theorems to obtain certain numerical results on the existence of finite simple groups. The transfer was defined by Issai Schur (1902) and rediscovered by Emil Artin (1929). Construction. The construction of the map proceeds as follows: Let ["G":"H"] = "n" and select coset representatives, say formula_0 for "H" in "G", so "G" can be written as a disjoint union formula_1 Given "y" in "G", each "yxi" is in some coset "xjH" and so formula_2 for some index "j" and some element "h""i" of "H". The value of the transfer for "y" is defined to be the image of the product formula_3 in "H"/"H"′, where "H"′ is the commutator subgroup of "H". The order of the factors is irrelevant since "H"/"H"′ is abelian. It is straightforward to show that, though the individual "hi" depends on the choice of coset representatives, the value of the transfer does not. It is also straightforward to show that the mapping defined this way is a homomorphism. Example. If "G" is cyclic then the transfer takes any element "y" of "G" to "y"["G":"H"]. A simple case is that seen in the Gauss lemma on quadratic residues, which in effect computes the transfer for the multiplicative group of non-zero residue classes modulo a prime number "p", with respect to the subgroup {1, −1}. One advantage of looking at it that way is the ease with which the correct generalisation can be found, for example for cubic residues in the case that "p" − 1 is divisible by three. Homological interpretation. This homomorphism may be set in the context of group homology. In general, given any subgroup "H" of "G" and any "G"-module "A", there is a corestriction map of homology groups formula_4 induced by the inclusion map formula_5, but if we have that "H" is of finite index in "G", there are also restriction maps formula_6. In the case of "n =" 1 and formula_7 with the trivial "G"-module structure, we have the map formula_8. Noting that formula_9 may be identified with formula_10 where formula_11 is the commutator subgroup, this gives the transfer map via formula_12, with formula_13 denoting the natural projection. The transfer is also seen in algebraic topology, when it is defined between classifying spaces of groups. Terminology. The name "transfer" translates the German "Verlagerung", which was coined by Helmut Hasse. Commutator subgroup. If "G" is finitely generated, the commutator subgroup "G"′ of "G" has finite index in "G" and "H=G"′, then the corresponding transfer map is trivial. In other words, the map sends "G" to 0 in the abelianization of "G"′. This is important in proving the principal ideal theorem in class field theory. See the Emil Artin-John Tate "Class Field Theory" notes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
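As a small computational illustration of the construction above (not from the original article; the helper functions are ad hoc), the following Python sketch computes the transfer from "G" = S3 to "H" = A3 using left coset representatives. Since A3 is abelian, "H"′ is trivial. Consistent with the remark about the commutator subgroup (here A3 = "G"′), the computed transfer turns out to be the trivial homomorphism, and the script also verifies the homomorphism property directly:

```python
# Illustrative computation of the transfer V : S3 -> A3 (A3 is abelian, so H' = 1).
from itertools import permutations

def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def parity(p):                # number of inversions mod 2
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j]) % 2

G = list(permutations(range(3)))          # S3
H = [p for p in G if parity(p) == 0]      # A3
reps = [(0, 1, 2), (1, 0, 2)]             # left coset representatives of H in G

def transfer(y):
    """Product of the h_i defined by y x_i = x_j h_i, taken in the abelian group H."""
    result = (0, 1, 2)
    for x in reps:
        z = compose(y, x)
        x_j = next(r for r in reps if compose(inverse(r), z) in H)
        h_i = compose(inverse(x_j), z)
        result = compose(result, h_i)
    return result

# The transfer is a homomorphism into the abelian group A3.
for a in G:
    for b in G:
        assert transfer(compose(a, b)) == compose(transfer(a), transfer(b))
print({y: transfer(y) for y in G})   # every element maps to the identity (0, 1, 2)
```

Here the target A3 is exactly the commutator subgroup of S3, so, as stated above, every element is sent to the identity.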
[ { "math_id": 0, "text": "x_1, \\dots, x_n,\\," }, { "math_id": 1, "text": "G = \\bigcup\\ x_i H." }, { "math_id": 2, "text": "yx_i = x_jh_i" }, { "math_id": 3, "text": "\\textstyle \\prod_{i=1}^n h_i " }, { "math_id": 4, "text": "\\mathrm{Cor} : H_n(H,A) \\to H_n(G,A)" }, { "math_id": 5, "text": "i: H \\to G" }, { "math_id": 6, "text": "\\mathrm{Res} : H_n(G,A) \\to H_n(H,A)" }, { "math_id": 7, "text": "A=\\mathbb{Z}" }, { "math_id": 8, "text": "\\mathrm{Res} : H_1(G,\\mathbb{Z}) \\to H_1(H,\\mathbb{Z})" }, { "math_id": 9, "text": "H_1(G,\\mathbb{Z})" }, { "math_id": 10, "text": "G/G'" }, { "math_id": 11, "text": "G'" }, { "math_id": 12, "text": "G \\xrightarrow{\\pi} G/G' \\xrightarrow{\\mathrm{Res}} H/H'" }, { "math_id": 13, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=648166
6482379
Frisch elasticity of labor supply
The Frisch elasticity of labor supply captures the elasticity of hours worked to the wage rate, given a constant marginal utility of wealth. Marginal utility is constant for risk-neutral individuals according to microeconomics. In other words, the Frisch elasticity measures the substitution effect of a change in the wage rate on labor supply. This concept was proposed by the economist Ragnar Frisch, after whom the elasticity is named. The value of the Frisch elasticity is interpreted as the willingness to work when the wage changes. The higher the Frisch elasticity, the more willing people are to work if the wage increases. The Frisch elasticity can also be referred to as "λ-constant elasticity", where λ denotes the marginal utility of wealth; in some macro literature it is referred to as the "macro elasticity", as macroeconomic models are set in terms of the Frisch elasticity, while the term "micro elasticity" is used to refer to the intensive-margin elasticity of hours conditional on employment. The Frisch elasticity of labor supply is important for economic analysis and for understanding business cycle fluctuations. It also controls intertemporal substitution responses to wage fluctuations. Moreover, it determines the reaction to fiscal policy interventions, taxation or money transfers. Let's denote the Frisch elasticity as FE. Then "formula_0". This is the formula for the overall Frisch elasticity, where h and w denote hours of work and the wage, respectively. The overall effect of the Frisch elasticity, however, can be distinguished into extensive and intensive margins. The extensive margin refers to the decision whether to work at all. The intensive margin refers to an employee's decision about the number of hours to work. Under certain circumstances, a constant marginal utility of wealth implies a constant marginal utility of consumption. The Frisch elasticity also corresponds to the elasticity of substitution of labor supply. The hidden unemployed. Calculations done by the BLS have shown that measured unemployment often differs depending on the definition of what it means to be unemployed. Being on temporary layoff is one condition under which someone is considered unemployed. The second way for a person to be considered unemployed is to state that he or she has been actively looking for a job in the past four weeks. "Out of the labor force" describes someone who does not fulfill either of these criteria and is therefore not considered unemployed. Some people present themselves as actively looking for a job even though they have no real willingness to work at all; by doing so, they hope to obtain the benefits associated with being unemployed. This leads to unemployment statistics that differ from one another. During the harsh recession of 2009, for example, it was often claimed that the statistical unemployment rate (the BLS statistic) underestimated the severity of the recession and the economic obstacles workers faced. For many, the difficulties unemployed people faced when trying to find a job were simply too great to overcome. This led many people to give up looking for a job and thus to leave the labour market, losing the status of unemployed. 
Some may insist that these people considered hidden unemployed should be included in the overall statistics of unemployed people in order to show that the problem of unemployment is much worse than the BLS data indicated. Another way of measuring the aggregate economic activity is the employment rate. This function shows us the current part of the population with a job. However, it combines people who claim to be unemployed with those who are identified as being out of the labor force. Even though the second group has some hidden unemployed insight, it also consists of people with rather little tendency to work, such as retirees, women with small children and students enrolled in school. Reduction in the employment could be caused by higher unemployment or by unassociated extension in fertility or school enrollment rates. We can assume that for the purpose of measuring the fluctuations in economic activity, it is in reality better to use employment rate rather than the unemployment rate. Budget constraint. Meaning that, the money value of costs on goods (C) must equal the total of wage (wh) and nonlabor income (V). The rate of wage is essential when it comes to choosing labor supply. Now lets think that the wage rate is constant for a person, who is unable to change his hourly wage according to his time spent at work. Additionally we will define the “marginal” wage as money earned for the last hour worked. This, of course, depends on the number of hours which are spent working. Someone who works more than 40 hours per week usually gets more money as an overtime premium. Also the wage of part-time jobs tends to be inferior to the wage of full-time jobs. Now, let's also not include the possibility that someone's marginal wage is related to the number of hours spent working. With the condition of a constant wage rate, we can put the budget constraint into a graph. Work or leisure are the only options someone has when it comes to choosing the way of spending his/her time. Time given to either work or leisure will then be similar to the time in the overall period. We will denote it as T hours in a week, so that T = h + L The equation of a budget constraint can also be written as C = w(T -L) + V or C = (wT + V) – wL The last equation is formed by a line, and the inclination is the negative of the wage rate (-w). Even if the person spends the whole time (T) working or at leisure, it is still available for him to buy consumption goods at the price of V. Giving up one hour of leisure would result in moving up the budget line and thus being able to buy additional w dollars of goods. Of course, this effect is relevant every time the person is willing to exchange an hour of leisure for an hour of work, resulting in the ability to buy additional w dollars of goods. Meaning that, every hour of leisure has some cost and the cost is dependent on the wage rate. By giving up all the free time activities, the person gets to the interface of the budget line and can buy (wT + V) worth of goods. Additionally, the worker has access to all the combinations on the budget line and thus creating worker's opportunity set ( set of all the baskets of consumption that the worker is able to purchase.) Relationship with income. The Frisch elasticity of labor supply is often higher for low-income workers than for high-income workers. This is because low-income workers are more likely to have to work to make ends meet, and therefore may be more responsive to changes in wages. Role in gender wage gap. 
The Frisch elasticity of labor supply can also help to explain the gender wage gap. Women often have a lower Frisch elasticity of labor supply than men, which means that they may be less responsive to changes in wages. This can result in lower wages for women, as employers may be less willing to offer higher wages if they believe that women are less likely to leave their jobs for higher-paying opportunities. Measurement challenges. There are several challenges to measuring the Frisch elasticity of labor supply accurately. For example, it is difficult to control for other factors that may influence labor supply, such as changes in the cost of living or changes in social norms around work. Additionally, there may be differences in the way that men and women respond to changes in wages, which can make it challenging to compare elasticities across gender. The difference between the Frisch elasticity of labor supply and the general concept of elasticity of labor supply. Elasticity of labor supply refers to the responsiveness of labor supply to changes in the wage rate. It is typically measured as the percentage change in the quantity of labor supplied divided by the percentage change in the wage rate. The elasticity of labor supply can be influenced by various factors, including the availability of alternative sources of income, the extent of non-labor income, the extent to which individuals can adjust their hours of work, and other factors. The Frisch elasticity of labor supply is a specific type of elasticity of labor supply that considers the intertemporal substitution of work effort. It measures the responsiveness of labor supply to changes in the real wage, which is the wage adjusted for changes in the cost of living. In contrast to the general concept of elasticity of labor supply, the Frisch elasticity also takes into account the effects of changes in income on the amount of work that people are willing to supply. Application. The Frisch elasticity of labor supply is not only important for economic analysis but also has implications for policy making. Governments can use the Frisch elasticity to determine the effectiveness of policies aimed at increasing employment and reducing unemployment. For example, a policy that increases wages in a certain sector can increase labor supply, but the extent of the increase will depend on the Frisch elasticity. Similarly, policies aimed at reducing taxes or increasing welfare benefits can also have an impact on the Frisch elasticity of labor supply. Moreover, the Frisch elasticity can help policymakers understand the impact of technological change on the labor market. Technological change can increase the productivity of labor, which can lead to an increase in wages. However, it can also lead to a reduction in the demand for labor in certain sectors, which can lead to unemployment. The Frisch elasticity can help policymakers understand the extent to which workers will respond to changes in wages and employment opportunities. Variations in Frisch elasticity among workers. It is worth noting that the Frisch elasticity is not constant across all individuals. Different groups of workers may have different Frisch elasticities due to differences in preferences, job opportunities, and other factors. 
For example, workers with higher levels of education and training may have higher Frisch elasticities than workers with lower levels of education and training because they may have more flexibility in their job options and may be able to switch between different types of jobs more easily. Similarly, workers in certain industries or occupations may have higher Frisch elasticities than workers in other industries or occupations. For instance, workers in industries that experience rapid technological change may have higher Frisch elasticities because they are more likely to be affected by fluctuations in wages due to changes in technology. Moreover, other factors such as income level, gender, and age can also affect the Frisch elasticity of labor supply. For instance, low-income workers may have lower Frisch elasticities because they may have fewer job opportunities or may face greater financial constraints that make it harder for them to adjust their labor supply in response to wage changes. Women may also have lower Frisch elasticities than men due to differences in labor market opportunities and social norms surrounding work and family. Finally, older workers may have lower Frisch elasticities than younger workers because they may have stronger preferences for leisure time or may be less willing or able to retrain for new jobs. Values. Values of the Frisch elasticity vary depending on the population being analyzed. However, it is generally agreed that the Frisch elasticity is positive, meaning that an increase in wages leads to an increase in labor supply. In addition to the positive relationship between wages and labor supply, it is important to note that the magnitude of the Frisch elasticity can provide insights into the behavior of workers. A Frisch elasticity of 0 indicates that workers do not respond to changes in wages, while a Frisch elasticity of 1 indicates that workers are highly responsive to changes in wages. The magnitude of the Frisch elasticity is typically between 0 and 1, indicating that the increase in labor supply is less than proportional to the increase in wages. For example, if the Frisch elasticity is 0.5, a 10% increase in wages would lead to a 5% increase in labor supply. In other words, workers would increase their hours worked by 5% in response to a 10% increase in wages. International differences. International differences in the Frisch elasticity of labor supply can provide insights into the impact of social welfare policies on the labor market. Studies have shown that countries with lower levels of social welfare provision tend to have higher Frisch elasticities. This is because workers in countries with more limited social welfare benefits may have fewer alternatives to working and may be more willing to supply labor even when wages are low. In contrast, workers in countries with more comprehensive social welfare systems may have more options and be less likely to work in low-wage jobs. Moreover, international differences in the Frisch elasticity of labor supply can reflect broader differences in labor market institutions and policies. Countries with more flexible labor markets and weaker labor protections may have higher Frisch elasticities, as workers may have fewer protections against job loss and may be more willing to work even when wages are low. 
In contrast, countries with stronger labor protections may have lower Frisch elasticities, as workers may be less willing to accept low-wage jobs or may have greater bargaining power to negotiate for higher wages. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "FE=\\frac{d ln({h_t})}{d ln({w_t})}\\Biggl|_{\\lambda_t = const}" } ]
https://en.wikipedia.org/wiki?curid=6482379
648311
Algebraic surface
Zero set of a polynomial in three variables In mathematics, an algebraic surface is an algebraic variety of dimension two. In the case of geometry over the field of complex numbers, an algebraic surface has complex dimension two (as a complex manifold, when it is non-singular) and so of dimension four as a smooth manifold. The theory of algebraic surfaces is much more complicated than that of algebraic curves (including the compact Riemann surfaces, which are genuine surfaces of (real) dimension two). Many results were obtained in the Italian school of algebraic geometry, and are up to 100 years old. Classification by the Kodaira dimension. In the case of dimension one, varieties are classified by only the topological genus, but, in dimension two, one needs to distinguish the arithmetic genus formula_0 and the geometric genus formula_1 because one cannot distinguish surfaces birationally by the topological genus alone. Then, irregularity is introduced for the classification of varieties. A summary of the results follows (for details on each kind of surface, refer to the corresponding article): Examples of algebraic surfaces include (κ is the Kodaira dimension): For more examples see the list of algebraic surfaces. The first five examples are in fact birationally equivalent. That is, for example, a cubic surface has a function field isomorphic to that of the projective plane, being the rational functions in two indeterminates. The Cartesian product of two curves also provides examples. Birational geometry of surfaces. The birational geometry of algebraic surfaces is rich, because of blowing up (also known as a monoidal transformation), under which a point is replaced by the "curve" of all limiting tangent directions coming into it (a projective line). Certain curves may also be blown "down", but there is a restriction (self-intersection number must be −1). Castelnuovo's Theorem. One of the fundamental theorems for the birational geometry of surfaces is Castelnuovo's theorem. This states that any birational map between algebraic surfaces is given by a finite sequence of blowups and blowdowns. Properties. The Nakai criterion says that: A divisor "D" on a surface "S" is ample if and only if "D"2 &gt; 0 and "D•C" &gt; 0 for every irreducible curve "C" on "S". Ample divisors have the nice property of being pullbacks of a hyperplane bundle on projective space, whose properties are very well known. Let formula_2 be the abelian group consisting of all the divisors on "S". Then, due to the intersection theorem, formula_3 is viewed as a quadratic form. Let formula_4; then formula_5 becomes the numerical equivalence class group of "S", and formula_6 becomes a quadratic form on formula_7, where formula_8 is the image of a divisor "D" on "S". (In what follows, the image formula_8 is abbreviated to "D".) For an ample line bundle "H" on "S", the definition formula_9 is used in the surface version of the Hodge index theorem: for formula_10, i.e. the restriction of the intersection form to formula_11 is a negative definite quadratic form. This theorem is proven using the Nakai criterion and the Riemann-Roch theorem for surfaces. The Hodge index theorem is used in Deligne's proof of the Weil conjecture. Basic results on algebraic surfaces include the Hodge index theorem, and the division into five groups of birational equivalence classes called the classification of algebraic surfaces. 
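The Hodge index theorem just stated can be checked by hand on the simplest surfaces. The following numpy sketch (an illustration added here, not part of the original article) takes "S" = P1 × P1, whose numerical class group is generated by the two rulings with intersection matrix [[0,1],[1,0]], picks the ample class "H" = f1 + f2, and verifies that the intersection form is negative definite on the orthogonal complement of "H":

```python
# Illustrative check of the Hodge index theorem on S = P^1 x P^1.
# Num(S) has basis f1, f2 (the two rulings) with f1.f1 = f2.f2 = 0 and f1.f2 = 1.
import numpy as np

Q = np.array([[0, 1],
              [1, 0]])          # intersection form on Num(S)

# Signature (1, 1): one positive and one negative eigenvalue.
print(np.linalg.eigvalsh(Q))    # -> [-1.  1.]

H = np.array([1, 1])            # H = f1 + f2 is ample; H.H = 2 > 0
D = np.array([1, -1])           # D = f1 - f2 spans H^perp, since D.H = 0
assert H @ Q @ H > 0
assert D @ Q @ H == 0
print(D @ Q @ D)                # -> -2 < 0: the form is negative definite on H^perp
```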
The "general type" class, of Kodaira dimension 2, is very large (a non-singular surface of degree 5 or larger in P3 lies in it, for example). There are essentially three Hodge number invariants of a surface. Of those, "h"1,0 was classically called the irregularity and denoted by "q"; and "h"2,0 was called the geometric genus "p""g". The third, "h"1,1, is not a birational invariant, because blowing up can add whole curves, with classes in "H"1,1. It is known that Hodge cycles are algebraic and that algebraic equivalence coincides with homological equivalence, so that "h"1,1 is an upper bound for ρ, the rank of the Néron-Severi group. The arithmetic genus "p""a" is the difference geometric genus − irregularity. This explains why the irregularity got its name, as a kind of 'error term'. Riemann-Roch theorem for surfaces. The Riemann-Roch theorem for surfaces was first formulated by Max Noether. The families of curves on surfaces can be classified, in a sense, and give rise to much of their interesting geometry.
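For reference, the statement of the theorem named in this section is not spelled out above; in its standard modern form (a well-known formula, added here only as a reminder), for a divisor D on a smooth projective surface S it reads:

```latex
\chi\bigl(\mathcal{O}_S(D)\bigr) \;=\; \chi(\mathcal{O}_S) + \tfrac{1}{2}\, D\cdot\bigl(D - K_S\bigr),
\qquad \chi(\mathcal{O}_S) = 1 - q + p_g ,
```

where K_S is the canonical divisor, q the irregularity and p_g the geometric genus discussed above; combined with Noether's formula χ(O_S) = (K_S·K_S + e(S))/12, with e(S) the topological Euler characteristic, this recovers the classical statements.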
[ { "math_id": 0, "text": "p_a" }, { "math_id": 1, "text": "p_g" }, { "math_id": 2, "text": "\\mathcal{D}(S)" }, { "math_id": 3, "text": "\\mathcal{D}(S)\\times\\mathcal{D}(S)\\rightarrow\\mathbb{Z}:(X,Y)\\mapsto X\\cdot Y" }, { "math_id": 4, "text": "\\mathcal{D}_0(S):=\\{D\\in\\mathcal{D}(S)|D\\cdot X=0,\\text{for all } X\\in\\mathcal{D}(S)\\}" }, { "math_id": 5, "text": "\\mathcal{D}/\\mathcal{D}_0(S):=Num(S)" }, { "math_id": 6, "text": "Num(S)\\times Num(S)\\mapsto\\mathbb{Z}=(\\bar{D},\\bar{E})\\mapsto D\\cdot E" }, { "math_id": 7, "text": "Num(S)" }, { "math_id": 8, "text": "\\bar{D}" }, { "math_id": 9, "text": "\\{H\\}^\\perp:=\\{D\\in Num(S)|D\\cdot H=0\\}." }, { "math_id": 10, "text": "D\\in\\{\\{H\\}^\\perp|D\\ne0\\}, D\\cdot D < 0" }, { "math_id": 11, "text": "\\{H\\}^\\perp" } ]
https://en.wikipedia.org/wiki?curid=648311
648326
Quantum Zeno effect
Quantum measurement phenomenon The quantum Zeno effect (also known as the Turing paradox) is a feature of quantum-mechanical systems allowing a particle's time evolution to be slowed down by measuring it frequently enough with respect to some chosen measurement setting. Sometimes this effect is interpreted as "a system cannot change while you are watching it". One can "freeze" the evolution of the system by measuring it frequently enough in its known initial state. The meaning of the term has since expanded, leading to a more technical definition, in which time evolution can be suppressed not only by measurement: the quantum Zeno effect is the suppression of unitary time evolution in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, among other factors. As an outgrowth of study of the quantum Zeno effect, it has become clear that applying a series of sufficiently strong and fast pulses with appropriate symmetry can also "decouple" a system from its decohering environment. The first rigorous and general derivation of the quantum Zeno effect was presented in 1974 by Degasperis, Fonda, and Ghirardi, although it had previously been described by Alan Turing. The comparison with Zeno's paradox is due to a 1977 article by Baidyanath Misra &amp; E. C. George Sudarshan.The name comes by analogy to Zeno's arrow paradox, which states that because an arrow in flight is not seen to move during any single instant, it cannot possibly be moving at all. In the quantum Zeno effect an unstable state seems frozen – to not 'move' – due to a constant series of observations. According to the reduction postulate, each measurement causes the wavefunction to collapse to an eigenstate of the measurement basis. In the context of this effect, an "observation" can simply be the "absorption" of a particle, without the need of an observer in any conventional sense. However, there is controversy over the interpretation of the effect, sometimes referred to as the "measurement problem" in traversing the interface between microscopic and macroscopic objects. Another crucial problem related to the effect is strictly connected to the time–energy indeterminacy relation (part of the indeterminacy principle). If one wants to make the measurement process more and more frequent, one has to correspondingly decrease the time duration of the measurement itself. But the request that the measurement last only a very short time implies that the energy spread of the state in which reduction occurs becomes increasingly large. However, the deviations from the exponential decay law for small times is crucially related to the inverse of the energy spread, so that the region in which the deviations are appreciable shrinks when one makes the measurement process duration shorter and shorter. An explicit evaluation of these two competing requests shows that it is inappropriate, without taking into account this basic fact, to deal with the actual occurrence and emergence of Zeno's effect. Closely related (and sometimes not distinguished from the quantum Zeno effect) is the "watchdog effect", in which the time evolution of a system is affected by its continuous coupling to the environment. Description. Unstable quantum systems are predicted to exhibit a short-time deviation from the exponential decay law. 
This universal phenomenon has led to the prediction that frequent measurements during this nonexponential period could inhibit decay of the system, one form of the quantum Zeno effect. Subsequently, it was predicted that measurements applied more slowly could also "enhance" decay rates, a phenomenon known as the quantum anti-Zeno effect. In quantum mechanics, the interaction mentioned is called "measurement" because its result can be interpreted in terms of classical mechanics. Frequent measurement prohibits the transition. It can be a transition of a particle from one half-space to another (which could be used for an atomic mirror in an atomic nanoscope) as in the time-of-arrival problem, a transition of a photon in a waveguide from one mode to another, and it can be a transition of an atom from one quantum state to another. It can be a transition from the subspace without decoherent loss of a qubit to a state with a qubit lost in a quantum computer. In this sense, for the qubit correction, it is sufficient to determine whether the decoherence has already occurred or not. All these can be considered as applications of the Zeno effect. By its nature, the effect appears only in systems with distinguishable quantum states, and hence is inapplicable to classical phenomena and macroscopic bodies. The mathematician Robin Gandy recalled Turing's formulation of the quantum Zeno effect in a letter to fellow mathematician Max Newman, shortly after Turing's death: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[I]t is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable "N" times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as "N" tends to infinity; that is, that continual observations will prevent motion. Alan and I tackled one or two theoretical physicists with this, and they rather pooh-poohed it by saying that continual observation is not possible. But there is nothing in the standard books (e.g., Dirac's) to this effect, so that at least the paradox shows up an inadequacy of Quantum Theory as usually presented. As a result of Turing's suggestion, the quantum Zeno effect is also sometimes known as the "Turing paradox". The idea is implicit in the early work of John von Neumann on the mathematical foundations of quantum mechanics, and in particular the rule sometimes called the "reduction postulate". It was later shown that the quantum Zeno effect of a single system is equivalent to the indetermination of the quantum state of a single system. Various realizations and general definition. The treatment of the Zeno effect as a paradox is not limited to the processes of quantum decay. In general, the term "Zeno effect" is applied to various transitions, and sometimes these transitions may be very different from a mere "decay" (whether exponential or non-exponential). One realization refers to the observation of an object (Zeno's arrow, or any quantum particle) as it leaves some region of space. In the 20th century, the trapping (confinement) of a particle in some region by its observation outside the region was considered as nonsensical, indicating some non-completeness of quantum mechanics. Even as late as 2001, confinement by absorption was considered as a paradox. Later, similar effects of the suppression of Raman scattering was considered an expected "effect", not a paradox at all. 
The absorption of a photon at some wavelength, the release of a photon (for example one that has escaped from some mode of a fiber), or even the relaxation of a particle as it enters some region, are all processes that can be interpreted as measurement. Such a measurement suppresses the transition, and is called the Zeno effect in the scientific literature. In order to cover all of these phenomena (including the original effect of suppression of quantum decay), the Zeno effect can be defined as a class of phenomena in which some transition is suppressed by an interaction – one that allows the interpretation of the resulting state in the terms 'transition did not yet happen' and 'transition has already occurred', or 'The proposition that the evolution of a quantum system is halted' if the state of the system is continuously measured by a macroscopic device to check whether the system is still in its initial state. Periodic measurement of a quantum system. Consider a system in a state formula_0, which is the eigenstate of some measurement operator. Say the system under free time evolution will decay with a certain probability into state formula_1. If measurements are made periodically, with some finite interval between each one, at each measurement, the wave function collapses to an eigenstate of the measurement operator. Between the measurements, the system evolves away from this eigenstate into a superposition state of the states "formula_0" and "formula_1". When the superposition state is measured, it will again collapse, either back into state "formula_0" as in the first measurement, or away into state "formula_1". However, its probability of collapsing into state "formula_1" after a very short amount of time formula_2 is proportional to formula_3, since probabilities are proportional to squared amplitudes, and amplitudes behave linearly. Thus, in the limit of a large number of short intervals, with a measurement at the end of every interval, the probability of making the transition to "formula_1" goes to zero. According to decoherence theory, the collapse of the wave function is not a discrete, instantaneous event. A "measurement" is equivalent to strongly coupling the quantum system to the noisy thermal environment for a brief period of time, and continuous strong coupling is equivalent to frequent "measurement". The time it takes for the wave function to "collapse" is related to the decoherence time of the system when coupled to the environment. The stronger the coupling is, and the shorter the decoherence time, the faster it will collapse. So in the decoherence picture, a perfect implementation of the quantum Zeno effect corresponds to the limit where a quantum system is continuously coupled to the environment, and where that coupling is infinitely strong, and where the "environment" is an infinitely large source of thermal randomness. Experiments and discussion. Experimentally, strong suppression of the evolution of a quantum system due to environmental coupling has been observed in a number of microscopic systems. In 1989, David J. Wineland and his group at NIST observed the quantum Zeno effect for a two-level atomic system that was interrogated during its evolution. Approximately 5,000 ions were stored in a cylindrical Penning trap and laser-cooled to below 250 mK. A resonant RF pulse was applied, which, if applied alone, would cause the entire ground-state population to migrate into an excited state. 
After the pulse was applied, the ions were monitored for photons emitted due to relaxation. The ion trap was then regularly "measured" by applying a sequence of ultraviolet pulses during the RF pulse. As expected, the ultraviolet pulses suppressed the evolution of the system into the excited state. The results were in good agreement with theoretical models. In 2001, Mark G. Raizen and his group at the University of Texas at Austin observed the quantum Zeno effect for an unstable quantum system, as originally proposed by Sudarshan and Misra. They also observed an anti-Zeno effect. Ultracold sodium atoms were trapped in an accelerating optical lattice, and the loss due to tunneling was measured. The evolution was interrupted by reducing the acceleration, thereby stopping quantum tunneling. The group observed suppression or enhancement of the decay rate, depending on the regime of measurement. In 2015, Mukund Vengalattore and his group at Cornell University demonstrated a quantum Zeno effect as the modulation of the rate of quantum tunnelling in an ultracold lattice gas by the intensity of light used to image the atoms. The quantum Zeno effect is used in commercial atomic magnetometers and proposed to be part of birds' magnetic compass sensory mechanism (magnetoreception). It is still an open question how closely one can approach the limit of an infinite number of interrogations due to the Heisenberg uncertainty involved in shorter measurement times. It has been shown, however, that measurements performed at a finite frequency can yield arbitrarily strong Zeno effects. In 2006, Streed "et al." at MIT observed the dependence of the Zeno effect on measurement pulse characteristics. The interpretation of experiments in terms of the "Zeno effect" helps describe the origin of a phenomenon. Nevertheless, such an interpretation does not bring any fundamentally new features beyond those described by the Schrödinger equation of the quantum system. Moreover, in detailed descriptions of experiments with the "Zeno effect", especially in the limit of a high frequency of measurements (high efficiency of suppression of the transition, or high reflectivity of a ridged mirror), the measurements usually do not behave as expected for an idealized measurement. It was shown that the quantum Zeno effect persists in the many-worlds and relative-states interpretations of quantum mechanics. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "t^2" } ]
https://en.wikipedia.org/wiki?curid=648326
64835672
Set identification
In statistics and econometrics, set identification (or partial identification) extends the concept of identifiability (or "point identification") in statistical models to environments where the model and the distribution of observable variables are not sufficient to determine a unique value for the model parameters, but instead constrain the parameters to lie in a strict subset of the parameter space. Statistical models that are set (or partially) identified arise in a variety of settings in economics, including game theory and the Rubin causal model. Unlike approaches that deliver point-identification of the model parameters, methods from the literature on partial identification are used to obtain set estimates that are valid under weaker modelling assumptions. History. Early works containing the main ideas of set identification included and . However, the methods were significantly developed and promoted by Charles Manski, beginning with and . Partial identification continues to be a major theme in research in econometrics. named partial identification as an example of theoretical progress in the econometrics literature, and list partial identification as “one of the most prominent recent themes in econometrics.” Definition. Let formula_0 denote a vector of latent variables, let formula_1 denote a vector of observed (possibly endogenous) explanatory variables, and let formula_2 denote a vector of observed endogenous outcome variables. A structure is a pair formula_3, where formula_4 represents a collection of conditional distributions, and formula_5 is a structural function such that formula_6 for all realizations formula_7 of the random vectors formula_8. A model is a collection of admissible (i.e. possible) structures formula_9. Let formula_10 denote the collection of conditional distributions of formula_11 consistent with the structure formula_12. The admissible structures formula_12 and formula_13 are said to be observationally equivalent if formula_14. Let formula_15 denotes the true (i.e. data-generating) structure. The model is said to be point-identified if for every formula_16 we have formula_17. More generally, the model is said to be set (or partially) identified if there exists at least one admissible formula_18 such that formula_19. The identified set of structures is the collection of admissible structures that are observationally equivalent to formula_15. In most cases the definition can be substantially simplified. In particular, when formula_20 is independent of formula_21 and has a known (up to some finite-dimensional parameter) distribution, and when formula_5 is known up to some finite-dimensional vector of parameters, each structure formula_12 can be characterized by a finite-dimensional parameter vector formula_22. If formula_23 denotes the true (i.e. data-generating) vector of parameters, then the identified set, often denoted as formula_24, is the set of parameter values that are observationally equivalent to formula_25. Example: missing data. This example is due to . Suppose there are two binary random variables, "Y" and "Z". The econometrician is interested in formula_26. There is a missing data problem, however: "Y" can only be observed if formula_27. By the law of total probability, formula_28 The only unknown object is formula_29, which is constrained to lie between 0 and 1. Therefore, the identified set is formula_30 Given the missing data constraint, the econometrician can only say that formula_31. This makes use of all available information. 
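The interval form of this identified set can be made concrete with a short numerical sketch; the probabilities used below are hypothetical values chosen for illustration, not estimates from any data set.

```python
# Hypothetical observed quantities (illustrative numbers only): the sampling
# process reveals P(Y=1 | Z=1) and P(Z=1), while P(Y=1 | Z=0) is completely
# unknown and may be anywhere in [0, 1].
p_y1_given_z1 = 0.8
p_z1 = 0.7
p_z0 = 1.0 - p_z1

# Law of total probability: P(Y=1) = P(Y=1|Z=1) P(Z=1) + P(Y=1|Z=0) P(Z=0).
lower = p_y1_given_z1 * p_z1 + 0.0 * p_z0   # worst case: P(Y=1 | Z=0) = 0
upper = p_y1_given_z1 * p_z1 + 1.0 * p_z0   # best case:  P(Y=1 | Z=0) = 1

print(f"Identified set for P(Y=1): [{lower:.2f}, {upper:.2f}]")   # [0.56, 0.86]
```

With these hypothetical inputs, the data alone only bound P(Y=1) to the interval [0.56, 0.86]; any tighter statement would require additional assumptions about the missing observations.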
Statistical inference. Set estimation cannot rely on the usual tools for statistical inference developed for point estimation. A literature in statistics and econometrics studies methods for statistical inference in the context of set-identified models, focusing on constructing confidence intervals or confidence regions with appropriate properties. For example, a method developed by constructs confidence regions that cover the identified set with a given probability. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " U \\in \\mathcal{U} \\subseteq \\mathbb{R}^{d_{u}} " }, { "math_id": 1, "text": " Z \\in \\mathcal{Z} \\subseteq \\mathbb{R}^{d_{z}} " }, { "math_id": 2, "text": " Y \\in \\mathcal{Y} \\subseteq \\mathbb{R}^{d_{y}} " }, { "math_id": 3, "text": " s= (h,\\mathcal{P}_{U\\mid Z})" }, { "math_id": 4, "text": " \\mathcal{P}_{U\\mid Z} " }, { "math_id": 5, "text": " h " }, { "math_id": 6, "text": " h(y,z,u) = 0 " }, { "math_id": 7, "text": " (y,z,u) " }, { "math_id": 8, "text": " (Y,Z,U) " }, { "math_id": 9, "text": " s " }, { "math_id": 10, "text": " \\mathcal{P}_{Y\\mid Z}(s) " }, { "math_id": 11, "text": " Y \\mid Z " }, { "math_id": 12, "text": " s " }, { "math_id": 13, "text": " s' " }, { "math_id": 14, "text": " \\mathcal{P}_{Y\\mid Z}(s) = \\mathcal{P}_{Y\\mid Z}(s')" }, { "math_id": 15, "text": " s^\\star " }, { "math_id": 16, "text": " s \\neq s' " }, { "math_id": 17, "text": " \\mathcal{P}_{Y\\mid Z}(s) \\neq \\mathcal{P}_{Y\\mid Z}(s^\\star)" }, { "math_id": 18, "text": " s\\neq s^\\star " }, { "math_id": 19, "text": " \\mathcal{P}_{Y\\mid Z}(s)\\neq \\mathcal{P}_{Y\\mid Z}(s^\\star) " }, { "math_id": 20, "text": " U " }, { "math_id": 21, "text": " Z " }, { "math_id": 22, "text": " \\theta \\in \\Theta \\subset \\mathbb{R}^{d_{\\theta}}" }, { "math_id": 23, "text": " \\theta_0 " }, { "math_id": 24, "text": " \\Theta_{I} \\subset \\Theta " }, { "math_id": 25, "text": "\\theta_0" }, { "math_id": 26, "text": "\\mathrm P(Y = 1)" }, { "math_id": 27, "text": "Z = 1" }, { "math_id": 28, "text": "\\mathrm P(Y = 1) = \\mathrm P(Y = 1 \\mid Z = 1) \\mathrm P(Z = 1) + \\mathrm P(Y = 1 \\mid Z = 0) \\mathrm P(Z = 0)." }, { "math_id": 29, "text": "\\mathrm P(Y = 1 \\mid Z = 0)" }, { "math_id": 30, "text": "\\Theta_I = \\{ p \\in [0, 1] : p = \\mathrm P(Y = 1 \\mid Z = 1) \\mathrm P(Z = 1) + q \\mathrm P(Z = 0), \\text{ for some } q \\in [0,1]\\}." }, { "math_id": 31, "text": "\\mathrm P(Y = 1) \\in \\Theta_I" } ]
https://en.wikipedia.org/wiki?curid=64835672
648409
Glomerulus (kidney)
Functional unit of nephron The glomerulus (pl.: glomeruli) is a network of small blood vessels (capillaries) known as a "tuft", located at the beginning of a nephron in the kidney. Each of the two kidneys contains about one million nephrons. The tuft is structurally supported by the mesangium (the space between the blood vessels), composed of intraglomerular mesangial cells. The blood is filtered across the capillary walls of this tuft through the glomerular filtration barrier, which yields its filtrate of water and soluble substances to a cup-like sac known as Bowman's capsule. The filtrate then enters the renal tubule of the nephron. The glomerulus receives its blood supply from an afferent arteriole of the renal arterial circulation. Unlike most capillary beds, the glomerular capillaries exit into efferent arterioles rather than venules. The resistance of the efferent arterioles causes sufficient hydrostatic pressure within the glomerulus to provide the force for ultrafiltration. The glomerulus and its surrounding Bowman's capsule constitute a renal corpuscle, the basic filtration unit of the kidney. The rate at which blood is filtered through all of the glomeruli, and thus the measure of the overall kidney function, is the glomerular filtration rate. Structure. The glomerulus is a tuft of capillaries located within Bowman's capsule within the kidney. Glomerular mesangial cells structurally support the tufts. Blood enters the capillaries of the glomerulus by a single arteriole called an afferent arteriole and leaves by an efferent arteriole. The capillaries consist of a tube lined by endothelial cells with a central lumen. The gaps between these endothelial cells are called fenestrae. The walls have a unique structure: there are pores between the cells that allow water and soluble substances to exit and after passing through the glomerular basement membrane and between podocyte foot processes, enter the capsule as ultrafiltrate. Lining. Capillaries of the glomerulus are lined by endothelial cells. These contain numerous pores—also called fenestrae—, 50–100 nm in diameter. Unlike those of other capillaries with fenestrations, these fenestrations are not spanned by diaphragms. They allow for the filtration of fluid, blood plasma solutes and protein, at the same time preventing the filtration of red blood cells, white blood cells, and platelets. The glomerulus has a glomerular basement membrane sandwiched between the glomerular capillaries and the podocytes. It consists mainly of laminins, type IV collagen, agrin, and nidogen, which are synthesized and secreted by both endothelial cells and podocytes. The glomerular basement membrane is 250–400 nm in thickness, which is thicker than basement membranes of other tissue. It is a barrier to blood proteins such as albumin and globulin. The part of the podocyte in contact with the glomerular basement membrane is called a "podocyte foot process" or "pedicle" (Fig. 3): there are gaps between the foot processes through which the filtrate flows into Bowman's capsule. The space between adjacent podocyte foot processes is spanned by slit diaphragms consisting of a mat of proteins, including podocin and nephrin. In addition, foot processes have a negatively charged coat (glycocalyx) that repels negatively charged molecules such as serum albumin. Mesangium. The mesangium is a space which is continuous with the smooth muscles of the arterioles. It is outside the capillary lumen but surrounded by capillaries. 
It is in the middle (meso) between the capillaries (angis). It is contained by the basement membrane, which surrounds both the capillaries and the mesangium. The mesangium contains mainly: Blood supply. The glomerulus receives its blood supply from an afferent arteriole of the renal arterial circulation. Unlike most capillary beds, the glomerular capillaries exit into efferent arterioles rather than venules. The resistance of the efferent arterioles causes sufficient hydrostatic pressure within the glomerulus to provide the force for ultrafiltration. Blood exits the glomerular capillaries by an efferent arteriole instead of a venule, as is seen in the majority of capillary systems (Fig. 4). This provides tighter control over the blood flow through the glomerulus, since arterioles dilate and constrict more readily than venules, owing to their thick circular smooth muscle layer (tunica media). The blood exiting the efferent arteriole enters a renal venule, which in turn enters a renal interlobular vein and then into the renal vein. Cortical nephrons near the corticomedullary junction (15% of all nephrons) are called juxtamedullary nephrons. The blood exiting the efferent arterioles of these nephrons enter the vasa recta, which are straight capillary branches that deliver blood to the renal medulla. These vasa recta run adjacent to the descending and ascending loop of Henle and participate in the maintenance of the medullary countercurrent exchange system. Filtrate drainage. The filtrate that has passed through the three-layered filtration unit enters Bowman's capsule. From there, it flows into the renal tubule—the nephron—which follows a U-shaped path to the collecting ducts, finally exiting into a renal calyx as urine. Function. Filtration. The main function of the glomerulus is to filter plasma to produce glomerular filtrate, which passes down the length of the nephron tubule to form urine. The rate at which the glomerulus produces filtrate from plasma (the glomerular filtration rate) is much higher than in systemic capillaries because of the particular anatomical characteristics of the glomerulus. Unlike systemic capillaries, which receive blood from high-resistance arterioles and drain to low-resistance venules, glomerular capillaries are connected in both ends to high-resistance arterioles: the afferent arteriole, and the efferent arteriole. This arrangement of two arterioles in series determines the high hydrostatic pressure on glomerular capillaries, which is one of the forces that favor filtration to Bowman's capsule. If a substance has passed through the glomerular capillary endothelial cells, glomerular basement membrane, and podocytes, then it enters the lumen of the tubule and is known as glomerular filtrate. Otherwise, it exits the glomerulus through the efferent arteriole and continues circulation as discussed below and as shown on the picture. Permeability. The structures of the layers determine their permeability-selectivity ("permselectivity"). The factors that influence permselectivity are the negative charge of the basement membrane and the podocytic epithelium, as well as the effective pore size of the glomerular wall (8 nm). As a result, large and/or negatively charged molecules will pass through far less frequently than small and/or positively charged ones. For instance, small ions such as sodium and potassium pass freely, while larger proteins, such as hemoglobin and albumin have practically no permeability at all. 
The oncotic pressure on glomerular capillaries is one of the forces that resist filtration. Because large and negatively charged proteins have a low permeability, they cannot filtrate easily to Bowman's capsule. Therefore, the concentration of these proteins tends to increase as the glomerular capillaries filtrate plasma, increasing the oncotic pressure along the glomerular capillary. Starling equation. The rate of filtration from the glomerulus to Bowman's capsule is determined (as in systemic capillaries) by the Starling equation: formula_0 Blood pressure regulation. The walls of the afferent arteriole contain specialized smooth muscle cells that synthesize renin. These juxtaglomerular cells play a major role in the renin–angiotensin system, which helps regulate blood volume and pressure. Clinical significance. Damage to the glomerulus by disease can allow passage through the glomerular filtration barrier of red blood cells, white blood cells, platelets, and blood proteins such as albumin and globulin. Underlying causes for glomerular injury can be inflammatory, toxic or metabolic. These can be seen in the urine (urinalysis) on microscopic and chemical (dipstick) examination. Glomerular diseases include diabetic kidney disease, glomerulonephritis (inflammation), glomerulosclerosis (hardening of the glomeruli), and IgA nephropathy. Due to the connection between the glomerulus and the glomerular filtration rate, the glomerular filtration rate is of clinical significance when suspecting a kidney disease, or when following up a case with known kidney disease, or when risking a development of renal damage such as beginning medications with known nephrotoxicity. History. In 1666, Italian biologist and anatomist Marcello Malpighi first described the glomeruli and demonstrated their continuity with the renal vasculature (281,282). About 175 years later, surgeon and anatomist William Bowman elucidated in detail the capillary architecture of the glomerulus and the continuity between its surrounding capsule and the proximal tubule. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
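As an illustrative aside, the Starling relation quoted above can be evaluated numerically. The sketch below is added here for illustration only; the filtration coefficient and the pressure values are typical textbook-style figures assumed for the example, not values taken from this article.

```python
# Assumed, textbook-style values plugged into the Starling relation
#   GFR = Kf * ((P_gc - P_bc) - (pi_gc - pi_bc))
Kf    = 12.5   # filtration coefficient, mL/min per mmHg (assumed)
P_gc  = 55.0   # glomerular capillary hydrostatic pressure, mmHg (assumed)
P_bc  = 15.0   # hydrostatic pressure in Bowman's capsule, mmHg (assumed)
pi_gc = 30.0   # oncotic pressure in the glomerular capillaries, mmHg (assumed)
pi_bc = 0.0    # oncotic pressure of the nearly protein-free filtrate, mmHg

net_filtration_pressure = (P_gc - P_bc) - (pi_gc - pi_bc)   # 10 mmHg here
GFR = Kf * net_filtration_pressure                          # 125 mL/min here
print(net_filtration_pressure, GFR)
```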
[ { "math_id": 0, "text": "\\ GFR = K_\\mathrm{f} ((P_\\mathrm{gc} - P_\\mathrm{bc}) - (\\pi_\\mathrm{gc} - \\pi_\\mathrm{bc}))" } ]
https://en.wikipedia.org/wiki?curid=648409
648614
Exponential integral
Special function defined by an integral In mathematics, the exponential integral Ei is a special function on the complex plane. It is defined as one particular definite integral of the ratio between an exponential function and its argument. Definitions. For real non-zero values of "x", the exponential integral Ei("x") is defined as formula_0 The Risch algorithm shows that Ei is not an elementary function. The definition above can be used for positive values of "x", but the integral has to be understood in terms of the Cauchy principal value due to the singularity of the integrand at zero. For complex values of the argument, the definition becomes ambiguous due to branch points at 0 and formula_1. Instead of Ei, the following notation is used, formula_2 For positive values of "x", we have formula_3. In general, a branch cut is taken on the negative real axis and "E"1 can be defined by analytic continuation elsewhere on the complex plane. For positive values of the real part of formula_4, this can be written formula_5 The behaviour of "E"1 near the branch cut can be seen by the following relation: formula_6 Properties. Several properties of the exponential integral below, in certain cases, allow one to avoid its explicit evaluation through the definition above. Convergent series. For real or complex arguments off the negative real axis, formula_9 can be expressed as formula_10 where formula_11 is the Euler–Mascheroni constant. The sum converges for all complex formula_4, and we take the usual value of the complex logarithm having a branch cut along the negative real axis. This formula can be used to compute formula_12 with floating point operations for real formula_13 between 0 and 2.5. For formula_14, the result is inaccurate due to cancellation. A faster converging series was found by Ramanujan: formula_15 Asymptotic (divergent) series. Unfortunately, the convergence of the series above is slow for arguments of larger modulus. For example, more than 40 terms are required to get an answer correct to three significant figures for formula_16. However, for positive values of x, there is a divergent series approximation that can be obtained by integrating formula_17 by parts: formula_18 The relative error of the approximation above is plotted on the figure to the right for various values of formula_19, the number of terms in the truncated sum (formula_20 in red, formula_21 in pink). Asymptotics beyond all orders. Using integration by parts, we can obtain an explicit formulaformula_22For any fixed formula_4, the absolute value of the error term formula_23 decreases, then increases. The minimum occurs at formula_24, at which point formula_25. This bound is said to be "asymptotics beyond all orders". Exponential and logarithmic behavior: bracketing. From the two series suggested in previous subsections, it follows that formula_7 behaves like a negative exponential for large values of the argument and like a logarithm for small values. For positive real values of the argument, formula_7 can be bracketed by elementary functions as follows: formula_26 The left-hand side of this inequality is shown in the graph to the left in blue; the central part formula_12 is shown in black and the right-hand side is shown in red. Definition by Ein. Both formula_8 and formula_7 can be written more simply using the entire function formula_27 defined as formula_28 (note that this is just the alternating series in the above definition of formula_7). Then we have formula_29 formula_30 Relation with other functions. 
Kummer's equation formula_31 is usually solved by the confluent hypergeometric functions formula_32 and formula_33 But when formula_34 and formula_35 that is, formula_36 we have formula_37 for all "z". A second solution is then given by E1(−"z"). In fact, formula_38 with the derivative evaluated at formula_39 Another connexion with the confluent hypergeometric functions is that "E1" is an exponential times the function "U"(1,1,"z"): formula_40 The exponential integral is closely related to the logarithmic integral function li("x") by the formula formula_41 for non-zero real values of formula_42. Generalization. The exponential integral may also be generalized to formula_43 which can be written as a special case of the upper incomplete gamma function: formula_44 The generalized form is sometimes called the Misra function formula_45, defined as formula_46 Many properties of this generalized form can be found in the NIST Digital Library of Mathematical Functions. Including a logarithm defines the generalized integro-exponential function formula_47 The indefinite integral: formula_48 is similar in form to the ordinary generating function for formula_49, the number of divisors of formula_50: formula_51 Derivatives. The derivatives of the generalised functions formula_52 can be calculated by means of the formula formula_53 Note that the function formula_54 is easy to evaluate (making this recursion useful), since it is just formula_55. Exponential integral of imaginary argument. If formula_4 is imaginary, it has a nonnegative real part, so we can use the formula formula_56 to get a relation with the trigonometric integrals formula_57 and formula_58: formula_59 The real and imaginary parts of formula_60 are plotted in the figure to the right with black and red curves. Approximations. There have been a number of approximations for the exponential integral function. These include: Inverse function of the Exponential Integral. We can express the Inverse function of the exponential integral in power series form: formula_68 where formula_69 is the Ramanujan–Soldner constant and formula_70 is polynomial sequence defined by the following recurrence relation: formula_71 For formula_72, formula_73 and we have the formula : formula_74 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
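As a numerical cross-check of the convergent series and of the truncated asymptotic expansion given above, the following sketch can be used. It is an illustration added here; the helper functions are ad hoc, and SciPy's exp1 is assumed to be available only to supply reference values.

```python
import numpy as np
from scipy.special import exp1   # reference values, assuming SciPy is available

def e1_series(x, terms=60):
    """Convergent series: E1(x) = -gamma - ln(x) - sum_{k>=1} (-x)^k / (k k!)."""
    total = -np.euler_gamma - np.log(x)
    term = 1.0                    # running value of (-x)^k / k!
    for k in range(1, terms + 1):
        term *= -x / k
        total -= term / k
    return total

def e1_asymptotic(x, n_terms=8):
    """Truncated divergent series: E1(x) ~ (e^-x / x) * sum_{n>=0} n! / (-x)^n."""
    s, term = 0.0, 1.0            # running value of n! / (-x)^n
    for n in range(n_terms):
        s += term
        term *= -(n + 1) / x
    return np.exp(-x) / x * s

for x in (0.5, 1.0, 2.0):         # small arguments: the convergent series works well
    print(x, e1_series(x), exp1(x))
for x in (10.0, 20.0):            # large arguments: a few asymptotic terms suffice
    print(x, e1_asymptotic(x), exp1(x))
```

Consistent with the discussion above, the convergent series is used only for small arguments (it suffers from cancellation beyond about 2.5), while the asymptotic series, although divergent, gives good accuracy for large arguments when truncated after a few terms.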
[ { "math_id": 0, "text": " \\operatorname{Ei}(x) = -\\int_{-x}^\\infty \\frac{e^{-t}}t\\,dt = \\int_{-\\infty}^x \\frac{e^t}t\\,dt." }, { "math_id": 1, "text": "\\infty" }, { "math_id": 2, "text": "E_1(z) = \\int_z^\\infty \\frac{e^{-t}}{t}\\, dt,\\qquad|{\\rm Arg}(z)|<\\pi" }, { "math_id": 3, "text": "-E_1(x) = \\operatorname{Ei}(-x)" }, { "math_id": 4, "text": "z" }, { "math_id": 5, "text": "E_1(z) = \\int_1^\\infty \\frac{e^{-tz}}{t}\\, dt = \\int_0^1 \\frac{e^{-z/u}}{u}\\, du ,\\qquad \\Re(z) \\ge 0." }, { "math_id": 6, "text": "\\lim_{\\delta\\to0+} E_1(-x \\pm i\\delta) = -\\operatorname{Ei}(x) \\mp i\\pi,\\qquad x>0." }, { "math_id": 7, "text": "E_1" }, { "math_id": 8, "text": "\\operatorname{Ei}" }, { "math_id": 9, "text": "E_1(z)" }, { "math_id": 10, "text": "E_1(z) = -\\gamma - \\ln z - \\sum_{k=1}^{\\infty} \\frac{(-z)^k}{k\\; k!} \\qquad (\\left| \\operatorname{Arg}(z) \\right| < \\pi)" }, { "math_id": 11, "text": "\\gamma" }, { "math_id": 12, "text": "E_1(x)" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "x > 2.5" }, { "math_id": 15, "text": "{\\rm Ei} (x) = \\gamma + \\ln x + \\exp{(x/2)} \\sum_{n=1}^\\infty \\frac{ (-1)^{n-1} x^n} {n! \\, 2^{n-1}} \\sum_{k=0}^{\\lfloor (n-1)/2 \\rfloor} \\frac{1}{2k+1}" }, { "math_id": 16, "text": "E_1(10)" }, { "math_id": 17, "text": "x e^x E_1(x)" }, { "math_id": 18, "text": "E_1(x)=\\frac{\\exp(-x)} x \\left(\\sum_{n=0}^{N-1} \\frac{n!}{(-x)^n} +O(N!x^{-N}) \\right)" }, { "math_id": 19, "text": "N" }, { "math_id": 20, "text": "N=1" }, { "math_id": 21, "text": "N=5" }, { "math_id": 22, "text": "\\operatorname{Ei}(z) = \\frac{e^{z}} {z} \\left (\\sum _{k=0}^{n} \\frac{k!} {z^{k}} + e_{n}(z)\\right), \\quad e_{n}(z) \\equiv (n + 1)!\\ ze^{-z}\\int _{ -\\infty }^{z} \\frac{e^{t}} {t^{n+2}}\\,dt" }, { "math_id": 23, "text": "|e_n(z)|" }, { "math_id": 24, "text": "n\\sim |z|" }, { "math_id": 25, "text": "\\vert e_{n}(z)\\vert \\leq \\sqrt{\\frac{2\\pi } {\\vert z\\vert }}e^{-\\vert z\\vert }" }, { "math_id": 26, "text": "\n\\frac 1 2 e^{-x}\\,\\ln\\!\\left( 1+\\frac 2 x \\right)\n< E_1(x) < e^{-x}\\,\\ln\\!\\left( 1+\\frac 1 x \\right)\n\\qquad x>0\n" }, { "math_id": 27, "text": "\\operatorname{Ein}" }, { "math_id": 28, "text": "\n\\operatorname{Ein}(z)\n= \\int_0^z (1-e^{-t})\\frac{dt}{t}\n= \\sum_{k=1}^\\infty \\frac{(-1)^{k+1}z^k}{k\\; k!}\n" }, { "math_id": 29, "text": "\nE_1(z) \\,=\\, -\\gamma-\\ln z + {\\rm Ein}(z)\n\\qquad \\left| \\operatorname{Arg}(z) \\right| < \\pi\n" }, { "math_id": 30, "text": "\\operatorname{Ei}(x) \\,=\\, \\gamma+\\ln{x} - \\operatorname{Ein}(-x)\n\\qquad x \\neq 0\n" }, { "math_id": 31, "text": "z\\frac{d^2w}{dz^2} + (b-z)\\frac{dw}{dz} - aw = 0" }, { "math_id": 32, "text": "M(a,b,z)" }, { "math_id": 33, "text": "U(a,b,z)." }, { "math_id": 34, "text": "a=0" }, { "math_id": 35, "text": "b=1," }, { "math_id": 36, "text": "z\\frac{d^2w}{dz^2} + (1-z)\\frac{dw}{dz} = 0" }, { "math_id": 37, "text": "M(0,1,z)=U(0,1,z)=1" }, { "math_id": 38, "text": "E_1(-z)=-\\gamma-i\\pi+\\frac{\\partial[U(a,1,z)-M(a,1,z)]}{\\partial a},\\qquad 0<{\\rm Arg}(z)<2\\pi" }, { "math_id": 39, "text": "a=0." }, { "math_id": 40, "text": "E_1(z)=e^{-z}U(1,1,z)" }, { "math_id": 41, "text": "\\operatorname{li}(e^x) = \\operatorname{Ei}(x)" }, { "math_id": 42, "text": "x " }, { "math_id": 43, "text": "E_n(x) = \\int_1^\\infty \\frac{e^{-xt}}{t^n}\\, dt," }, { "math_id": 44, "text": "E_n(x) =x^{n-1}\\Gamma(1-n,x)." }, { "math_id": 45, "text": "\\varphi_m(x)" }, { "math_id": 46, "text": "\\varphi_m(x)=E_{-m}(x)." 
}, { "math_id": 47, "text": "E_s^j(z)= \\frac{1}{\\Gamma(j+1)}\\int_1^\\infty \\left(\\log t\\right)^j \\frac{e^{-zt}}{t^s}\\,dt." }, { "math_id": 48, "text": " \\operatorname{Ei}(a \\cdot b) = \\iint e^{a b} \\, da \\, db" }, { "math_id": 49, "text": "d(n)" }, { "math_id": 50, "text": "n" }, { "math_id": 51, "text": " \\sum\\limits_{n=1}^{\\infty} d(n)x^{n} = \\sum\\limits_{a=1}^{\\infty} \\sum\\limits_{b=1}^{\\infty} x^{a b}" }, { "math_id": 52, "text": "E_n" }, { "math_id": 53, "text": "\nE_n '(z) = - E_{n-1}(z)\n\\qquad (n=1,2,3,\\ldots)\n" }, { "math_id": 54, "text": "E_0" }, { "math_id": 55, "text": "e^{-z}/z" }, { "math_id": 56, "text": "\nE_1(z) = \\int_1^\\infty\n\\frac{e^{-tz}} t \\, dt\n" }, { "math_id": 57, "text": "\\operatorname{Si}" }, { "math_id": 58, "text": "\\operatorname{Ci}" }, { "math_id": 59, "text": "\nE_1(ix) = i\\left[ -\\tfrac{1}{2}\\pi + \\operatorname{Si}(x)\\right] - \\operatorname{Ci}(x)\n\\qquad (x > 0)\n" }, { "math_id": 60, "text": "\\mathrm{E}_1(ix)" }, { "math_id": 61, "text": "E_1(x) = \\left (A^{-7.7}+B \\right )^{-0.13}," }, { "math_id": 62, "text": "\\begin{align}\nA &= \\ln\\left [\\left (\\frac{0.56146}{x}+0.65\\right)(1+x)\\right] \\\\\nB &= x^4e^{7.7x}(2+x)^{3.7}\n\\end{align}" }, { "math_id": 63, "text": "E_1(x) = \\begin{cases} - \\ln x +\\textbf{a}^T\\textbf{x}_5,&x\\leq1 \\\\ \\frac{e^{-x}} x \\frac{\\textbf{b}^T \\textbf{x}_3}{\\textbf{c}^T\\textbf{x}_3},&x\\geq1 \\end{cases}" }, { "math_id": 64, "text": "\\begin{align}\n\\textbf{a} & \\triangleq [-0.57722, 0.99999, -0.24991, 0.05519, -0.00976, 0.00108]^T \\\\\n\\textbf{b} & \\triangleq[0.26777,8.63476, 18.05902, 8.57333]^T \\\\\n\\textbf{c} & \\triangleq[3.95850, 21.09965, 25.63296, 9.57332]^T \\\\\n\\textbf{x}_k &\\triangleq[x^0,x^1,\\dots, x^k]^T\n\\end{align}" }, { "math_id": 65, "text": "E_1(x) = \\cfrac{e^{-x}}{x+\\cfrac{1}{1+\\cfrac{1}{x+\\cfrac{2}{1+\\cfrac{2}{x+\\cfrac{3}{\\ddots}}}}}}." }, { "math_id": 66, "text": "E_1(x) = \\frac{e^{-x}}{G+(1-G)e^{-\\frac{x}{1-G}}}\\ln\\left[1+\\frac G x -\\frac{1-G}{(h+bx)^2}\\right]," }, { "math_id": 67, "text": "\\begin{align}\nh &= \\frac{1}{1+x\\sqrt{x}}+\\frac{h_{\\infty}q}{1+q} \\\\\nq &=\\frac{20}{47}x^{\\sqrt{\\frac{31}{26}}} \\\\\nh_{\\infty} &= \\frac{(1-G)(G^2-6G+12)}{3G(2-G)^2b} \\\\\nb &=\\sqrt{\\frac{2(1-G)}{G(2-G)}} \\\\\nG &= e^{-\\gamma}\n\\end{align}" }, { "math_id": 68, "text": "\\forall |x| < \\frac{\\mu}{\\ln(\\mu)},\\quad \\mathrm{Ei}^{-1}(x) = \\sum_{n=0}^\\infty \\frac{x^n}{n!} \\frac{P_n(\\ln(\\mu))}{\\mu^n}" }, { "math_id": 69, "text": "\\mu" }, { "math_id": 70, "text": "(P_n)" }, { "math_id": 71, "text": "P_0(x) = x,\\ P_{n+1}(x) = x(P_n'(x) - nP_n(x))." }, { "math_id": 72, "text": "n > 0" }, { "math_id": 73, "text": "\\deg P_n = n" }, { "math_id": 74, "text": "P_n(x) = \\left.\\left(\\frac{\\mathrm d}{\\mathrm dt}\\right)^{n-1} \\left(\\frac{te^x}{\\mathrm{Ei}(t+x)-\\mathrm{Ei}(x)}\\right)^n\\right|_{t=0}." } ]
https://en.wikipedia.org/wiki?curid=648614
64862660
Graham–Rothschild theorem
In mathematics, the Graham–Rothschild theorem is a theorem that applies Ramsey theory to combinatorics on words and combinatorial cubes. It is named after Ronald Graham and Bruce Lee Rothschild, who published its proof in 1971. Through the work of Graham, Rothschild, and Klaus Leeb in 1972, it became part of the foundations of structural Ramsey theory. A special case of the Graham–Rothschild theorem motivates the definition of Graham's number, a number that was popularized by Martin Gardner in "Scientific American" and listed in the "Guinness Book of World Records" as the largest number ever appearing in a mathematical proof. Background. The theorem involves sets of strings, all having the same length formula_0, over a finite alphabet, together with a group acting on the alphabet. A combinatorial cube is a subset of strings determined by constraining some positions of the string to contain a fixed letter of the alphabet, and by constraining other pairs of positions to be equal to each other or to be related to each other by the group action. This determination can be specified more formally by means of a labeled parameter word, a string with wildcard characters in the positions that are not constrained to contain a fixed letter and with additional labels describing which wildcard characters must be equal or related by the group action. The dimension of the combinatorial cube is the number of free choices that can be made for these wildcard characters. A combinatorial cube of dimension one is called a combinatorial line. For instance, in the game of tic-tac-toe, the nine cells of a tic-tac-toe board can be specified by strings of length two over the three-symbol alphabet {1,2,3} (the Cartesian coordinates of the cells), and the winning lines of three cells form combinatorial lines. Horizontal lines are obtained by fixing the formula_1-coordinate (the second position of the length-two string) and letting the formula_2-coordinate be chosen freely, and vertical lines are obtained by fixing the formula_2-coordinate and letting the formula_1-coordinate be chosen freely. The two diagonal lines of the tic-tac-toe board can be specified by a parameter word with two wildcard characters that are either constrained to be equal (for the main diagonal) or constrained to be related by a group action that swaps the 1 and 3 characters (for the antidiagonal). The set of all combinatorial cubes of dimension formula_3, for strings of length formula_0 over an alphabet formula_4 with group action formula_5, is denoted formula_6. A "subcube" of a combinatorial cube is another combinatorial cube of smaller dimension that forms a subset of the set of strings in the larger combinatorial cube. The subcubes of a combinatorial cube can also be described by a natural composition action on parameter words, obtained by substituting the symbols of one parameter word for the wildcards of another. Statement. With the notation above, the Graham–Rothschild theorem takes as parameters an alphabet formula_4, group action formula_5, finite number of colors formula_7, and two dimensions of combinatorial cubes formula_8 and formula_9 with formula_10. It states that, for every combination of formula_4, formula_5, formula_7, formula_8, and formula_9, there exists a string length formula_11 such that, if each combinatorial cube in formula_12 is assigned one of formula_7 colors, then there exists a combinatorial cube in formula_13 all of whose formula_9-dimensional subcubes are assigned the same color. 
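The tic-tac-toe example from the Background section can be spelled out with a short enumeration. The sketch below is an illustration added here; the encoding of cells as coordinate pairs and of the group action as the permutation swapping 1 and 3 is an assumption of this sketch rather than notation from the original papers.

```python
# Cells of the tic-tac-toe board are the strings of length 2 over the alphabet
# {1, 2, 3}, written here as coordinate pairs (x, y).  The group action used for
# the antidiagonal is the permutation swapping 1 and 3.
A = (1, 2, 3)
swap13 = {1: 3, 2: 2, 3: 1}

lines = []
for fixed in A:
    lines.append([(x, fixed) for x in A])      # horizontal: y-coordinate fixed
    lines.append([(fixed, y) for y in A])      # vertical: x-coordinate fixed
lines.append([(a, a) for a in A])              # main diagonal: equal wildcards
lines.append([(a, swap13[a]) for a in A])      # antidiagonal: wildcards related
                                               # by the 1 <-> 3 swap

for line in lines:
    print(line)
print(len(lines), "combinatorial lines = the 8 winning lines of the board")
```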
An infinitary version of the Graham–Rothschild theorem is also known. Applications. The special case of the Graham–Rothschild theorem with formula_14, formula_15, and the trivial group action is the Hales–Jewett theorem, stating that if all long-enough strings over a given alphabet are colored, then there exists a monochromatic combinatorial line. Graham's number is a bound for the Graham–Rothschild theorem with formula_16, formula_17, formula_18, formula_19, and a nontrivial group action. For these parameters, the set of strings of length formula_0 over a binary alphabet describes the vertices of an formula_0-dimensional hypercube, every two of which form a combinatorial line. The set of all combinatorial lines can be described as the edges of a complete graph on the vertices. The theorem states that, for a high-enough dimension formula_0, whenever this set of edges of the complete graph is assigned two colors, there exists a monochromatic combinatorial plane: a set of four hypercube vertices that belong to a common geometric plane and have all six edges assigned the same color. Graham's number is an upper bound for this number formula_0, calculated using repeated exponentiation; it is believed to be significantly larger than the smallest formula_0 for which the statement of the Graham–Rothschild theorem is true. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "[A,G]\\tbinom{n}{d}" }, { "math_id": 7, "text": "r" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "m > k" }, { "math_id": 11, "text": "n\\ge m" }, { "math_id": 12, "text": "[A,G]\\tbinom{n}{k}" }, { "math_id": 13, "text": "[A,G]\\tbinom{n}{m}" }, { "math_id": 14, "text": "m=1" }, { "math_id": 15, "text": "k=0" }, { "math_id": 16, "text": "|A|=2" }, { "math_id": 17, "text": "r=2" }, { "math_id": 18, "text": "m=2" }, { "math_id": 19, "text": "k=1" } ]
https://en.wikipedia.org/wiki?curid=64862660
648954
Visual acuity
Clarity of vision Visual acuity (VA) commonly refers to the clarity of vision, but technically rates an animal's ability to recognize small details with precision. Visual acuity depends on optical and neural factors. Optical factors of the eye influence the sharpness of an image on its retina. Neural factors include the health and functioning of the retina, of the neural pathways to the brain, and of the interpretative faculty of the brain. The most commonly referred-to visual acuity is "distance acuity" or "far acuity" (e.g., "20/20 vision"), which describes someone's ability to recognize small details at a far distance. This ability is compromised in people with myopia, also known as short-sightedness or near-sightedness. Another visual acuity is "near acuity", which describes someone's ability to recognize small details at a near distance. This ability is compromised in people with hyperopia, also known as long-sightedness or far-sightedness. A common optical cause of low visual acuity is refractive error (ametropia): errors in how the light is refracted in the eye. Causes of refractive errors include aberrations in the shape of the eye or the cornea, and reduced ability of the lens to focus light. When the combined refractive power of the cornea and lens is too high for the length of the eye, the retinal image will be in focus in front of the retina and out of focus on the retina, yielding myopia. A similar poorly focused retinal image happens when the combined refractive power of the cornea and lens is too low for the length of the eye except that the focused image is behind the retina, yielding hyperopia. Normal refractive power is referred to as emmetropia. Other optical causes of low visual acuity include astigmatism, in which contours of a particular orientation are blurred, and more complex corneal irregularities. Refractive errors can mostly be corrected by optical means (such as eyeglasses, contact lenses, and refractive surgery). For example, in the case of myopia, the correction is to reduce the power of the eye's refraction by a so-called minus lens. Neural factors that limit acuity are located in the retina, in the pathways to the brain, or in the brain. Examples of conditions affecting the retina include detached retina and macular degeneration. Examples of conditions affecting the brain include amblyopia (caused by the visual brain not having developed properly in early childhood) and by brain damage, such as from traumatic brain injury or stroke. When optical factors are corrected for, acuity can be considered a measure of neural functioning. Visual acuity is typically measured while fixating, i.e. as a measure of central (or foveal) vision, for the reason that it is highest in the very center. However, acuity in peripheral vision can be of equal importance in everyday life. Acuity declines towards the periphery first steeply and then more gradually, in an inverse-linear fashion (i.e. the decline follows approximately a hyperbola). The decline is according to "E"2/("E"2+"E"), where "E" is eccentricity in degrees visual angle, and "E"2 is a constant of approximately 2 degrees. At 2 degrees eccentricity, for example, acuity is half the foveal value. Visual acuity is a measure of how well small details are resolved in the very center of the visual field; it therefore does not indicate how larger patterns are recognized. Visual acuity alone thus cannot determine the overall quality of visual function. Definition. 
Visual acuity is a measure of the spatial resolution of the visual processing system. VA, as it is sometimes referred to by optical professionals, is tested by requiring the person whose vision is being tested to identify so-called optotypes – stylized letters, Landolt rings, pediatric symbols, symbols for the illiterate, standardized Cyrillic letters in the Golovin–Sivtsev table, or other patterns – on a printed chart (or some other means) from a set viewing distance. Optotypes are represented as black symbols against a white background (i.e. at maximum contrast). The distance between the person's eyes and the testing chart is set so as to approximate "optical infinity" in the way the lens attempts to focus (far acuity), or at a defined reading distance (near acuity). A reference value above which visual acuity is considered normal is called 6/6 vision, the USC equivalent of which is 20/20 vision: At 6 metres or 20 feet, a human eye with that performance is able to separate contours that are approximately 1.75 mm apart. Vision of 6/12 corresponds to lower performance, while vision of 6/3 to better performance. Normal individuals have an acuity of 6/4 or better (depending on age and other factors). In the expression 6/x vision, the numerator (6) is the distance in metres between the subject and the chart and the denominator (x) the distance at which a person with 6/6 acuity would discern the same optotype. Thus, 6/12 means that a person with 6/6 vision would discern the same optotype from 12 metres away (i.e. at twice the distance). This is equivalent to saying that with 6/12 vision, the person possesses half the spatial resolution and needs twice the size to discern the optotype. A simple and efficient way to state acuity is by converting the fraction to a decimal: 6/6 then corresponds to an acuity (or a Visus) of 1.0 (see "Expression" below), while 6/3 corresponds to 2.0, which is often attained by well-corrected healthy young subjects with binocular vision. Stating acuity as a decimal number is the standard in European countries, as required by the European norm (EN ISO 8596, previously DIN 58220). The precise distance at which acuity is measured is not important as long as it is sufficiently far away and the size of the optotype on the retina is the same. That size is specified as a visual angle, which is the angle, at the eye, under which the optotype appears. For 6/6 = 1.0 acuity, the size of a letter on the Snellen chart or Landolt C chart is a visual angle of 5 arc minutes (1 arc min = 1/60 of a degree), which is a 43 point font at 20 feet. By the design of a typical optotype (like a Snellen E or a Landolt C), the critical gap that needs to be resolved is 1/5 this value, i.e., 1 arc min. The latter is the value used in the international definition of visual acuity: acuity = 1/(gap size in arc minutes). Acuity is a measure of visual performance and does not relate to the eyeglass prescription required to correct vision. Instead, an eye exam seeks to find the prescription that will provide the best corrected visual performance achievable. The resulting acuity may be greater or less than 6/6 = 1.0. Indeed, a subject diagnosed as having 6/6 vision will often actually have higher visual acuity because, once this standard is attained, the subject is considered to have normal (in the sense of undisturbed) vision and smaller optotypes are not tested. Subjects with 6/6 vision or "better" (20/15, 20/10, etc.) 
may still benefit from an eyeglass correction for other problems related to the visual system, such as hyperopia, ocular injuries, or presbyopia. Measurement. Visual acuity is measured by a psychophysical procedure and as such relates the physical characteristics of a stimulus to a subject's percept and their resulting responses. Measurement can be taken by using an eye chart invented by Ferdinand Monoyer, by optical instruments, or by computerized tests like the FrACT. Care must be taken that viewing conditions correspond to the standard, such as correct illumination of the room and the eye chart, correct viewing distance, enough time for responding, error allowance, and so forth. In European countries, these conditions are standardized by the European norm (EN ISO 8596, previously DIN 58220). Physiology. Daylight vision (i.e. photopic vision) is subserved by cone receptor cells which have high spatial density (in the central fovea) and allow high acuity of 6/6 or better. In low light (i.e., scotopic vision), cones do not have sufficient sensitivity and vision is subserved by rods. Spatial resolution is then much lower. This is due to spatial summation of rods, i.e. a number of rods merge into a bipolar cell, in turn connecting to a ganglion cell, and the resulting unit for resolution is large, and acuity small. There are no rods in the very center of the visual field (the foveola), and highest performance in low light is achieved in near peripheral vision. The maximum angular resolution of the human eye is 28 arc seconds or 0.47 arc minutes; this gives an angular resolution of 0.008 degrees, and at a distance of 1 km corresponds to 136 mm. This is equal to 0.94 arc minutes per line pair (one white and one black line), or 0.016 degrees. For a pixel pair (one white and one black pixel) this gives a pixel density of 128 pixels per degree (PPD). 6/6 vision is defined as the ability to resolve two points of light separated by a visual angle of one minute of arc, corresponding to 60 PPD, or about 290–350 pixels per inch for a display on a device held 250 to 300 mm from the eye. Thus, visual acuity, or resolving power (in daylight, central vision), is the property of cones. To resolve detail, the eye's optical system has to project a focused image on the fovea, a region inside the macula having the highest density of cone photoreceptor cells (the only kind of photoreceptors existing in the fovea's very center of 300 μm diameter), thus having the highest resolution and best color vision. Acuity and color vision, despite being mediated by the same cells, are different physiologic functions that do not interrelate except by position. Acuity and color vision can be affected independently. The grain of a photographic mosaic has just as limited resolving power as the "grain" of the retinal mosaic. To see detail, two sets of receptors must be intervened by a middle set. The maximum resolution is that 30 seconds of arc, corresponding to the foveal cone diameter or the angle subtended at the nodal point of the eye. To get reception from each cone, as it would be if vision was on a mosaic basis, the "local sign" must be obtained from a single cone via a chain of one bipolar, ganglion, and lateral geniculate cell each. A key factor of obtaining detailed vision, however, is inhibition. This is mediated by neurons such as the amacrine and horizontal cells, which functionally render the spread or convergence of signals inactive. 
This tendency to one-to-one shuttle of signals is powered by brightening of the center and its surroundings, which triggers the inhibition leading to a one-to-one wiring. This scenario, however, is rare, as cones may connect to both midget and flat (diffuse) bipolars, and amacrine and horizontal cells can merge messages just as easily as inhibit them. Light travels from the fixation object to the fovea through an imaginary path called the visual axis. The eye's tissues and structures that are in the visual axis (and also the tissues adjacent to it) affect the quality of the image. These structures are: tear film, cornea, anterior chamber, pupil, lens, vitreous, and finally the retina. The posterior part of the retina, called the retinal pigment epithelium (RPE) is responsible for, among many other things, absorbing light that crosses the retina so it cannot bounce to other parts of the retina. In many vertebrates, such as cats, where high visual acuity is not a priority, there is a reflecting tapetum layer that gives the photoreceptors a "second chance" to absorb the light, thus improving the ability to see in the dark. This is what causes an animal's eyes to seemingly glow in the dark when a light is shone on them. The RPE also has a vital function of recycling the chemicals used by the rods and cones in photon detection. If the RPE is damaged and does not clean up this "shed" blindness can result. As in a photographic lens, visual acuity is affected by the size of the pupil. Optical aberrations of the eye that decrease visual acuity are at a maximum when the pupil is largest (about 8 mm), which occurs in low-light conditions. When the pupil is small (1–2 mm), image sharpness may be limited by diffraction of light by the pupil (see diffraction limit). Between these extremes is the pupil diameter that is generally best for visual acuity in normal, healthy eyes; this tends to be around 3 or 4 mm. If the optics of the eye were otherwise perfect, theoretically, acuity would be limited by pupil diffraction, which would be a diffraction-limited acuity of 0.4 minutes of arc (minarc) or 6/2.6 acuity. The smallest cone cells in the fovea have sizes corresponding to 0.4 minarc of the visual field, which also places a lower limit on acuity. The optimal acuity of 0.4 minarc or 6/2.6 can be demonstrated using a laser interferometer that bypasses any defects in the eye's optics and projects a pattern of dark and light bands directly on the retina. Laser interferometers are now used routinely in patients with optical problems, such as cataracts, to assess the health of the retina before subjecting them to surgery. The visual cortex is the part of the cerebral cortex in the posterior part of the brain responsible for processing visual stimuli, called the occipital lobe. The central 10° of field (approximately the extension of the macula) is represented by at least 60% of the visual cortex. Many of these neurons are believed to be involved directly in visual acuity processing. Proper development of normal visual acuity depends on a human or an animal having normal visual input when it is very young. 
Any visual deprivation, that is, anything interfering with such input over a prolonged period of time, such as a cataract, severe eye turn or strabismus, anisometropia (unequal refractive error between the two eyes), or covering or patching the eye during medical treatment, will usually result in a severe and permanent decrease in visual acuity and pattern recognition in the affected eye if not treated early in life, a condition known as amblyopia. The decreased acuity is reflected in various abnormalities in cell properties in the visual cortex. These changes include a marked decrease in the number of cells connected to the affected eye as well as cells connected to both eyes in cortical area V1, resulting in a loss of stereopsis, i.e. depth perception by binocular vision (colloquially: "3D vision"). The period of time over which an animal is highly sensitive to such visual deprivation is referred to as the critical period. The eye is connected to the visual cortex by the optic nerve coming out of the back of the eye. The two optic nerves come together behind the eyes at the optic chiasm, where about half of the fibers from each eye cross over to the opposite side and join fibers from the other eye representing the corresponding visual field, the combined nerve fibers from both eyes forming the optic tract. This ultimately forms the physiological basis of binocular vision. The tracts project to a relay station in the midbrain called the lateral geniculate nucleus, part of the thalamus, and then to the visual cortex along a collection of nerve fibers called the optic radiation. Any pathological process in the visual system, even in older humans beyond the critical period, will often cause decreases in visual acuity. Thus measuring visual acuity is a simple test in accessing the health of the eyes, the visual brain, or pathway to the brain. Any relatively sudden decrease in visual acuity is always a cause for concern. Common causes of decreases in visual acuity are cataracts and scarred corneas, which affect the optical path, diseases that affect the retina, such as macular degeneration and diabetes, diseases affecting the optic pathway to the brain such as tumors and multiple sclerosis, and diseases affecting the visual cortex such as tumors and strokes. Though the resolving power depends on the size and packing density of the photoreceptors, the neural system must interpret the receptors' information. As determined from single-cell experiments on the cat and primate, different ganglion cells in the retina are tuned to different spatial frequencies, so some ganglion cells at each location have better acuity than others. Ultimately, however, it appears that the size of a patch of cortical tissue in visual area V1 that processes a given location in the visual field (a concept known as cortical magnification) is equally important in determining visual acuity. In particular, that size is largest in the fovea's center, and decreases with increasing distance from there. Optical aspects. Besides the neural connections of the receptors, the optical system is an equally key player in retinal resolution. In the ideal eye, the image of a diffraction grating can subtend 0.5 micrometre on the retina. This is certainly not the case, however, and furthermore the pupil can cause diffraction of the light. Thus, black lines on a grating will be mixed with the intervening white lines to make a gray appearance. 
Optical defects (such as uncorrected myopia) can render it worse, but suitable lenses can help. Images (such as gratings) can be sharpened by lateral inhibition, i.e., more highly excited cells inhibiting the less excited cells. A similar reaction occurs in the case of chromatic aberrations, in which the color fringes around black-and-white objects are inhibited similarly. Expression. Visual acuity is often measured according to the size of letters viewed on a Snellen chart or the size of other symbols, such as Landolt Cs or the E Chart. In some countries, acuity is expressed as a vulgar fraction, and in some as a decimal number. Using the metre as a unit of measurement, (fractional) visual acuity is expressed relative to 6/6. Otherwise, using the foot, visual acuity is expressed relative to 20/20. For all practical purposes, 20/20 vision is equivalent to 6/6. In the decimal system, acuity is defined as the reciprocal value of the size of the gap (measured in arc minutes) of the smallest Landolt C, the orientation of which can be reliably identified. A value of 1.0 is equal to 6/6. LogMAR is another commonly used scale, expressed as the (decadic) logarithm of the minimum angle of resolution (MAR), which is the reciprocal of the acuity number. The LogMAR scale converts the geometric sequence of a traditional chart to a linear scale. It measures visual acuity loss: positive values indicate vision loss, while negative values denote normal or better visual acuity. This scale is commonly used clinically and in research because the lines are of equal length and so it forms a continuous scale with equally spaced intervals between points, unlike Snellen charts, which have different numbers of letters on each line. A visual acuity of 6/6 is frequently described as meaning that a person can see detail from 6 metres (20 feet) away the same as a person with "normal" eyesight would see from 6 metres. If a person has a visual acuity of 6/12, they are said to see detail from 6 metres (20 feet) away the same as a person with "normal" eyesight would see it from 12 metres (39 feet) away. The definition of 6/6 is somewhat arbitrary, since human eyes typically have higher acuity, as Tscherning writes, "We have found also that the best eyes have a visual acuity which approaches 2, and we can be almost certain that if, with a good illumination, the acuity is only equal to 1, the eye presents defects sufficiently pronounced to be easily established." Most observers may have a binocular acuity superior to 6/6; the limit of acuity in the unaided human eye is around 6/3–6/2.4 (20/10–20/8), although 6/3 was the highest score recorded in a study of some US professional athletes. Some birds of prey, such as hawks, are believed to have an acuity of around 20/2; in this respect, their vision is much better than human eyesight. When visual acuity is below the largest optotype on the chart, the reading distance is reduced until the patient can read it. Once the patient is able to read the chart, the letter size and test distance are noted. If the patient is unable to read the chart at any distance, they are tested as follows: Legal definitions. Various countries have defined statutory limits for poor visual acuity that qualifies as a disability. 
For example, in Australia, the Social Security Act defines blindness as: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A person meets the criteria for permanent blindness under section 95 of the Social Security Act if the corrected visual acuity is less than 6/60 on the Snellen Scale in both eyes or there is a combination of visual defects resulting in the same degree of permanent visual loss. In the US, the relevant federal statute defines blindness as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[T]he term "blindness" means central visual acuity of 20/200 or less in the better eye with the use of a correcting lens. An eye that is accompanied by a limitation in the fields of vision such that the widest diameter of the visual field subtends an angle no greater than 20 degrees shall be considered for purposes in this paragraph as having a central visual acuity of 20/200 or less. A person's visual acuity is registered documenting the following: whether the test was for distant or near vision, the eye(s) evaluated and whether corrective lenses (i.e. glasses or contact lenses) were used: So, distant visual acuity of 6/10 and 6/8 with pinhole in the right eye will be: DscOD 6/10 PH 6/8. Distant visual acuity of count fingers and 6/17 with pinhole in the left eye will be: DscOS CF PH 6/17. Near visual acuity of 6/8 with pinhole remaining at 6/8 in both eyes with spectacles will be: NccOU 6/8 PH 6/8. "Dynamic visual acuity" defines the ability of the eye to visually discern fine detail in a moving object. Measurement considerations. Visual acuity measurement involves more than being able to see the optotypes. The patient should be cooperative, understand the optotypes, be able to communicate with the physician, and many more factors. If any of these factors is missing, then the measurement will not represent the patient's real visual acuity. Visual acuity is a subjective test meaning that if the patient is unwilling or unable to cooperate, the test cannot be done. A patient who is sleepy, intoxicated, or has any disease that can alter their consciousness or mental status, may not achieve their maximum possible acuity. Patients who are illiterate in the language whose letters and/or numbers appear on the chart will be registered as having very low visual acuity if this is not known. Some patients will not tell the examiner that they do not know the optotypes, unless asked directly about it. Brain damage can result in a patient not being able to recognize printed letters, or being unable to spell them. A motor inability can make a person respond incorrectly to the optotype shown and negatively affect the visual acuity measurement. Variables such as pupil size, background adaptation luminance, duration of presentation, type of optotype used, interaction effects from adjacent visual contours (or "crowding") can all affect visual acuity measurement. Testing in children. The newborn's visual acuity is approximately 6/133, developing to 6/6 well after the age of six months in most children, according to a study published in 2009. The measurement of visual acuity in infants, pre-verbal children and special populations (for instance, disabled individuals) is not always possible with a letter chart. For these populations, specialised testing is necessary. As a basic examination step, one must check whether visual stimuli can be fixated, centered and followed. 
"Dynamic visual acuity" defines the ability of the eye to visually discern fine detail in a moving object. Measurement considerations. Visual acuity measurement involves more than being able to see the optotypes. The patient must be cooperative, understand the optotypes and be able to communicate with the physician, among other factors. If any of these factors is missing, the measurement will not represent the patient's real visual acuity. Visual acuity is a subjective test, meaning that if the patient is unwilling or unable to cooperate, the test cannot be done. A patient who is sleepy, intoxicated, or has any disease that can alter their consciousness or mental status may not achieve their maximum possible acuity. Patients who are illiterate in the language whose letters and/or numbers appear on the chart will be registered as having very low visual acuity if this is not known to the examiner. Some patients will not tell the examiner that they do not know the optotypes unless asked directly about it. Brain damage can result in a patient not being able to recognize printed letters, or being unable to spell them. A motor inability can make a person respond incorrectly to the optotype shown and negatively affect the visual acuity measurement. Variables such as pupil size, background adaptation luminance, duration of presentation, type of optotype used, and interaction effects from adjacent visual contours (or "crowding") can all affect the measurement. Testing in children. The newborn's visual acuity is approximately 6/133, developing to 6/6 well after the age of six months in most children, according to a study published in 2009. The measurement of visual acuity in infants, pre-verbal children and special populations (for instance, disabled individuals) is not always possible with a letter chart. For these populations, specialised testing is necessary. As a basic examination step, one must check whether visual stimuli can be fixated, centered and followed. More formal testing using preferential looking techniques uses "Teller acuity" cards (presented by a technician from behind a window in the wall) to check whether the child is more visually attentive to a random presentation of vertical or horizontal gratings on one side compared with a blank page on the other side; the bars become progressively finer or closer together, and the endpoint is noted when the child, seated in its adult carer's lap, shows no preference between the two sides. Another popular technique is electro-physiologic testing using visual evoked (cortical) potentials (VEPs or VECPs), which can be used to estimate visual acuity in doubtful cases and in cases of expected severe vision loss, such as Leber's congenital amaurosis. VEP testing of acuity is somewhat similar to preferential looking in using a series of black and white stripes (sine wave gratings) or checkerboard patterns (which produce larger responses than stripes). Behavioral responses are not required; instead, the brain waves created by the presentation of the patterns are recorded. The patterns become finer and finer until the evoked brain wave just disappears, which is considered to be the endpoint measure of visual acuity. In adults and older, verbal children capable of paying attention and following instructions, the endpoint provided by the VEP corresponds very well to the psychophysical measure in the standard measurement (i.e. the perceptual endpoint determined by asking the subject when they can no longer see the pattern). There is an assumption that this correspondence also applies to much younger children and infants, though this does not necessarily have to be the case. Studies do show that the evoked brain waves, as well as the derived acuities, are very adult-like by one year of age. For reasons not totally understood, until a child is several years old, visual acuities from behavioral preferential looking techniques typically lag behind those determined using the VEP, a direct physiological measure of early visual processing in the brain. Possibly it takes longer for more complex behavioral and attentional responses, involving brain areas not directly involved in processing vision, to mature. Thus the visual brain may detect the presence of a finer pattern (reflected in the evoked brain wave), but the "behavioral brain" of a small child may not find it salient enough to pay special attention to. A simple but less-used technique is checking oculomotor responses with an optokinetic nystagmus drum, where the subject is placed inside the drum and surrounded by rotating black and white stripes. This creates involuntary abrupt eye movements (nystagmus) as the brain attempts to track the moving stripes. There is a good correspondence between the optokinetic and usual eye-chart acuities in adults. A potentially serious problem with this technique is that the process is reflexive and mediated in the low-level brain stem, not in the visual cortex. Thus someone can have a normal optokinetic response and yet be cortically blind with no conscious visual sensation. "Normal" visual acuity. Visual acuity depends upon how accurately light is focused on the retina, the integrity of the eye's neural elements, and the interpretative faculty of the brain. "Normal" visual acuity (in central, i.e. foveal, vision) is frequently considered to be what was defined by Herman Snellen as the ability to recognize an optotype when it subtended 5 minutes of arc, that is, Snellen's chart 6/6-metre, 20/20 feet, 1.00 decimal or 0.0 logMAR.
In young humans, the average visual acuity of a healthy, emmetropic eye (or ametropic eye with correction) is approximately 6/5 to 6/4, so it is inaccurate to refer to 6/6 visual acuity as "perfect" vision. On the contrary, as Tscherning noted in the passage quoted above, the best eyes have a visual acuity that approaches 2, and an acuity of only 1 under good illumination usually indicates defects pronounced enough to be easily established. 6/6 is the visual acuity needed to discriminate two contours separated by 1 arc minute – 1.75 mm at 6 metres. This is because a 6/6 letter, E for example, has three limbs and two spaces in between them, giving five distinct detailed areas. Resolving the letter therefore requires resolving detail one fifth of its total height, which in this case corresponds to 1 minute of arc of visual angle. The significance of the 6/6 standard is best thought of as the lower limit of normal, or as a screening cutoff. When used as a screening test, subjects who reach this level need no further investigation, even though the average visual acuity of a healthy visual system is typically better. Some people may have other visual problems, such as severe visual field defects, color blindness, reduced contrast, mild amblyopia, cerebral visual impairments, or inability to track fast-moving objects, among many other visual impairments, and still have "normal" visual acuity. Thus, "normal" visual acuity does not imply normal vision. Visual acuity is very widely used because it is easily measured, because its reduction (after correction) often indicates some disturbance, and because it often corresponds with the normal daily activities a person can handle and with their degree of impairment in performing them (even though there is heavy debate over that relationship). Other measures. Normally, visual acuity refers to the ability to resolve two separated points or lines, but there are other measures of the ability of the visual system to discern spatial differences. Vernier acuity measures the ability to align two line segments. Humans can do this with remarkable accuracy; this success is regarded as "hyperacuity". Under optimal conditions of good illumination, high contrast and long line segments, the limit to vernier acuity is about 8 arc seconds or 0.13 arc minutes, compared to about 0.6 arc minutes (6/4) for normal visual acuity or the 0.4 arc minute diameter of a foveal cone. Because the limit of vernier acuity is well below that imposed on regular visual acuity by the "retinal grain" or size of the foveal cones, it is thought to be a process of the visual cortex rather than the retina. Supporting this idea, vernier acuity corresponds very closely to (and may share the same underlying mechanism as) the ability to discern very slight differences in the orientations of two lines, and orientation is known to be processed in the visual cortex. The smallest detectable visual angle produced by a single fine dark line against a uniformly illuminated background is also much less than foveal cone size or regular visual acuity. In this case, under optimal conditions, the limit is about 0.5 arc seconds, or only about 2% of the diameter of a foveal cone. This produces a contrast of about 1% with the illumination of surrounding cones. The mechanism of detection is the ability to detect such small differences in contrast or illumination; it does not depend on the angular width of the bar, which cannot be discerned. Thus as the line gets finer, it appears to get fainter but not thinner.
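The angular sizes quoted in this section can be converted to linear sizes with a short calculation. The Python sketch below is only an illustration (the function name is an assumption); it converts a visual angle in arc minutes to the linear size it subtends at a given viewing distance.

```python
import math

def linear_size_mm(angle_arcmin, distance_m):
    # Linear size (mm) subtending the given angle (arc minutes) at the given distance (m).
    return math.tan(math.radians(angle_arcmin / 60.0)) * distance_m * 1000.0

print(round(linear_size_mm(1.0, 6.0), 2))          # ~1.75 mm: one limb or gap of a 6/6 letter at 6 m
print(round(linear_size_mm(5.0, 6.0), 2))          # ~8.73 mm: full height of a 6/6 letter (five details)
print(round(linear_size_mm(8.0 / 60.0, 6.0), 3))   # ~0.23 mm: the ~8 arc-second vernier limit at 6 m
```

At 6 metres, 1 arc minute therefore corresponds to roughly 1.75 mm, matching the figure given above, while the 8-arc-second vernier limit corresponds to roughly a quarter of a millimetre.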
Stereoscopic acuity is the ability to detect differences in depth with the two eyes. For more complex targets, stereoacuity is similar to normal monocular visual acuity, around 0.6–1.0 arc minutes, but for much simpler targets, such as vertical rods, it may be as low as 2 arc seconds. Although stereoacuity normally corresponds very well with monocular acuity, it may be very poor, or absent, even in subjects with normal monocular acuities. Such individuals typically have abnormal visual development when they are very young, such as an alternating strabismus, or eye turn, in which both eyes rarely, or never, point in the same direction and therefore do not function together. Another test of visual acuity (EVTS/OptimEyes) uses targets that change in size, contrast and viewing time. This test was developed by Daniel M. Laby with colleagues and uses item response theory to calculate a vision performance score (core score). This specific test of visual function has been shown to correlate to professional sports performance. Motion acuity. The eye has acuity limits for detecting motion. Forward motion is limited by the "subtended angular velocity detection threshold" (SAVT), and horizontal and vertical motion acuity are limited by lateral motion thresholds. The lateral motion limit is generally below the looming motion limit, and for an object of a given size, lateral motion becomes the more salient of the two once the observer moves sufficiently far away from the path of travel. Below these thresholds, subjective constancy is experienced, in accordance with Stevens' power law and the Weber–Fechner law. Subtended angular velocity detection threshold (SAVT). There is a specific acuity limit in detecting an approaching object's looming motion. This is regarded as the subtended angular velocity detection threshold (SAVT) limit of visual acuity. It has a practical value of 0.0275 rad/s. For a person with an SAVT limit of formula_0, the looming motion of a directly approaching object of size S, moving at velocity v, is not detectable until its distance D is formula_1 where the S2/4 term is omitted for small objects relative to great distances by the small-angle approximation. To exceed the SAVT, an object of size S moving at velocity v must be closer than D; beyond that distance, subjective constancy is experienced. The SAVT formula_0 can be measured from the distance at which a looming object is first detected: formula_2 where the S2 term is omitted for small objects relative to great distances by the small-angle approximation. The SAVT has the same kind of importance to driving safety and sports as the static limit. The formula is derived by taking the derivative of the visual angle with respect to distance and then multiplying by velocity to obtain the time rate of visual expansion (dθ/dt = dθ/dx · dx/dt). Lateral motion. There are acuity limits (formula_0) for horizontal and vertical motion as well. They can be measured and defined by the threshold detection of movement of an object traveling at distance D and velocity v orthogonal to the direction of view, from a set-back distance B, with the formula formula_3 Because the tangent of the subtended angle is the ratio of the orthogonal distance to the set-back distance, the angular time rate (rad/s) of lateral motion is simply the derivative of the inverse tangent multiplied by the velocity (dθ/dt = dθ/dx · dx/dt).
In application, this means that an orthogonally traveling object will not be discernible as moving until it has reached the distance formula_4 where formula_0 for lateral motion is generally ≥ 0.0087 rad/s, with probable dependence on deviation from the fovea and on movement orientation, velocity is expressed in the same distance units, and zero distance is straight ahead. Far object distances, close set-backs and low velocities generally lower the salience of lateral motion. Detection at a close or null set-back can still be accomplished through the pure scale changes of looming motion. Radial motion. The motion acuity limit affects radial motion in accordance with its definition: the ratio of the velocity v to the radius R must exceed formula_0, i.e. formula_5 Radial motion is encountered in clinical and research environments, in dome theaters, and in virtual-reality headsets.
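The motion-acuity relations above can be illustrated numerically. The Python sketch below is only an illustration under stated assumptions: the function names and the example sizes, speeds and set-backs are made up for demonstration, the looming and lateral thresholds are the values quoted in the text (0.0275 rad/s and 0.0087 rad/s), and the radial check assumes the lateral limit applies. It evaluates the detection distances from formula_1 and formula_4 and the radial-motion condition from formula_5.

```python
import math

SAVT = 0.0275            # looming (subtended angular velocity) threshold, rad/s
LATERAL_LIMIT = 0.0087   # lateral motion threshold, rad/s

def looming_detection_distance(size_m, speed_mps, threshold=SAVT):
    # Distance below which a directly approaching object's looming exceeds the threshold.
    return math.sqrt(size_m * speed_mps / threshold - size_m ** 2 / 4.0)

def lateral_detection_distance(setback_m, speed_mps, threshold=LATERAL_LIMIT):
    # Distance along the path below which orthogonal (lateral) motion exceeds the threshold.
    return math.sqrt(setback_m * speed_mps / threshold - setback_m ** 2)

def radial_motion_detectable(speed_mps, radius_m, threshold=LATERAL_LIMIT):
    # Radial motion is detectable when v / R exceeds the motion-acuity limit.
    return speed_mps / radius_m > threshold

# A 2 m wide object approaching head-on at 25 m/s is seen to loom only within ~43 m.
print(round(looming_detection_distance(2.0, 25.0), 1))
# The same object passing at 25 m/s, viewed from a 5 m set-back, is seen to move within ~120 m.
print(round(lateral_detection_distance(5.0, 25.0), 1))
# A point 10 m away sweeping a circle at 0.5 m/s comfortably exceeds the assumed limit.
print(radial_motion_detectable(0.5, 10.0))
```

The helpers simply evaluate the closed-form expressions; they take no account of the eccentricity or movement-orientation effects mentioned above.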
[ { "math_id": 0, "text": "\\dot\\theta_t" }, { "math_id": 1, "text": "D \\lessapprox \\sqrt{\\frac{S \\cdot v}{\\dot{\\theta_{t}}}-\\frac{S^2}{4}}," }, { "math_id": 2, "text": " \\dot\\theta_t \\approx \\frac{4S \\cdot v}{S^2 + 4D^2}, " }, { "math_id": 3, "text": " \\dot\\theta_t \\approx \\frac{B \\cdot v}{B^2 + D^2}. " }, { "math_id": 4, "text": " D \\lessapprox \\sqrt{\\frac{B \\cdot v}{\\dot\\theta_t} - B^2}, " }, { "math_id": 5, "text": "\\dot\\theta_t \\lessapprox \\frac{v}{R}." } ]