76409777
OR-AND-invert
OR-AND-invert gates, or OAI-gates, are logic gates comprising one or more OR gates followed by a NAND gate. They can be implemented efficiently in logic families like CMOS and TTL. They are dual to AND-OR-invert gates. Overview. OR-AND-invert gates implement the inverted product of sums: formula_0 groups of formula_1 input signals (with formula_2) are each combined with OR, and the results are then combined with NAND. Examples. 2-1 OAI-gate. A 2-1-OAI gate realizes the function formula_3 with the truth table shown below. 2-2 OAI gate. A 2-2-OAI gate realizes the function formula_4 with the truth table shown below. Realization. OAI-gates can be implemented efficiently as complex gates. An example of a 3-1 OAI-gate is shown in the figure below. Examples of use. One possibility for implementing an XOR gate is to use a 2-2-OAI-gate with non-inverted and inverted inputs.
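These functions are small enough to check exhaustively. The Python sketch below (an illustration, not part of the article) tabulates the 2-1 OAI gate and verifies one particular wiring of a 2-2 OAI gate with non-inverted and inverted inputs that realizes XOR; the wiring OAI22(A, NOT B, NOT A, B) is an assumption chosen for this example.

```python
from itertools import product

def oai21(a, b, c):
    """2-1 OAI gate: Y = NOT((A OR B) AND C)."""
    return int(not ((a or b) and c))

def oai22(a, b, c, d):
    """2-2 OAI gate: Y = NOT((A OR B) AND (C OR D))."""
    return int(not ((a or b) and (c or d)))

def xor_from_oai22(a, b):
    """XOR built from a single 2-2 OAI gate fed with non-inverted and
    inverted inputs: XOR(A, B) = OAI22(A, NOT B, NOT A, B)."""
    return oai22(a, 1 - b, 1 - a, b)

if __name__ == "__main__":
    print(" A B C | OAI21")
    for a, b, c in product((0, 1), repeat=3):
        print(f" {a} {b} {c} |   {oai21(a, b, c)}")
    # Verify the XOR construction against the built-in XOR operator.
    assert all(xor_from_oai22(a, b) == (a ^ b) for a, b in product((0, 1), repeat=2))
    print("XOR via 2-2 OAI verified.")
```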
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "m_i" }, { "math_id": 2, "text": "m_i \\ge 1, i=1\\ldots n" }, { "math_id": 3, "text": "Y = \\overline{(A \\lor B) \\land C} " }, { "math_id": 4, "text": "Y = \\overline{(A \\lor B) \\land (C \\lor D)} " } ]
https://en.wikipedia.org/wiki?curid=76409777
76412112
Z 229-15
Ring galaxy in the constellation Lyra Z 229-15 is a ring galaxy in the constellation Lyra. It is around 390 million light-years from Earth. NASA and other space agencies have described it as hosting an active galactic nucleus, as a quasar, and as a Seyfert galaxy, classifications that overlap in some ways. Discovery. Z 229-15 was first discovered by the astronomer D. Proust of the Meudon Observatory in 1990. Proust described the object as a possible obscured spiral galaxy with strong signs of absorption. Z 229-15 was also observed with the 1.93-m telescope at the Observatoire de Haute-Provence. Classification. The classification of Z 229-15 has been debated for many years. It has been widely called a quasar, which, if accurate, would make it an unusually nearby one. Many space agencies, notably NASA, have called it a Seyfert galaxy that contains a quasar and that, by definition, hosts an active galactic nucleus. This would make Z 229-15 a rather uncommon galaxy. Supermassive black hole. Z 229-15 has a supermassive black hole at its core. The mass of the black hole satisfies formula_0, where the mass is measured in solar masses. The interstellar matter in Z 229-15 becomes so hot that it regularly releases a large amount of energy across the electromagnetic spectrum.
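For readers who want the quoted logarithmic mass as a plain number, this short snippet (illustrative only) converts formula_0 into solar masses, giving roughly 8.7 million, with an uncertainty range of about 6.3 to 12 million.

```python
# Convert the logarithmic black-hole mass estimate quoted above
# (log10(M_BH / M_sun) = 6.94 +/- 0.14) into solar masses.
log_mass, err = 6.94, 0.14

central = 10 ** log_mass                      # ~8.7 million solar masses
low, high = 10 ** (log_mass - err), 10 ** (log_mass + err)

print(f"M_BH ~ {central:.2e} M_sun (range {low:.2e} to {high:.2e} M_sun)")
```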
[ { "math_id": 0, "text": "\\log_{10}M_{BH}=6.94\\pm0.14" } ]
https://en.wikipedia.org/wiki?curid=76412112
76415171
Waste input-output model
The Waste Input-Output (WIO) model is an extension of the environmentally extended input-output (EEIO) model. It enhances the traditional Input-Output (IO) model by incorporating physical flows of waste generated and treated alongside monetary flows of products and services. In a WIO model, each waste flow is traced from its generation to its treatment, facilitated by an allocation matrix. Additionally, the model accounts for the transformation of waste during treatment into secondary waste and residues, as well as recycling and final disposal processes. By including the end-of-life (EoL) stage of products, the WIO model enables a comprehensive consideration of the entire product life cycle, encompassing production, use, and disposal stages within the IO analysis framework. As such, it serves as a valuable tool for life cycle assessment (LCA). Background. With growing concerns about environmental issues, the EEIO model evolved from the conventional IO model by appending environmental factors such as resources, emissions, and waste. The standard EEIO model, which includes the economic input-output life-cycle assessment (EIO-LCA) model, can be formally expressed as formula_4 = formula_1 (formula_3 - formula_0)^{-1} formula_2, referred to below as equation (0). Here formula_0 represents the square matrix of input coefficients, formula_1 denotes releases (such as emissions or waste) per unit of output, also called the intervention matrix, formula_2 stands for the vector of final demand (or functional unit), formula_3 is the identity matrix, and formula_4 represents the resulting releases (for further details, refer to the input-output model). A model in which formula_1 represents the generation of waste per unit of output is known as a Waste Extended IO (WEIO) model. In this model, waste generation is included as a satellite account. However, this formulation, while well-suited for handling emissions or resource use, encounters challenges when dealing with waste. It overlooks the crucial point that waste typically undergoes treatment before recycling or final disposal, leading to a form less harmful to the environment. Additionally, the treatment of emissions results in residues that require proper handling for recycling or final disposal (for instance, the pollution abatement process for sulfur dioxide involves its conversion into gypsum or sulfuric acid). Leontief's pioneering pollution abatement IO model did not address this aspect, whereas Duchin later incorporated it in a simplified illustrative case of wastewater treatment. In waste management, it is common for various treatment methods to be applicable to a single type of waste. For instance, organic waste might undergo landfilling, incineration, gasification, or composting. Conversely, a single treatment process may be suitable for various types of waste; for example, solid waste of any type can typically be disposed of in a landfill. Formally, this implies that there is no one-to-one correspondence between treatment methods and types of waste. A theoretical drawback of the Leontief-Duchin EEIO model is that it considers only cases where a one-to-one correspondence between treatment methods and types of waste applies, which makes the model difficult to apply to real waste management issues. The WIO model addresses this weakness by introducing a general mapping between treatment methods and types of waste, establishing a highly adaptable link between waste and treatment. This results in a model that is applicable to a wide range of real waste management issues. The Methodology. 
We describe below the major features of the WIO model in its relationship to the Leontief-Duchin EEIO model, starting with notation. Let there be formula_5 producing sectors (each producing a single primary product), formula_6 waste treatment sectors, and formula_7 waste categories. We now define the matrices and variables. It is important to note that variables with formula_30 or formula_31 pertain to conventional components found in an IO table and are measured in monetary units. Conversely, variables with formula_32 or formula_29 typically do not appear explicitly in an IO table and are measured in physical units. The balance of goods and waste. Using the notation introduced above, we can represent the supply and demand balance between products and waste for treatment by the following system of equations: Here, formula_33 denotes a vector of ones (of dimension formula_25) used for summing the rows of formula_34, and similar definitions apply to the other formula_35 terms. The first line pertains to the standard balance of goods and services, with the left-hand side referring to demand and the right-hand side to supply. Similarly, the second line refers to the balance of waste, where the left-hand side signifies the generation of waste for treatment, and the right-hand side denotes the waste designated for treatment. It is important to note that increased recycling reduces the amount of waste for treatment formula_29. The IO model with waste and waste treatment. We now define the input coefficient matrices formula_0 and waste generation coefficients formula_36 as follows: formula_37 Here, formula_38 refers to a diagonal matrix whose formula_39 element is the formula_40-th element of a vector formula_41. Using formula_0 and formula_36 as derived above, the balance (1) can be represented as: This equation (2) represents the Duchin-Leontief environmental IO model, an extension of the original Leontief model of pollution abatement to account for the generation of secondary waste. It is important to note that this system of equations is generally unsolvable due to the presence of formula_42 on the left-hand side and formula_29 on the right-hand side, resulting in asymmetry. This asymmetry poses a challenge for solving the equation. However, the Duchin-Leontief environmental IO model addresses this issue by introducing a simplifying assumption: This assumption (3) implies that each waste is treated exclusively by a single treatment sector. For instance, waste plastics are either landfilled or incinerated, but not both simultaneously. While this assumption simplifies the model and enhances computational feasibility, it may not fully capture the complexities of real-world waste management scenarios. In reality, various treatment methods can be applied to a given waste; for example, organic waste might be landfilled, incinerated, or composted. Therefore, while the assumption facilitates computational tractability, it might oversimplify actual waste management processes. The WIO model. Nakamura and Kondo addressed the above problem by introducing the allocation matrix formula_43 of order formula_44 that assigns waste to treatment processes: Here, the element formula_45 of formula_43 represents the proportion of waste formula_46 treated by treatment formula_47. Since waste must be treated in some manner (even if illegally dumped, which can be considered a form of treatment), we have: formula_48 Here, formula_49 stands for the transpose operator. 
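To make the preceding definitions concrete, here is a small NumPy sketch. It is purely illustrative: the number of sectors, the waste types, and every coefficient value are invented, the single-treatment allocation matrix S trivially satisfies the column-sum condition formula_48 just stated, and the block system simply follows the structure of the WIO solution quoted in the continuation of this section.

```python
import numpy as np

# Illustrative data: 2 production sectors, 1 treatment sector, 2 waste types.
# All coefficient values are invented for this sketch.
A_P = np.array([[0.2, 0.1],
                [0.3, 0.2]])        # product inputs per unit of product output
A_T = np.array([[0.05],
                [0.10]])            # product inputs per unit of treatment activity
G_P = np.array([[0.4, 0.1],
                [0.0, 0.3]])        # net waste generation per unit of product output
G_T = np.array([[0.02],
                [0.10]])            # (secondary) waste per unit of treatment activity
S   = np.array([[1.0, 1.0]])        # allocation matrix: both wastes go to the one treatment
F_P = np.array([[0.5, 0.8]])        # emissions per unit of product output
F_T = np.array([[1.2]])             # emissions per unit of treatment activity

y_P = np.array([100.0, 50.0])       # final demand for products
w_y = np.array([5.0, 2.0])          # waste discharged directly by final demand

# Every waste type must be fully assigned to some treatment (iota_T' S = iota_w').
assert np.allclose(S.sum(axis=0), 1.0)

# Quantity model, following the block structure of the solution quoted below:
# (x_P, x_T) = (I - [[A_P, A_T], [S G_P, S G_T]])^(-1) (y_P, S w_y)
A_block = np.block([[A_P,     A_T],
                    [S @ G_P, S @ G_T]])
d_block = np.concatenate([y_P, S @ w_y])
x = np.linalg.solve(np.eye(A_block.shape[0]) - A_block, d_block)
x_P, x_T = x[:2], x[2:]
print("x_P =", x_P.round(2), " x_T =", x_T.round(2))

# Emissions and waste for treatment induced by the final demand, in the spirit
# of equations (6) and (7) discussed in the continuation of this section.
E = F_P @ x_P + F_T @ x_T
w = G_P @ x_P + G_T @ x_T + w_y
print("emissions E =", E.round(2), " waste for treatment w =", w.round(2))
```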
Note that the allocation matrix formula_43 is essential for deriving formula_42 from formula_29. The simplifying condition (3) corresponds to the special case where formula_50 and formula_43 is a unit matrix. The table below gives an example of formula_43 for seven waste types and three treatment processes. Note that formula_43 represents the allocation of waste for treatment, that is, the portion of waste that is not recycled. The application of the allocation matrix formula_43 transforms equation (2) into the following form: Note that, unlike in (2), the variable formula_51 occurs on both sides of the equation. This system of equations is thus solvable (provided the inverse exists), with the solution given by: formula_52 The WIO counterpart of the standard EEIO model of emissions, represented by equation (0), can be formulated as follows: Here, formula_53 represents emissions per unit of output from production sectors, and formula_54 denotes emissions per unit of activity from waste treatment sectors. Upon comparison of equation (6) with equation (0), it becomes clear that the former expands upon the latter by incorporating factors related to waste and waste treatment. Finally, the amount of waste for treatment induced by the final demand sector is given by equation (7). The Supply and Use Extension (WIO-SUT). In the WIO model (5), waste flows are categorized based solely on treatment method, without considering the waste type. Manfred Lenzen addressed this limitation by allowing both waste by type and waste by treatment method to be presented together in a single representation within a supply-and-use framework. This extension of the WIO framework, given below, results in a symmetric WIO model that does not require the conversion of waste flows into treatment flows. formula_55 It is worth noting that despite the seemingly different forms of the two models, the Leontief inverse matrices of WIO and WIO-SUT are equivalent. The WIO Cost and Price Model. Let us denote by formula_56, formula_57, formula_58, and formula_59 the vector of product prices, waste treatment prices, value-added ratios of products, and value-added ratios of waste treatments, respectively. The case without waste recycling. In the absence of recycling, the cost counterpart of equation (5) becomes: formula_60 <br> which can be solved for formula_56 and formula_57; this solution is equation (8). The case with waste recycling. When waste is recycled, the simple representation given by equation (8) must be extended to include the rate of recycling formula_61 and the price of waste formula_62, giving equation (9): <br> Here, formula_62 is the formula_63 vector of waste prices, formula_64 is the diagonal matrix of the formula_65 vector of average waste recycling rates, formula_66, and formula_67 (formula_68 and formula_69 are defined in a similar fashion). Rebitzer and Nakamura used (9) to assess the life-cycle cost of washing machines under alternative End-of-Life scenarios. More recently, Liao et al. applied (9) to assess the economic effects of recycling copper waste domestically in Taiwan, amid the country's consideration of establishing a copper refinery to meet increasing demand. A caution about possible changes in the input-output coefficients of treatment processes when the composition of waste changes. The input-output relationships of waste treatment processes are often closely linked to the chemical properties of the treated waste, particularly in incineration processes. 
The amount of recoverable heat, and thus the potential heat supply for external uses, including power generation, depends on the heat value of the waste. This heat value is strongly influenced by the waste's composition. Therefore, any change in the composition of waste can significantly impact formula_70 and formula_71. To address this aspect of waste treatment, especially in incineration, Nakamura and Kondo recommended using engineering information about the relevant treatment processes. They suggest solving the entire model iteratively, where the model consists of the WIO model and a systems engineering model that incorporates the engineering information. Alternatively, Tisserant et al. proposed addressing this issue by distinguishing each waste by its treatment process. They suggest transforming the rectangular waste flow matrix (formula_72) not into an formula_73 matrix, as done by Nakamura and Kondo, but into an formula_74 matrix. The details of each column element were obtained from the literature. WIO tables and applications. Waste footprint studies. The MOE-WIO table for Japan. The WIO table compiled by the Japanese Ministry of the Environment (MOE) for the year 2011 stands as the only publicly accessible WIO table developed by a governmental body thus far. This MOE-WIO table distinguishes 80 production sectors, 10 waste treatment sectors, and 99 waste categories, and encompasses 7 greenhouse gases (GHGs). The MOE-WIO table is publicly available. Equation (7) can be used to assess the waste footprint of products, that is, the amount of waste embodied in a product across its supply chain. Applying it to the MOE-WIO table shows that public construction significantly contributes to reducing construction waste, which mainly originates from the building construction and civil engineering sectors. Additionally, public construction is the primary user (recycler) of slag and glass scrap. Regarding waste plastics, the findings indicate that the majority of plastic waste originates not from direct household discharge but from various production sectors such as medical services, commerce, construction, personal services, food production, passenger motor vehicles, and real estate. Other studies. Many researchers have independently created their own WIO datasets and utilized them for various applications, encompassing different geographical scales and process complexities. Here, we provide a brief overview of a selection of them. End-of-Life electrical and electronic appliances. Kondo and Nakamura assessed the environmental and economic impacts of various life-cycle strategies for electrical appliances using the WIO table they developed for Japan for the year 1995. This dataset encompassed 80 industrial sectors, 5 treatment processes, and 36 types of waste. The assessment was based on equation (6). The strategies examined included disposal to a landfill, conventional recycling, intensive recycling employing advanced sorting technology, extension of product life, and extension of product life with functional upgrading. Their analysis revealed that intensive recycling outperformed landfilling and simple shredding in reducing final waste disposal and other impacts, including carbon emissions. Furthermore, they found that extending the product life significantly decreased environmental impact without negatively affecting economic activity and employment, provided that the reduction in spending on new purchases was balanced by increased expenditure on repair and maintenance. General and hazardous industrial waste. 
Using detailed data on industrial waste, including 196 types of general industrial waste and 157 types of hazardous industrial waste, Liao et al. analyzed the final demand footprint of industrial waste in Taiwan across various final demand categories. Their analysis revealed significant variations in waste footprints among different final demand categories. For example, over 90% of the generation of "Waste acidic etchants" and "Copper and copper compounds" was attributed to exports. Conversely, items like "Waste lees, wine meal, and alcohol mash" and "Pulp and paper sludge" were predominantly associated with household activities. Global waste flows. Tisserant et al. developed a WIO model of the global economy by constructing a harmonized multiregional solid waste account that covered 48 world regions, 163 production sectors, 11 types of solid waste, and 12 waste treatment processes for the year 2007. Russia was found to be the largest generator of waste, followed by China, the US, the larger Western European economies, and Japan. Decision Analytic Extension Based on Linear Programming (LP). Kondo and Nakamura applied linear programming (LP) methodology to extend the WIO model, resulting in the development of a decision analytic extension known as the WIO-LP model. The application of LP to the IO model has a well-established history. This model was applied to explore alternative treatment processes for end-of-life home electric and electronic appliances, aiming to identify the optimal combination of treatment processes to achieve specific objectives, such as the minimization of carbon emissions or landfill waste; a simplified sketch of this idea is given below. Lin applied this methodology to the regional Input-Output (IO) table for Tokyo, augmented to incorporate wastewater flows and treatment processes, and identified trade-off relationships between water quality and carbon emissions. A similar method was also employed to assess the environmental impacts of alternative treatment processes for waste plastics in China.
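The following SciPy sketch gives a flavor of this LP extension. It is deliberately minimal and is not the actual WIO-LP formulation: it considers a single waste stream, two alternative treatments with invented emission factors, and one landfill capacity constraint, and lets the optimizer choose the split that minimizes total carbon emissions.

```python
from scipy.optimize import linprog

# Illustrative WIO-LP-style choice (all data invented):
# 100 units of a single waste must be split between incineration (x[0])
# and landfilling (x[1]); landfill capacity is limited to 40 units.
emission_factor = [0.9, 0.1]        # CO2 per unit treated (objective coefficients)
A_eq = [[1.0, 1.0]]                 # all waste must be treated
b_eq = [100.0]
A_ub = [[0.0, 1.0]]                 # landfill capacity constraint
b_ub = [40.0]

res = linprog(c=emission_factor, A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])

print("incineration, landfill =", res.x)       # -> [60., 40.]
print("minimized emissions    =", res.fun)     # -> 58.0
```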
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "n_{P}" }, { "math_id": 6, "text": "n_{T}" }, { "math_id": 7, "text": "n_{w}" }, { "math_id": 8, "text": "Z_{P}" }, { "math_id": 9, "text": "n_{P} \\times n_P" }, { "math_id": 10, "text": "W^+_{P}" }, { "math_id": 11, "text": "n_{W} \\times n_P" }, { "math_id": 12, "text": "W^{-}_{P}" }, { "math_id": 13, "text": "W_{P}" }, { "math_id": 14, "text": "W_{P}=W^{+}_{P}-W^{-}_{P}" }, { "math_id": 15, "text": "Z_{T}" }, { "math_id": 16, "text": "n_W \\times n_T" }, { "math_id": 17, "text": "W_T" }, { "math_id": 18, "text": "W_{T}=W^{+}_{T}-W^{-}_{T}" }, { "math_id": 19, "text": "W^{+}_T" }, { "math_id": 20, "text": "W^{-}_T" }, { "math_id": 21, "text": "W^{+}_{P}" }, { "math_id": 22, "text": "W^{-}_P" }, { "math_id": 23, "text": "W^{+}_{T}" }, { "math_id": 24, "text": "y_{P}" }, { "math_id": 25, "text": "n_P \\times 1" }, { "math_id": 26, "text": "w_{Y}" }, { "math_id": 27, "text": "n_W \\times 1" }, { "math_id": 28, "text": "x_{P}" }, { "math_id": 29, "text": "w" }, { "math_id": 30, "text": "Z" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": "W" }, { "math_id": 33, "text": "\\iota_P" }, { "math_id": 34, "text": "Z_P" }, { "math_id": 35, "text": "\\iota" }, { "math_id": 36, "text": "G" }, { "math_id": 37, "text": "\n\\begin{align}\n A_{P} = Z_{P} \\hat{x}_{P}^{-1}, \n A_{T} = Z_{T} \\hat{x}_{T}^{-1},\n G_{P} = W_{P} \\hat{x}_{P}^{-1}, \n G_{T} = W_{T} \\hat{x}_{T}^{-1}.\n\\end{align}\n" }, { "math_id": 38, "text": "\\hat{v}" }, { "math_id": 39, "text": "(i, i)" }, { "math_id": 40, "text": "i" }, { "math_id": 41, "text": "v" }, { "math_id": 42, "text": "x_{T}" }, { "math_id": 43, "text": "S" }, { "math_id": 44, "text": "n_{T}\\times n_{w}" }, { "math_id": 45, "text": "S_{kl}" }, { "math_id": 46, "text": "l" }, { "math_id": 47, "text": "k" }, { "math_id": 48, "text": "\n{\\iota_{T}}^{'} S ={ \\iota_{w}}^{'}. 
\n" }, { "math_id": 49, "text": "'" }, { "math_id": 50, "text": "n_{T}=n_{w}" }, { "math_id": 51, "text": "x_T" }, { "math_id": 52, "text": "\n\\begin{align}\n \\begin{pmatrix}\n x_{P} \\\\ x_{T}\n \\end{pmatrix}\n =\n \\begin{pmatrix}\n I - A_{P} & -A_{T} \\\\\n - S G_{P} & I - S G_{T}\n \\end{pmatrix}^{-1}\n \\begin{pmatrix}\n y_{P} \\\\ S w_{y}\n \\end{pmatrix}.\n\\end{align}\n" }, { "math_id": 53, "text": "F_P" }, { "math_id": 54, "text": "F_T" }, { "math_id": 55, "text": "\n\\begin{pmatrix}\nA_P & A_T & 0 \\\\\n 0 & 0 & S \\\\\nG_P & G_T & 0 \n\\end{pmatrix}\n\\begin{pmatrix}\nx_P \\\\ x_T \\\\ w\n\\end{pmatrix}\n+ \n\\begin{pmatrix}\ny_P \\\\ 0 \\\\ w_y\n\\end{pmatrix}\n= \n\\begin{pmatrix}\nx_P \\\\ x_T \\\\ w\n\\end{pmatrix}\n" }, { "math_id": 56, "text": "p_P" }, { "math_id": 57, "text": "p_T" }, { "math_id": 58, "text": "v_P" }, { "math_id": 59, "text": "v_T" }, { "math_id": 60, "text": "\n\\begin{pmatrix}\np_P & p_T \n\\end{pmatrix}\n = \n\\begin{pmatrix}\np_P & p_T \n\\end{pmatrix}\n\\begin{pmatrix}\nA_P & A_T \\\\\nSG_P & SG_T\n\\end{pmatrix}\n+ \n\\begin{pmatrix}\nv_P & v_T \n\\end{pmatrix}\n" }, { "math_id": 61, "text": "r" }, { "math_id": 62, "text": "p^W" }, { "math_id": 63, "text": "n_w \\times 1" }, { "math_id": 64, "text": "\\hat{r}" }, { "math_id": 65, "text": "n_w \\times " }, { "math_id": 66, "text": "G^{+}_{P} = W^{+}_P{\\hat{x_P}}^{-1}" }, { "math_id": 67, "text": "G^{-}_P = W^{-}_P{\\hat{x_P}}^{-1}" }, { "math_id": 68, "text": "G^{+}_{T}" }, { "math_id": 69, "text": "G^{-}_{T}" }, { "math_id": 70, "text": "A_T" }, { "math_id": 71, "text": "G_T" }, { "math_id": 72, "text": "n_w \\times n_T" }, { "math_id": 73, "text": "n_T \\times n_T" }, { "math_id": 74, "text": "n_T n_W \\times n_T n_W" } ]
https://en.wikipedia.org/wiki?curid=76415171
76416
Maximum power transfer theorem
Theorem in electrical engineering In electrical engineering, the maximum power transfer theorem states that, to obtain "maximum" external power from a power source with internal resistance, the resistance of the load must equal the resistance of the source as viewed from its output terminals. Moritz von Jacobi published the maximum power (transfer) theorem around 1840; it is also referred to as "Jacobi's law". The theorem results in maximum "power" transfer from the power source to the load, but not maximum "efficiency" of useful power out of total power consumed. If the load resistance is made larger than the source resistance, then efficiency increases (since a higher percentage of the source power is transferred to the load), but the "magnitude" of the load power decreases (since the total circuit resistance increases). If the load resistance is made smaller than the source resistance, then efficiency decreases (since most of the power ends up being dissipated in the source). Although the total power dissipated increases (due to a lower total resistance), the amount dissipated in the load decreases. The theorem states how to choose (so as to maximize power transfer) the load resistance, once the source resistance is given. It is a common misconception to apply the theorem in the opposite scenario. It does "not" say how to choose the source resistance for a given load resistance. In fact, the source resistance that maximizes power transfer from a voltage source is always zero (the hypothetical ideal voltage source), regardless of the value of the load resistance. The theorem can be extended to alternating current circuits that include reactance, and states that maximum power transfer occurs when the load impedance is equal to the complex conjugate of the source impedance. The mathematics of the theorem also applies to other physical interactions beyond electrical circuits. Maximizing power transfer versus power efficiency. The theorem was originally misunderstood (notably by Joule) to imply that a system consisting of an electric motor driven by a battery could not be more than 50% efficient, since the power dissipated as heat in the battery would always be equal to the power delivered to the motor when the impedances were matched. In 1880 this assumption was shown to be false by either Edison or his colleague Francis Robbins Upton, who realized that maximum efficiency was not the same as maximum power transfer. To achieve maximum efficiency, the resistance of the source (whether a battery or a dynamo) should be made as close to zero as possible. Using this new understanding, they obtained an efficiency of about 90%, and proved that the electric motor was a practical alternative to the heat engine. The efficiency "η" is the ratio of the power dissipated by the load resistance "R"L to the total power dissipated by the circuit (which includes the voltage source's resistance of "R"S as well as "R"L): formula_0 Consider three particular cases (note that voltage sources must have some resistance): if formula_1, then formula_2; if formula_3, then formula_4; and if formula_5, then formula_6. Impedance matching. A related concept is reflectionless impedance matching. In radio frequency transmission lines, and other electronics, there is often a requirement to match the source impedance (at the transmitter) to the load impedance (such as an antenna) to avoid reflections in the transmission line. Calculus-based proof for purely resistive circuits. 
In the simplified model of powering a load with resistance "R"L by a source with voltage "V" and source resistance "R"S, then by Ohm's law the resulting current "I" is simply the source voltage divided by the total circuit resistance: formula_7 The power "P"L dissipated in the load is the square of the current multiplied by the resistance: formula_8 The value of "R"L for which this expression is a maximum could be calculated by differentiating it, but it is easier to calculate the value of "R"L for which the denominator: formula_9 is a minimum. The result will be the same in either case. Differentiating the denominator with respect to "R"L: formula_10 For a maximum or minimum, the first derivative is zero, so formula_11 or formula_12 In practical resistive circuits, "R"S and "R"L are both positive, so the positive sign in the above is the correct solution. To find out whether this solution is a minimum or a maximum, the denominator expression is differentiated again: formula_13 This is always positive for positive values of formula_14 and formula_15, showing that the denominator is a minimum, and the power is therefore a maximum, when: formula_16 The above proof assumes fixed source resistance formula_14. When the source resistance can be varied, power transferred to the load can be increased by reducing formula_17. For example, a 100 Volt source with an formula_17 of formula_18 will deliver 250 watts of power to a formula_18 load; reducing formula_17 to formula_19 increases the power delivered to 1000 watts. Note that this shows that maximum power transfer can also be interpreted as the load voltage being equal to one-half of the Thevenin voltage equivalent of the source. In reactive circuits. The power transfer theorem also applies when the source and/or load are not purely resistive. A refinement of the maximum power theorem says that any reactive components of source and load should be of equal magnitude but opposite sign. ("See below for a derivation.") Physically realizable sources and loads are not usually purely resistive, having some inductive or capacitive components, and so practical applications of this theorem, under the name of complex conjugate impedance matching, do, in fact, exist. If the source is totally inductive (capacitive), then a totally capacitive (inductive) load, in the absence of resistive losses, would receive 100% of the energy from the source but send it back after a quarter cycle. The resultant circuit is nothing other than a resonant LC circuit in which the energy continues to oscillate to and fro. This oscillation is called reactive power. Power factor correction (where an inductive reactance is used to "balance out" a capacitive one), is essentially the same idea as complex conjugate impedance matching although it is done for entirely different reasons. For a fixed reactive "source", the maximum power theorem maximizes the real power (P) delivered to the load by complex conjugate matching the load to the source. For a fixed reactive "load", power factor correction minimizes the apparent power (S) (and unnecessary current) conducted by the transmission lines, while maintaining the same amount of real power transfer. This is done by adding a reactance to the load to balance out the load's own reactance, changing the reactive load impedance into a resistive load impedance. Proof. 
In this diagram, AC power is being transferred from the source, with phasor magnitude of voltage formula_20 (positive peak voltage) and fixed source impedance formula_21 (S for source), to a load with impedance formula_22 (L for load), resulting in a (positive) magnitude formula_23 of the current phasor formula_24. This magnitude formula_23 results from dividing the magnitude of the source voltage by the magnitude of the total circuit impedance: formula_25 The average power formula_26 dissipated in the load is the square of the current multiplied by the resistive portion (the real part) formula_27 of the load impedance formula_22: formula_28 where formula_29 and formula_27 denote the resistances, that is the real parts, and formula_30 and formula_31 denote the reactances, that is the imaginary parts, of the source and load impedances formula_21 and formula_22, respectively. To determine, for a given source voltage formula_32 and impedance formula_33 the value of the load impedance formula_34 for which this expression for the power yields a maximum, one first finds, for each fixed positive value of formula_27, the value of the reactive term formula_31 for which the denominator: formula_35 is a minimum. Since reactances can be negative, this is achieved by adapting the load reactance to: formula_36 This reduces the above equation to: formula_37 and it remains to find the value of formula_27 which maximizes this expression. This problem has the same form as in the purely resistive case, and the maximizing condition therefore is formula_38 The two maximizing conditions, formula_39 and formula_40, describe the complex conjugate of the source impedance, denoted by formula_41 and thus can be concisely combined to: formula_42
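Both conclusions are easy to check numerically. The Python sketch below is illustrative: it reuses the 100 V, 10 Ω figures quoted in the resistive example above, sweeps the load resistance to confirm that the load power peaks near formula_39 at about 250 W with 50% efficiency, and then searches a grid of complex load impedances for a made-up reactive source impedance to confirm that the best load is the complex conjugate, formula_42.

```python
import numpy as np

V = 100.0          # source voltage (the 100 V, 10 ohm example from the text)
R_S = 10.0         # source resistance

# Purely resistive case: sweep R_L and locate the maximum of P_L.
R_L = np.linspace(0.1, 100.0, 10000)
P_L = (V / (R_S + R_L)) ** 2 * R_L
eta = R_L / (R_S + R_L)
best = np.argmax(P_L)
print(f"P_L is maximal near R_L = {R_L[best]:.1f} ohm "
      f"({P_L[best]:.0f} W, efficiency {eta[best]:.2f})")   # ~10 ohm, 250 W, 0.50

# Reactive case: fixed source impedance Z_S, try a grid of load impedances.
Z_S = 10.0 + 5.0j                      # made-up reactive source impedance
R = np.linspace(0.1, 40.0, 400)
X = np.linspace(-40.0, 40.0, 801)
RR, XX = np.meshgrid(R, X)
ZL = RR + 1j * XX
P = 0.5 * np.abs(V / (Z_S + ZL)) ** 2 * RR
i, j = np.unravel_index(np.argmax(P), P.shape)
print(f"best Z_L on the grid: {complex(ZL[i, j]):.1f}  "
      f"(conjugate of Z_S is {Z_S.conjugate():.1f})")
```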
[ { "math_id": 0, "text": "\\eta = \\frac{P_\\mathrm{L}}{P_\\mathrm{Total}} = \\frac{I^2 \\cdot R_\\mathrm{L}}{I^2 \\cdot (R_\\mathrm{L} + R_\\mathrm{S})} = \\frac{R_\\mathrm{L}}{R_\\mathrm{L} + R_\\mathrm{S}} = \\frac{1}{1 + R_\\mathrm{S} / R_\\mathrm{L}} \\, ." }, { "math_id": 1, "text": "R_\\mathrm{L}/R_\\mathrm{S} \\to 0" }, { "math_id": 2, "text": "\\eta \\to 0." }, { "math_id": 3, "text": "R_\\mathrm{L}/R_\\mathrm{S} = 1" }, { "math_id": 4, "text": "\\eta = \\tfrac{1}{2}." }, { "math_id": 5, "text": "R_\\mathrm{L}/R_\\mathrm{S} \\to \\infty" }, { "math_id": 6, "text": "\\eta \\to 1." }, { "math_id": 7, "text": "I = \\frac{V}{R_\\mathrm{S} + R_\\mathrm{L}}." }, { "math_id": 8, "text": "P_\\mathrm{L} = I^2 R_\\mathrm{L} = \\left(\\frac{V}{R_\\mathrm{S} + R_\\mathrm{L}}\\right)^2 R_\\mathrm{L} = \\frac{V^2}{R_\\mathrm{S}^2 / R_\\mathrm{L} + 2 R_\\mathrm{S} + R_\\mathrm{L}}." }, { "math_id": 9, "text": "R_\\mathrm{S}^2 / R_\\mathrm{L} + 2 R_\\mathrm{S} + R_\\mathrm{L}" }, { "math_id": 10, "text": "\\frac{d}{d R_\\mathrm{L}} \\left(R_\\mathrm{S}^2 / R_\\mathrm{L} + 2R_\\mathrm{S} + R_\\mathrm{L}\\right) = -R_\\mathrm{S}^2 / R_\\mathrm{L}^2 + 1." }, { "math_id": 11, "text": "R_\\mathrm{S}^2 / R_\\mathrm{L}^2 = 1" }, { "math_id": 12, "text": "R_\\mathrm{L} = \\pm R_\\mathrm{S}." }, { "math_id": 13, "text": "\\frac{d^2}{dR_\\mathrm{L}^2} \\left( {R_\\mathrm{S}^2 / R_\\mathrm{L} + 2 R_\\mathrm{S} + R_\\mathrm{L}} \\right) = {2 R_\\mathrm{S}^2} / {R_\\mathrm{L}^3}. " }, { "math_id": 14, "text": "R_\\mathrm{S} " }, { "math_id": 15, "text": "R_\\mathrm{L} " }, { "math_id": 16, "text": "R_\\mathrm{S} = R_\\mathrm{L}. " }, { "math_id": 17, "text": "R_\\textrm{S}" }, { "math_id": 18, "text": "10\\,\\Omega" }, { "math_id": 19, "text": "0\\,\\Omega" }, { "math_id": 20, "text": "|V_\\text{S}|" }, { "math_id": 21, "text": "Z_\\text{S}" }, { "math_id": 22, "text": "Z_\\text{L}" }, { "math_id": 23, "text": "|I|" }, { "math_id": 24, "text": "I" }, { "math_id": 25, "text": "|I| = {|V_\\text{S}| \\over |Z_\\text{S} + Z_\\text{L}|} ." }, { "math_id": 26, "text": "P_\\text{L}" }, { "math_id": 27, "text": "R_\\text{L}" }, { "math_id": 28, "text": "\\begin{align}\nP_\\text{L} & = I_\\text{rms}^2 R_\\text{L} = {1 \\over 2} |I|^2 R_\\text{L}\\\\\n& = {1 \\over 2} \\left( {|V_\\text{S}| \\over |Z_\\text{S} + Z_\\text{L}|} \\right)^2 R_\\text{L} = {1 \\over 2}{ |V_\\text{S}|^2 R_\\text{L} \\over (R_\\text{S} + R_\\text{L})^2 + (X_\\text{S} + X_\\text{L})^2},\n\\end{align}" }, { "math_id": 29, "text": "R_\\text{S}" }, { "math_id": 30, "text": "X_\\text{S}" }, { "math_id": 31, "text": "X_\\text{L}" }, { "math_id": 32, "text": "V_\\text{S}" }, { "math_id": 33, "text": "Z_\\text{S}," }, { "math_id": 34, "text": "Z_\\text{L}, " }, { "math_id": 35, "text": "(R_\\text{S} + R_\\text{L})^2 + (X_\\text{S} + X_\\text{L})^2 " }, { "math_id": 36, "text": "X_\\text{L} = -X_\\text{S}." }, { "math_id": 37, "text": "P_\\text{L} = \\frac 1 2 \\frac{|V_\\text{S}|^2 R_\\text{L}}{(R_\\text{S} + R_\\text{L})^2}" }, { "math_id": 38, "text": "R_\\text{L} = R_\\text{S}." }, { "math_id": 39, "text": "R_\\text{L} = R_\\text{S}" }, { "math_id": 40, "text": "X_\\text{L} = -X_\\text{S}" }, { "math_id": 41, "text": "^*," }, { "math_id": 42, "text": "Z_\\text{L} = Z_\\text{S}^*." } ]
https://en.wikipedia.org/wiki?curid=76416
7641969
Circuit complexity
Model of computational complexity In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits formula_0 (see below). Proving lower bounds on the size of Boolean circuits computing explicit Boolean functions is a popular approach to separating complexity classes. For example, a prominent circuit class P/poly consists of Boolean functions computable by circuits of polynomial size. Proving that formula_1 would separate P and NP (see below). Complexity classes defined in terms of Boolean circuits include AC0, AC, TC0, NC1, NC, and P/poly. Size and depth. A Boolean circuit with formula_2 input bits is a directed acyclic graph in which every node (usually called "gates" in this context) is either an input node of in-degree 0 labelled by one of the formula_2 input bits, an AND gate, an OR gate, or a NOT gate. One of these gates is designated as the output gate. Such a circuit naturally computes a function of its formula_2 inputs. The size of a circuit is the number of gates it contains and its depth is the maximal length of a path from an input gate to the output gate. There are two major notions of circuit complexity. The circuit-size complexity of a Boolean function formula_3 is the minimal size of any circuit computing formula_3. The circuit-depth complexity of a Boolean function formula_3 is the minimal depth of any circuit computing formula_3. These notions generalize when one considers the circuit complexity of any language that contains strings with different bit lengths, especially infinite formal languages. Boolean circuits, however, only allow a fixed number of input bits. Thus, no single Boolean circuit is capable of deciding such a language. To account for this possibility, one considers families of circuits formula_0 where each formula_4 accepts inputs of size formula_2. Each circuit family will naturally generate the language by circuit formula_4 outputting formula_5 when a length formula_2 string is a member of the language, and formula_6 otherwise. We say that a family of circuits is size minimal if there is no other family that decides on inputs of any size, formula_2, with a circuit of smaller size than formula_7 (respectively for depth-minimal families). Thus, circuit complexity is meaningful even for non-recursive languages. The notion of a uniform family enables variants of circuit complexity to be related to algorithm-based complexity measures of recursive languages. However, the non-uniform variant is helpful for finding lower bounds on how complex any circuit family must be in order to decide given languages. Hence, the circuit-size complexity of a formal language formula_8 is defined as the function formula_9 that relates the bit length of an input, formula_2, to the circuit-size complexity of a minimal circuit formula_4 that decides whether inputs of that length are in formula_8. The circuit-depth complexity is defined similarly. Uniformity. Boolean circuits are one of the prime examples of so-called non-uniform models of computation in the sense that inputs of different lengths are processed by different circuits, in contrast with uniform models such as Turing machines where the same computational device is used for all possible input lengths. 
An individual computational problem is thus associated with a particular "family" of Boolean circuits formula_10 where each formula_7 is the circuit handling inputs of "n" bits. A "uniformity" condition is often imposed on these families, requiring the existence of some possibly resource-bounded Turing machine that, on input "n", produces a description of the individual circuit formula_7. When this Turing machine has a running time polynomial in "n", the circuit family is said to be P-uniform. The stricter requirement of DLOGTIME-uniformity is of particular interest in the study of shallow-depth circuit classes such as AC0 or TC0. When no resource bounds are specified, a language is recursive (i.e., decidable by a Turing machine) if and only if the language is decided by a uniform family of Boolean circuits. Polynomial-time uniform. A family of Boolean circuits formula_11 is "polynomial-time uniform" if there exists a deterministic Turing machine "M" that runs in polynomial time and, for every formula_12, outputs a description of formula_7 on input formula_13. Logspace uniform. A family of Boolean circuits formula_11 is "logspace uniform" if there exists a deterministic Turing machine "M" that runs in logarithmic space and, for every formula_12, outputs a description of formula_7 on input formula_13. History. Circuit complexity goes back to Shannon in 1949, who proved that almost all Boolean functions on "n" variables require circuits of size Θ(2^n/n). Despite this fact, complexity theorists have so far been unable to prove a superlinear lower bound for any explicit function. Superpolynomial lower bounds have been proved under certain restrictions on the family of circuits used. The first function for which superpolynomial circuit lower bounds were shown was the parity function, which computes the sum of its input bits modulo 2. The fact that parity is not contained in AC0 was first established independently by Ajtai in 1983 and by Furst, Saxe and Sipser in 1984. Later improvements by Håstad in 1987 established that any family of constant-depth circuits computing the parity function requires exponential size. Extending a result of Razborov, Smolensky in 1987 proved that this is true even if the circuit is augmented with gates computing the sum of its input bits modulo some odd prime "p". The "k"-clique problem is to decide whether a given graph on "n" vertices has a clique of size "k". For any particular choice of the constants "n" and "k", the graph can be encoded in binary using formula_14 bits, which indicate for each possible edge whether it is present. Then the "k"-clique problem is formalized as a function formula_15 such that formula_16 outputs 1 if and only if the graph encoded by the string contains a clique of size "k". This family of functions is monotone and can be computed by a family of circuits, but it has been shown that it cannot be computed by a polynomial-size family of monotone circuits (that is, circuits with AND and OR gates but without negation). The original result of Razborov in 1985 was later improved to an exponential-size lower bound by Alon and Boppana in 1987. In 2008, Rossman showed that constant-depth circuits with AND, OR, and NOT gates require size formula_17 to solve the "k"-clique problem even in the average case. Moreover, there is a circuit of size formula_18 that computes formula_16. In 1999, Raz and McKenzie showed that the monotone NC hierarchy is infinite. The Integer Division Problem lies in uniform TC0. Circuit lower bounds. Circuit lower bounds are generally difficult. Known results include the lower bounds for parity and monotone clique circuits discussed above. It is open whether NEXPTIME has nonuniform TC0 circuits. Proofs of circuit lower bounds are strongly connected to derandomization. 
A proof that formula_19 would imply either that formula_20 or that the permanent cannot be computed by nonuniform arithmetic circuits (polynomials) of polynomial size and polynomial degree. In 1997, Razborov and Rudich showed that many known circuit lower bounds for explicit Boolean functions imply the existence of so-called natural properties useful against the respective circuit class. On the other hand, natural properties useful against P/poly would break strong pseudorandom generators. This is often interpreted as a "natural proofs" barrier for proving strong circuit lower bounds. In 2016, Carmosino, Impagliazzo, Kabanets and Kolokolova proved that natural properties can also be used to construct efficient learning algorithms. Complexity classes. Many circuit complexity classes are defined in terms of class hierarchies. For each non-negative integer "i", there is a class NCi, consisting of polynomial-size circuits of depth formula_21, using bounded fan-in AND, OR, and NOT gates. The union of all of these classes is the class NC. By considering unbounded fan-in gates, the classes ACi and AC (which is equal to NC) can be constructed. Many other circuit complexity classes with the same size and depth restrictions can be constructed by allowing different sets of gates. Relation to time complexity. If a certain language, formula_8, belongs to the time-complexity class formula_22 for some function formula_9, then formula_8 has circuit complexity formula_23. If the Turing machine that accepts the language is oblivious (meaning that it reads and writes the same memory cells regardless of input), then formula_8 has circuit complexity formula_24. Monotone circuits. A monotone Boolean circuit is one that has only AND and OR gates, but no NOT gates. A monotone circuit can only compute a monotone Boolean function, which is a function formula_25 where for every formula_26, formula_27, where formula_28 means that formula_29 for all formula_30.
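As a concrete illustration of the size and depth measures defined above, the following self-contained Python sketch (an illustration only; the gate encoding and function names are invented) represents a Boolean circuit as a DAG, evaluates it, and reports its size and depth; the example circuit computes the parity of three input bits using AND, OR, and NOT gates.

```python
from functools import lru_cache
from itertools import product

# A circuit is a DAG stored as a dict:
#   gate name -> ("INPUT", index) | ("NOT", a) | ("AND", a, b) | ("OR", a, b)
def xor_gates(prefix, a, b, circuit):
    """Append gates computing a XOR b from AND/OR/NOT; return the output gate name."""
    circuit[f"{prefix}_na"] = ("NOT", a)
    circuit[f"{prefix}_nb"] = ("NOT", b)
    circuit[f"{prefix}_l"] = ("AND", a, f"{prefix}_nb")
    circuit[f"{prefix}_r"] = ("AND", f"{prefix}_na", b)
    circuit[f"{prefix}_out"] = ("OR", f"{prefix}_l", f"{prefix}_r")
    return f"{prefix}_out"

circuit = {"x0": ("INPUT", 0), "x1": ("INPUT", 1), "x2": ("INPUT", 2)}
t = xor_gates("g1", "x0", "x1", circuit)
out = xor_gates("g2", t, "x2", circuit)

def evaluate(circuit, out, bits):
    @lru_cache(maxsize=None)
    def val(g):
        kind, *args = circuit[g]
        if kind == "INPUT":
            return bits[args[0]]
        if kind == "NOT":
            return 1 - val(args[0])
        vals = [val(a) for a in args]
        return min(vals) if kind == "AND" else max(vals)
    return val(out)

def depth(circuit, g):
    """Length (in edges) of the longest path from an input node to gate g."""
    kind, *args = circuit[g]
    return 0 if kind == "INPUT" else 1 + max(depth(circuit, a) for a in args)

print("size  =", len(circuit))          # number of nodes, input nodes included
print("depth =", depth(circuit, out))
for bits in product((0, 1), repeat=3):
    assert evaluate(circuit, out, bits) == sum(bits) % 2
print("the circuit computes the parity of its 3 input bits")
```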
[ { "math_id": 0, "text": "C_{1},C_{2},\\ldots" }, { "math_id": 1, "text": "\\mathsf{NP}\\not\\subseteq \\mathsf{P/poly}" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "C_{n}" }, { "math_id": 5, "text": "1" }, { "math_id": 6, "text": "0" }, { "math_id": 7, "text": "C_n" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "t:\\mathbb{N}\\to\\mathbb{N}" }, { "math_id": 10, "text": "C_1, C_2, \\dots " }, { "math_id": 11, "text": "\\{C_n:n \\in \\mathbb{N}\\}" }, { "math_id": 12, "text": "n \\in \\mathbb{N}" }, { "math_id": 13, "text": "1^n" }, { "math_id": 14, "text": "{n \\choose 2}" }, { "math_id": 15, "text": "f_k:\\{0,1\\}^{{n \\choose 2}}\\to\\{0,1\\}" }, { "math_id": 16, "text": "f_k" }, { "math_id": 17, "text": "\\Omega(n^{k/4})" }, { "math_id": 18, "text": "n^{k/4+O(1)}" }, { "math_id": 19, "text": "\\mathsf{P} = \\mathsf{BPP}" }, { "math_id": 20, "text": "\\mathsf{NEXP} \\not \\subseteq \\mathsf{P/poly}" }, { "math_id": 21, "text": "O(\\log^i(n))" }, { "math_id": 22, "text": "\\text{TIME}(t(n))" }, { "math_id": 23, "text": "\\mathcal{O}(t(n) \\log t(n))" }, { "math_id": 24, "text": "\\mathcal{O}(t(n))" }, { "math_id": 25, "text": "f:\\{0,1\\}^n \\to \\{0,1\\}" }, { "math_id": 26, "text": "x,y \\in \\{0,1\\}^n" }, { "math_id": 27, "text": "x \\leq y \\implies f(x) \\leq f(y)" }, { "math_id": 28, "text": "x\\leq y" }, { "math_id": 29, "text": "x_i \\leq y_i" }, { "math_id": 30, "text": "i \\in \\{1,\\ldots,n\\}" } ]
https://en.wikipedia.org/wiki?curid=7641969
76423616
Han Xin code
Type of matrix barcode Han Xin code (汉信码 in Chinese, "Chinese-sensible code") is a two-dimensional (2D) matrix barcode symbology invented in 2007 by the Chinese company The Article Numbering Center of China (中国物品编码中心 in Chinese) to break the monopoly of QR code. Like QR code, Han Xin code consists of black squares and white square spaces arranged in a square grid on a white background. It has four finder patterns and other markers which allow it to be recognized by camera-based readers. Han Xin code includes Reed–Solomon error correction with the ability to read corrupted images. It is currently published as ISO/IEC 20830:2021. Its main advantage over QR code (and the requirement that motivated its invention) is the embedded ability to natively encode Chinese characters, whereas QR code natively targets Japanese character sets. In its maximal version 84 (189×189 modules), Han Xin code can encode 7827 numeric characters, 4350 English text characters, 3261 bytes, and 1044–2174 Chinese characters (depending on the Unicode region). Han Xin code encodes the full set of ISO/IEC 646 Latin characters, rather than the restricted set of Latin characters supported by QR code. This makes Han Xin code more suitable for encoding English text or GS1 Application Identifiers data. Additionally, Han Xin code can encode Unicode characters from other languages with a special Unicode mode, which has embedded lossless compression for the UTF-8 character set and Extended Channel Interpretation support. Han Xin code also has a special compactification mode for URI encoding, which can reduce the size of barcodes that encode links to web pages. History and standards. The Chinese company The Article Numbering Center of China (中国物品编码中心 in Chinese), during China's 10th Five-Year Plan, started research on a replacement for QR code to remove the Japanese monopoly on 2D barcodes. In 2007, the new barcode standard, by then known as Han Xin code, was published as GB/T 21049-2007 under the name Chinese-sensible code. In 2011, the US company Association for Automatic Identification and Mobility (AIM) brought out "ISS Han Xin Code symbology" as an official encoding standard and published it in its own store. In 2015, a group within ISO/IEC JTC 1/SC 31 started work on Han Xin code as an international standard, and it was published as ISO/IEC 20830:2021 in 2021. In 2022, the Chinese-sensible code standard was reviewed as GB/T 21049-2022 and renamed Han Xin code to be compliant with the ISO standard. A set of patents related to Han Xin code encoding and decoding is registered with the United States Patent and Trademark Office. Application. Han Xin code can be used in the same way as QR code. At this time Han Xin code is used mostly in China, because of its embedded ability to encode Chinese characters. However, most barcode printers and barcode scanners support Han Xin code. Han Xin code can be scanned on iOS and Android mobile devices, and many barcode libraries support reading and writing Han Xin code. The main advantages of Han Xin code are outlined in the introduction above. Barcode design. Han Xin code represents data in black and white square modules, where a dark module is a binary one and a light module is a zero. Additionally, Han Xin code can be encoded in inverse colors, but this option is disabled by default in many barcode readers. Black and white modules are arranged into a square region with sizes from 23 × 23 modules (Version 1) to 189 × 189 modules (Version 84). Like QR code, Han Xin code does not have rectangular versions as DataMatrix has, and this restricts the usage of Han Xin code in some cases. 
The size of a Han Xin code version can be calculated with the following formula: <br>formula_0 A Han Xin code symbol is constructed from the following elements: Finder pattern. The Finder Pattern consists of four Position Detection Patterns located at the four corners of the barcode. The size of a Position Detection Pattern is 7 × 7 modules and it is constructed from 5 elements: dark 7 × 7 modules, light 6 × 6 modules, dark 5 × 5 modules, light 4 × 4 modules, and dark 3 × 3 modules, respectively. The scanning ratio of each Position Detection Pattern is 1:1:1:1:3 or 3:1:1:1:1 (depending on the scanning direction). The orientation of the four patterns allows the location and orientation of the barcode to be detected unambiguously. Every pattern has a Position Detection Pattern separator with the Structural Information Region aligned to it. Alignment pattern. Alignment Patterns are added to Han Xin code from Version 4 onwards (Versions 1–3 do not have alignment patterns) and are used to refine cell positions in distorted barcodes. Alignment Patterns in Han Xin code are split into Alignment Patterns and Assistant Alignment Patterns. The Alignment Pattern is made up of a dark line and an adjacent light line below it, each one module wide. The Assistant Alignment Pattern, consisting of 5 light modules and 1 dark module, indicates the edge of a region block with its dark module. Below are examples of Han Xin code with different Alignment Pattern placements. Structural information. The Han Xin code Structural Information Region is a one-module-wide region surrounding the four Position Detection Patterns. Han Xin code has two identical Structural Information arrays, each made from 34 data modules. Every Structural Information array is split into 17-module parts which are placed around each Position Detection Pattern. The Structural Information Region encodes metadata about the symbol. Metadata bits 0–11 are split into 4-bit tetrads (m2, m1, m0) and supplemented with four error correction tetrads (r3, r2, r1, r0). Data masking. To keep the ratio of dark and light modules in the symbol close to 1:1, a masking algorithm is used. The masking sequence is applied to the Data Region through an XOR operation. The Finder Pattern, Alignment Patterns, and Structural Information Regions are excluded from the masking operation. The following table shows the mask pattern algorithms (the chosen pattern is recorded in the Structural Information Region). i - Row index of the symbol. <br>j - Column index of the symbol. <br>Both i and j start from (1,1), the top left corner module of the symbol. When the masking condition is true, the resulting mask bit is 1. Error correction. Han Xin code uses Reed–Solomon error correction. Encoded data is represented as a byte (8-bit) array. The data array is divided into blocks, and an error correction codeword sequence is generated for each block and appended to the end of that block. After this, all blocks are merged sequentially into a byte stream. The polynomial arithmetic for Han Xin code uses the finite field generator polynomial x^8 + x^6 + x^5 + x + 1 (355, or 101100011b) with initial root = 1. The number of error correction codewords depends on the symbol version and error correction level and can range from 16% to 60%, which allows correction of 8% to 30% damage. Data region. Han Xin code data is encoded as a byte array. The data byte array is split into error correction blocks, to which error correction codewords (bytes) are added. The error correction blocks are then united into one codeword array. As an example, this can be demonstrated on a Han Xin code version 5 symbol with error correction level L4. 
It has 27 encoded codewords and 2 error correction blocks, with the block sizes of data codewords and error correction codewords being (14, 20) and (13, 22): <br>D(x) - data codewords. <br>E(b.x) - error correction codeword, where b is the block number and x the position in the block. <br>C(x) - resulting codewords. As the next operation, the resulting codeword array C(x) is split into blocks of 13 bytes; codewords in the same position of each block are connected to form a new codeword array. The result is a byte array of the same size but interleaved with period 13. <br>CM(x) - array of codewords (bytes) interleaved with period 13. After these operations the resulting codewords are placed into the data region row by row, from left to right and from top to bottom. Damage along a horizontal line therefore affects fewer codewords, while damage along a vertical line affects more codewords. Encoding. Han Xin code can encode 7827 numeric characters, 4350 English text characters, 3261 bytes, and 1044–2174 Chinese characters in the maximal version 84. Additionally, it supports special Unicode and industrial modes. All modes can be mixed to obtain the best compactification level for the data. The following table demonstrates the ability to encode data with different barcode versions and error correction levels. Encoding modes. All encoding modes can be split into the following groups: Numeric mode. The input data string in Numeric mode is divided into blocks of three digits (the last block can be shorter than three) and each block is encoded in 10 bits (0000000000b - 1111100111b). The mode data is prefixed with the mode indicator 0001b and terminates with a mode terminator which also indicates the number of digits in the last group. As an example, the digit sequence 12700402 is encoded as follows (see also the sketch at the end of this article): <br>Prefix => 0001b <br>127 => 0001111111 <br>004 => 0000000100 <br>02 => 0000000010 <br>Terminator => 1111111110b Text mode. Text mode encodes a character set drawn from ISO/IEC 646. Each character is represented by 6 bits. All characters are divided into two subsets: the Text1 sub-mode and the Text2 sub-mode. The value 11110b is used to switch between text sub-modes, and 111111b is the mode terminator. Text mode starts in the Text1 sub-mode. Binary byte mode. Binary mode encodes an arbitrary byte array [0 - 255]. Binary mode consists of the binary mode indicator 0011b, a 13-bit binary counter, and the data bytes, each converted to an 8-bit sequence. No mode terminator is required. Chinese Characters modes. The Chinese Characters modes are a set of 4 modes which encode Chinese characters from the GB 18030 codepage. Unicode mode. Unicode mode encodes the UTF-8 charset with embedded lossless compression. In the Unicode mode, the input data is analysed by a self-adaptive algorithm. First, the input data is divided and combined into 1-, 2-, 3-, or 4-byte pattern pre-encoding sub-sequences, and second, a run-length data compression algorithm is applied to encode each sub-sequence of the input data. In short, the Unicode mode searches for character sub-pages which can share the same prefix sequence for all characters of the same language (Cyrillic, Greek, French, German, and other languages) and encodes only the differences from the prefix byte sequence. GS1 mode. Han Xin code GS1 mode is an indicator that the represented data is defined by the GS1 General Specification. GS1 mode encodes data in the Numeric and Text modes. Other modes may be used, but GS1 mode must be the first mode in the symbol and the encoded data must be returned with a GS1 flag. 
<FNC1> (if required) must be encoded as 1111101000b in Numeric mode (Numeric mode normally encodes only three digits, so the value 1000 in 1111101000b is treated as a special character). If the <FNC1> identifier must be inserted while the encoder is in any mode other than Numeric, that mode must be terminated and Numeric mode must be started. The GS1 mode indicator is 11100001b and the GS1 mode terminator is 11111111b. The data in GS1 mode is split into GS1 Application Identifier chunks and then compacted with the best-fitting modes. As an example, the following data can be encoded: <br>(10)123456ABC<FNC1>(240)DATA The data is encoded in the following way: <br><11100001b> <Numeric 10123456> <Text ABC> <Numeric mode selector> <1111101000b> <Numeric 240> <Text DATA> <11111111b> URI mode. Han Xin code URI mode encodes URI links in a compact encoding. The URI mode indicator is 11100010b and the URI mode terminator is 111b. URI mode can encode data in three charsets, URI-A, URI-B, and URI-C, each with its own sub-mode terminator. URI mode can also encode %XX data in a special Percent-Encoding sub-mode, where three symbols are encoded in 8 bits. The Percent-Encoding sub-mode encodes %XX data as an 8-bit sequence and does not require a terminator. To encode URI %XX data in this mode, the sub-mode indicator (100b) must be added, then an 8-bit counter of the number of 8-bit sequences must be added (counter = length of the %XX data / 3), and after this the sequence itself, where %FF, or %ff, or %00 must be added as 0xFF or 0x00 bytes.
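The version-size formula and the Numeric-mode example above are straightforward to reproduce. The Python sketch below is illustrative only: the function names are invented, and it hardcodes the terminator value quoted in the article for a final group of two digits rather than implementing the full terminator table from the specification.

```python
def hanxin_symbol_size(version: int) -> int:
    """Side length in modules for a given Han Xin version (formula in the text)."""
    if not 1 <= version <= 84:
        raise ValueError("Han Xin versions run from 1 to 84")
    return 23 + (version - 1) * 2

def numeric_mode_bits(digits: str) -> str:
    """Numeric-mode bit stream for the worked example in the text.

    Groups of three digits become 10-bit values after the 0001b mode
    indicator.  The terminator used here (1111111110b) is the one quoted
    in the article for a final group of two digits; terminator values for
    other final-group lengths are not reproduced in this sketch.
    """
    assert digits.isdigit() and len(digits) % 3 == 2
    bits = "0001"                                   # Numeric mode indicator
    for i in range(0, len(digits), 3):
        group = digits[i:i + 3]
        bits += format(int(group), "010b")          # each group in 10 bits
    bits += "1111111110"                            # terminator (2-digit final group)
    return bits

print(hanxin_symbol_size(1), hanxin_symbol_size(84))     # 23 189
print(numeric_mode_bits("12700402"))
# -> 0001 0001111111 0000000100 0000000010 1111111110 (without the spaces)
```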
[ { "math_id": 0, "text": "Size = 23 + (Version - 1) * 2" } ]
https://en.wikipedia.org/wiki?curid=76423616
76423933
Janzen–Rayleigh expansion
In fluid dynamics, the Janzen–Rayleigh expansion is a regular perturbation expansion, using the relevant Mach number as the small expansion parameter, for velocity fields that exhibit slight compressibility effects. The expansion was first studied by O. Janzen in 1913 and Lord Rayleigh in 1916. Steady potential flow. Consider a steady potential flow that is characterized by the velocity potential formula_0 Then formula_1 satisfies formula_2 where formula_3, the sound speed, is expressed as a function of the velocity magnitude formula_4 For a polytropic gas, we can write formula_5 where formula_6 is the specific heat ratio, formula_7 is the stagnation sound speed (i.e., the sound speed in a gas at rest) and formula_8 is the stagnation enthalpy. Let formula_9 be the characteristic velocity scale and formula_10 the characteristic value of the sound speed; then the function formula_11 is of the form formula_12 where formula_13 is the relevant Mach number. For small Mach numbers, we can introduce the series formula_14 Substituting this into the governing equation and collecting terms of different orders of formula_15 leads to a set of equations. These are formula_16 and so on. Note that formula_17 is independent of formula_6, whereas the latter quantity does appear in the problem for formula_18. Imai–Lamla method. A simple method for finding the particular integral for formula_17 in two dimensions was devised by Isao Imai and Lamla. In two dimensions, the problem can be handled using complex analysis by introducing the complex potential formula_19 formally regarded as a function of formula_20 and its conjugate formula_21; here formula_22 is the stream function, defined such that formula_23 where formula_24 is some reference value for the density. The perturbation series of formula_25 is given by formula_26 where formula_27 is an analytic function, since formula_28 and formula_29, being solutions of the Laplace equation, are harmonic functions. The integral for the first-order problem leads to the Imai–Lamla formula formula_30 where formula_31 is the homogeneous solution (an analytic function), which can be used to satisfy the necessary boundary conditions. The series for the complex velocity formula_32 is given by formula_33 where formula_34 and formula_35 References. <templatestyles src="Reflist/styles.css" />
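A minimal symbolic sketch of the Imai–Lamla particular integral is given below. The choice of zeroth-order potential (uniform flow past a circular cylinder of radius a, with the free-stream factor taken out as in the series above), the treatment of the overbar as substitution of the conjugate variable (valid here because the coefficients are real), and the omission of the homogeneous part F(z) are assumptions of this sketch, not part of the original derivation.

```python
import sympy as sp

z, zbar = sp.symbols('z zbar')
a = sp.symbols('a', positive=True)

# Assumed zeroth-order complex potential: uniform flow past a circular
# cylinder of radius a (free-stream speed factored out of the series).
f0 = z + a**2 / z

g0 = sp.diff(f0, z)                      # zeroth-order complex velocity df0/dz
int_g0_sq = sp.integrate(g0**2, z)       # indefinite integral of (df0/dz)^2

# For an analytic function with real coefficients, the conjugate function
# is obtained by substituting zbar for z.
int_g0_sq_bar = int_g0_sq.subs(z, zbar)

# Imai–Lamla particular integral, with the homogeneous part F(z) set to zero.
f1 = sp.Rational(1, 4) * g0 * int_g0_sq_bar
print(sp.simplify(f1))
```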
[ { "math_id": 0, "text": "\\varphi(\\mathbf x)." }, { "math_id": 1, "text": "\\varphi" }, { "math_id": 2, "text": "(c^2-\\varphi_x^2)\\varphi_{xx}+(c^2-\\varphi_y^2)\\varphi_{yy}+(c^2-\\varphi_z^2)\\varphi_{zz}-2(\\varphi_x\\varphi_y\\varphi_{xy}+\\varphi_y\\varphi_z\\varphi_{yz}+\\varphi_z\\varphi_x\\phi_{zx})=0" }, { "math_id": 3, "text": "c=c(v^2)" }, { "math_id": 4, "text": "v^2=(\\nabla \\varphi)^2." }, { "math_id": 5, "text": "c^2 = c_0^2 - \\frac{\\gamma-1}{2}v^2" }, { "math_id": 6, "text": "\\gamma" }, { "math_id": 7, "text": "c_0^2 = h_0(\\gamma-1)/2" }, { "math_id": 8, "text": "h_0" }, { "math_id": 9, "text": "U" }, { "math_id": 10, "text": "c_0" }, { "math_id": 11, "text": "c(v^2)" }, { "math_id": 12, "text": "\\frac{c^2}{U^2} = \\frac{1}{M^2} - \\frac{\\gamma-1}{2}\\frac{v^2}{U^2}." }, { "math_id": 13, "text": "M=U/c_0" }, { "math_id": 14, "text": "\\varphi = U (\\varphi_0 + M^2 \\varphi_1 + M^4 \\varphi_2 + \\cdots)" }, { "math_id": 15, "text": "Ma" }, { "math_id": 16, "text": "\\begin{align}\n\\nabla^2\\varphi_0 &= 0,\\\\\n\\nabla^2\\varphi_1 & = \\varphi_{0,x}^2\\varphi_{0,xx} + \\varphi_{0,y}^2\\varphi_{0,yy} + \\varphi_{0,z}^2\\varphi_{0,zz} +2(\\varphi_{0,x}\\varphi_{0,y}\\varphi_{0,xy}+\\varphi_{0,y}\\varphi_{0,z}\\varphi_{0,yz}+\\varphi_{0,z}\\varphi_{0,x}\\phi_{0,zx}),\n\\end{align}" }, { "math_id": 17, "text": "\\varphi_1" }, { "math_id": 18, "text": "\\varphi_2" }, { "math_id": 19, "text": "f(z,\\overline z) = \\varphi + i\\psi" }, { "math_id": 20, "text": "z=x+iy" }, { "math_id": 21, "text": "\\overline z = x-iy" }, { "math_id": 22, "text": "\\psi" }, { "math_id": 23, "text": "u =\\frac{\\rho_\\infty}{\\rho}\\frac{\\partial\\psi}{\\partial y}=\\frac{\\partial\\varphi}{\\partial x}, \\quad v =-\\frac{\\rho_\\infty}{\\rho}\\frac{\\partial\\psi}{\\partial x}=\\frac{\\partial\\varphi}{\\partial y}" }, { "math_id": 24, "text": "\\rho_\\infty" }, { "math_id": 25, "text": "f" }, { "math_id": 26, "text": "f(z,\\overline z) = U[f_0(z) + M^2 f_1(z,\\overline z) + \\cdots]" }, { "math_id": 27, "text": "f_0=f_0(z)" }, { "math_id": 28, "text": "\\varphi_0" }, { "math_id": 29, "text": "\\psi_0" }, { "math_id": 30, "text": "f_1(z,\\overline z) = \\frac{1}{4} \\frac{df_0}{dz}\\overline{\\int\\left(\\frac{df_0}{dz}\\right)^2dz} + F(z)" }, { "math_id": 31, "text": "F(z)" }, { "math_id": 32, "text": "g = u-iv " }, { "math_id": 33, "text": "g(z,\\overline z) = U[g_0(z) + M^2g_1(z,\\overline z) +\\cdots]" }, { "math_id": 34, "text": "g_0=df_0/dz" }, { "math_id": 35, "text": "g_1(z,\\overline z) = \\frac{1}{4} \\frac{d^2f_0}{dz^2}\\overline{\\int\\left(\\frac{df_0}{dz}\\right)^2dz} + \\frac{1}{4}\\overline{ \\frac{df_0}{dz}}\\left(\\frac{df_0}{dz}\\right)^2 + \\frac{dF}{dz}." } ]
https://en.wikipedia.org/wiki?curid=76423933
76425
Kater's pendulum
Reversible free swinging pendulum A Kater's pendulum is a reversible free swinging pendulum invented by British physicist and army captain Henry Kater in 1817, published 29th January 1818 for use as a gravimeter instrument to measure the local acceleration of gravity. Its advantage is that, unlike previous pendulum gravimeters, the pendulum's centre of gravity and center of oscillation do not have to be determined, allowing a greater accuracy. For about a century, until the 1930s, Kater's pendulum and its various refinements remained the standard method for measuring the strength of the Earth's gravity during geodetic surveys. It is now used only for demonstrating pendulum principles. Description. A pendulum can be used to measure the acceleration of gravity "g" because for narrow swings its period of swing "T" depends only on "g" and its length "L": formula_0 So by measuring the length "L" and period "T" of a pendulum, "g" can be calculated. The Kater's pendulum consists of a rigid metal bar with two pivot points, one near each end of the bar. It can be suspended from either pivot and swung. It also has either an adjustable weight that can be moved up and down the bar, or one adjustable pivot, to adjust the periods of swing. In use, it is swung from one pivot, and the period timed, and then turned upside down and swung from the other pivot, and the period timed. The movable weight (or pivot) is adjusted until the two periods are equal. At this point the period "T" is equal to the period of an 'ideal' simple pendulum of length equal to the distance between the pivots. From the period and the measured distance "L" between the pivots, the acceleration of gravity can be calculated with great precision from the equation (1) above. The acceleration due to gravity by Kater's pendulum is given by formula_1 where "T"1 and "T"2 are the time periods of oscillations when it is suspended from "K"1 and "K"2 respectively and "ℓ"1 and "ℓ"2 are the distances of knife edges "K"1 and "K"2 from the center of gravity respectively. History. Gravity measurement with pendulums. The first person to discover that gravity varied over the Earth's surface was French scientist Jean Richer, who in 1671 was sent on an expedition to Cayenne, French Guiana, by the French Académie des Sciences, assigned the task of making measurements with a pendulum clock. Through the observations he made in the following year, Richer determined that the clock was <templatestyles src="Fraction/styles.css" />2+1⁄2 minutes per day slower than at Paris, or equivalently the length of a pendulum with a swing of one second there was <templatestyles src="Fraction/styles.css" />1+1⁄4 Paris "lines", or 2.6 mm, shorter than at Paris. It was realized by the scientists of the day, and proven by Isaac Newton in 1687, that this was due to the fact that the Earth was not a perfect sphere but slightly oblate; it was thicker at the equator because of the Earth's rotation. Since the surface was farther from the Earth's center at Cayenne than at Paris, gravity was weaker there. After that discovery was made, freeswinging pendulums started to be used as precision gravimeters, taken on voyages to different parts of the world to measure the local gravitational acceleration. The accumulation of geographical gravity data resulted in more and more accurate models of the overall shape of the Earth. 
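A short numerical sketch of the reversible-pendulum formula quoted in the Description section is given below; the periods and knife-edge distances are illustrative values chosen for this example, not Kater's measurements.

```python
import math

def kater_g(T1, T2, l1, l2):
    """g from a Kater pendulum: T1, T2 are the periods about the two knife edges,
    l1, l2 the distances of the knife edges from the center of gravity."""
    return 8 * math.pi**2 / ((T1**2 + T2**2) / (l1 + l2)
                             + (T1**2 - T2**2) / (l1 - l2))

# Illustrative numbers only: nearly equal 2 s periods, knife edges
# 0.60 m and 0.40 m from the center of gravity (so 1.00 m apart).
print(kater_g(2.004, 2.001, 0.60, 0.40))   # roughly 9.8 m/s^2
```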
Pendulums were so universally used to measure gravity that, in Kater's time, the local strength of gravity was usually expressed not by the value of the acceleration "g" now used, but by the length at that location of the "seconds pendulum", a pendulum with a period of two seconds, so each swing takes one second. It can be seen from equation (1) that for a seconds pendulum, the length is simply proportional to "g": formula_2 Inaccuracy of gravimeter pendulums. In Kater's time, the period "T" of pendulums could be measured very precisely by timing them with precision clocks set by the passage of stars overhead. Prior to Kater's discovery, the accuracy of "g" measurements was limited by the difficulty of measuring the other factor "L", the length of the pendulum, accurately. "L" in equation (1) above was the length of an ideal mathematical 'simple pendulum' consisting of a point mass swinging on the end of a massless cord. However the 'length' of a real pendulum, a swinging rigid body, known in mechanics as a compound pendulum, is more difficult to define. In 1673 Dutch scientist Christiaan Huygens in his mathematical analysis of pendulums, "Horologium Oscillatorium", showed that a real pendulum had the same period as a simple pendulum with a length equal to the distance between the pivot point and a point called the "center of oscillation", which is located under the pendulum's center of gravity and depends on the mass distribution along the length of the pendulum. The problem was there was no way to find the location of the center of oscillation in a real pendulum accurately. It could theoretically be calculated from the shape of the pendulum if the metal parts had uniform density, but the metallurgical quality and mathematical abilities of the time didn't allow the calculation to be made accurately. To get around this problem, most early gravity researchers, such as Jean Picard (1669), Charles Marie de la Condamine (1735), and Jean-Charles de Borda (1792) approximated a simple pendulum by using a metal sphere suspended by a light wire. If the wire had negligible mass, the center of oscillation was close to the center of gravity of the sphere. But even finding the center of gravity of the sphere accurately was difficult. In addition, this type of pendulum inherently wasn't very accurate. The sphere and wire didn't swing back and forth as a rigid unit, because the sphere acquired a slight angular momentum during each swing. Also the wire stretched elastically during the pendulum's swing, changing "L" slightly during the cycle. Kater's solution. However, in "Horologium Oscillatorium", Huygens had also proved that the pivot point and the center of oscillation were interchangeable. That is, if any pendulum is suspended upside down from its center of oscillation, it has the same period of swing, and the new center of oscillation is the old pivot point. The distance between these two conjugate points was equal to the length of a simple pendulum with the same period. As part of a committee appointed by the Royal Society in 1816 to reform British measures, Kater had been contracted by the House of Commons to determine accurately the length of the seconds pendulum in London. He realized Huygens' principle could be used to find the center of oscillation, and so the length "L", of a rigid (compound) pendulum. 
If a pendulum were hung upside down from a second pivot point that could be adjusted up and down on the pendulum's rod, and the second pivot were adjusted until the pendulum had the same period as it did when swinging right side up from the first pivot, the second pivot would be at the center of oscillation, and the distance between the two pivot points would be "L". Kater was not the first to have this idea. French mathematician Gaspard de Prony first proposed a reversible pendulum in 1800, but his work was not published until 1889. In 1811 Friedrich Bohnenberger again discovered it, but Kater independently invented it and was first to put it in practice. The pendulum. Kater built a pendulum consisting of a brass rod about 2 meters long, <templatestyles src="Fraction/styles.css" />1+1⁄2 inches wide and one-eighth inch thick, with a weight "(d)" on one end. For a low friction pivot he used a pair of short triangular 'knife' blades attached to the rod. In use the pendulum was hung from a bracket on the wall, supported by the edges of the knife blades resting on flat agate plates. The pendulum had two of these knife blade pivots "(a)", facing one another, about a meter (40 in) apart, so that a swing of the pendulum took approximately one second when hung from each pivot. Kater found that making one of the pivots adjustable caused inaccuracies, making it hard to keep the axis of both pivots precisely parallel. Instead he permanently attached the knife blades to the rod, and adjusted the periods of the pendulum by a small movable weight "(b,c)" on the pendulum shaft. Since gravity only varies by a maximum of 0.5% over the Earth, and in most locations much less than that, the weight had to be adjusted only slightly. Moving the weight toward one of the pivots decreased the period when hung from that pivot, and increased the period when hung from the other pivot. This also had the advantage that the precision measurement of the separation between the pivots had to be made only once. Experimental procedure. To use, the pendulum was hung from a bracket on a wall, with the knife blade pivots supported on two small horizontal agate plates, in front of a precision pendulum clock to time the period. It was swung first from one pivot, and the oscillations timed, then turned upside down and swung from the other pivot, and the oscillations timed again. The small weight "(b)" was adjusted with the adjusting screw, and the process repeated until the pendulum had the same period when swung from each pivot. By putting the measured period "T", and the measured distance between the pivot blades "L", into the period equation (1), "g" could be calculated very accurately. Kater performed 12 trials. He measured the period of his pendulum very accurately using the clock pendulum by the "method of coincidences"; timing the interval between the "coincidences" when the two pendulums were swinging in synchronism. He measured the distance between the pivot blades with a microscope comparator, to an accuracy of 10−4 in. (2.5 μm). As with other pendulum gravity measurements, he had to apply small corrections to the result for a number of variable factors: He gave his result as the length of the seconds pendulum. After corrections, he found that the mean length of the solar seconds pendulum at London, at sea level, at , swinging in vacuum, was 39.1386 inches. This is equivalent to a gravitational acceleration of 9.81158 m/s2. The largest variation of his results from the mean was . 
This represented a precision of gravity measurement of 0.7×10−5 (7 milligals). In 1824, the British Parliament made Kater's measurement of the seconds pendulum the official backup standard of length for defining the yard if the yard prototype was destroyed. Use. The large increase in gravity measurement accuracy made possible by Kater's pendulum established gravimetry as a regular part of geodesy. To be useful, it was necessary to find the exact location (latitude and longitude) of the 'station' where a gravity measurement was taken, so pendulum measurements became part of surveying. Kater's pendulums were taken on the great historic geodetic surveys of much of the world that were being done during the 19th century. In particular, Kater's pendulums were used in the Great Trigonometric Survey of India. Reversible pendulums remained the standard method used for absolute gravity measurements until they were superseded by free-fall gravimeters in the 1950s. Repsold–Bessel pendulum. Repeatedly timing each period of a Kater pendulum, and adjusting the weights until they were equal, was time-consuming and error-prone. Friedrich Bessel showed in 1826 that this was unnecessary. As long as the periods measured from each pivot, "T"1 and "T"2, are close in value, the period "T" of the equivalent simple pendulum can be calculated from them: formula_3 Here formula_4 and formula_5 are the distances of the two pivots from the pendulum's center of gravity. The distance between the pivots, formula_6, can be measured with great accuracy. formula_4 and formula_5, and thus their difference formula_7, cannot be measured with comparable accuracy. They are found by balancing the pendulum on a knife edge to find its center of gravity, and measuring the distances of each of the pivots from the center of gravity. However, because formula_8 is so much smaller than formula_9, the second term on the right in the above equation is small compared to the first, so formula_7 doesn't have to be determined with high accuracy, and the balancing procedure described above is sufficient to give accurate results. Therefore, the pendulum doesn't have to be adjustable at all, it can simply be a rod with two pivots. As long as each pivot is close to the center of oscillation of the other, so the two periods are close, the period "T" of the equivalent simple pendulum can be calculated with equation (2), and the gravity can be calculated from "T" and "L" with (1). In addition, Bessel showed that if the pendulum was made with a symmetrical shape, but internally weighted on one end, the error caused by effects of air resistance would cancel out. Also, another error caused by the non-zero radius of the pivot knife edges could be made to cancel out by interchanging the knife edges. Bessel didn't construct such a pendulum, but in 1864 Adolf Repsold, under contract to the Swiss Geodetic Commission, developed a symmetric pendulum 56 cm long with interchangeable pivot blades, with a period of about <templatestyles src="Fraction/styles.css" />3⁄4 second. The Repsold pendulum was used extensively by the Swiss and Russian Geodetic agencies, and in the Survey of India. Other widely used pendulums of this design were made by Charles Peirce and C. Defforges. International Association of Geodesy. The 1875 Conference of the European Arc Measurement dealt with the best instrument to be used for the determination of gravity. 
The association decided in favor of the reversion pendulum and it was resolved to redo in Berlin, in the station where Friedrich Wilhelm Bessel made his famous measurements, the determination of gravity by means of devices of various kinds employed in different countries, in order to compare them and thus to have the equation of their scales, after an in-depth discussion in which an American scholar, Charles Sanders Peirce, took part. Indeed, as the figure of the Earth could be inferred from variations of the seconds pendulum length, the United States Coast Survey's direction instructed Charles Sanders Peirce in the spring of 1875 to proceed to Europe for the purpose of making pendulum experiments to chief initial stations for operations of this sort, in order to bring the determinations of the forces of gravity in America into communication with those of other parts of the world; and also for the purpose of making a careful study of the methods of pursuing these researches in the different countries of Europe. The determination of gravity by the reversible pendulum was subject to two types of error. On the one hand the resistance of the air and on the other hand the movements that the oscillations of the pendulum imparted to its plane of suspension. These movements were particularly important with the apparatus designed by the Repsold brothers on the indications of Bessel, because the pendulum had a large mass in order to counteract the effect of the viscosity of the air. While Emile Plantamour was carrying out a series of experiments with this device, Adolph Hirsch found a way to demonstrate the movements of the pendulum's suspension plane by an ingenious process of optical amplification. Isaac-Charles Élisée Cellérier, a mathematician from Geneva and Charles Sanders Peirce would independently develop a correction formula that allowed the use of the observations made with this type of gravimeter. President of the Permanent Commission of the European Arc Measurement from 1874 to 1886, Carlos Ibáñez Ibáñez de Ibero became the first president of the International Geodetic Association (1887–1891) after the death of Johann Jacob Baeyer. Under Ibáñez's presidency, the International Geodetic Association acquired a global dimension with the accession of the United States, Mexico, Chile, Argentina and Japan. As a result of the work of the International Geodetic Association, in 1901, Friedrich Robert Helmert found, mainly by gravimetry, parameters of the ellipsoid remarkably close to reality. References. <templatestyles src="Reflist/styles.css" />
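The Repsold–Bessel relation (2) above lends itself to a similarly small sketch; the numbers below are again purely illustrative (two nearly equal periods and pivots 0.56 m apart), not measurements from any of the instruments described.

```python
import math

def equivalent_period_squared(T1, T2, h1, h2):
    """Bessel's relation (2): squared period of the equivalent simple pendulum,
    with h1, h2 the distances of the two pivots from the center of gravity."""
    return (T1**2 + T2**2) / 2 + (T1**2 - T2**2) / 2 * (h1 + h2) / (h1 - h2)

def g_from_reversible(T1, T2, h1, h2):
    L = h1 + h2                                   # measured distance between the pivots
    return 4 * math.pi**2 * L / equivalent_period_squared(T1, T2, h1, h2)

# Illustrative values only.
print(g_from_reversible(1.5005, 1.4995, 0.33, 0.23))   # roughly 9.8 m/s^2
```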
[ { "math_id": 0, "text": "T = 2 \\pi \\sqrt { \\frac{L}{g} } \\qquad \\qquad \\qquad (1)\\," }, { "math_id": 1, "text": "g=\\frac{8\\pi^2}{\\dfrac{T_1^2+T_2^2}{\\ell_1+\\ell_2}+\\dfrac{T_1^2-T_2^2}{\\ell_1-\\ell_2}}" }, { "math_id": 2, "text": "g = \\pi^2 L \\," }, { "math_id": 3, "text": "T^2 = \\frac{T_1^2 + T_2^2}{2} + \\frac{T_1^2 - T_2^2}{2} \\left ( \\frac {h_1 + h_2}{h_1-h_2} \\right ) \\, \\qquad \\qquad \\qquad (2)" }, { "math_id": 4, "text": "h_1\\," }, { "math_id": 5, "text": "h_2\\," }, { "math_id": 6, "text": "h_1 + h_2\\," }, { "math_id": 7, "text": "h_1 - h_2\\," }, { "math_id": 8, "text": "T_1^2 - T_2^2\\," }, { "math_id": 9, "text": "T_1^2 + T_2^2\\," } ]
https://en.wikipedia.org/wiki?curid=76425
76438868
Michelson–Sivashinsky equation
In combustion, the Michelson–Sivashinsky equation describes the evolution of a premixed flame front, subjected to the Darrieus–Landau instability, in the small heat release approximation. The equation was derived by Gregory Sivashinsky in 1977, who, along with Daniel M. Michelson, presented numerical solutions of the equation in the same year. Let the planar flame front, in a suitable frame of reference, lie in the formula_0-plane; then the evolution of this planar front is described by the amplitude function formula_1 (where formula_2) describing the deviation from the planar shape. The Michelson–Sivashinsky equation reads formula_3 where formula_4 is a constant. Incorporating also the Rayleigh–Taylor instability of the flame, one obtains the Rakib–Sivashinsky equation (named after Z. Rakib and Gregory Sivashinsky), formula_5 where formula_6 denotes the spatial average of formula_7, which is a time-dependent function, and formula_8 is another constant. N-pole solution. The equation, in the absence of gravity, admits an explicit solution, called the N-pole solution, since the equation admits a pole decomposition, as shown by Olivier Thual, Uriel Frisch and Michel Hénon in 1988. Consider the 1d equation formula_9 where formula_10 is the Fourier transform of formula_7. This has a solution of the form formula_11 where formula_12 (which appear in complex conjugate pairs) are poles in the complex plane. In the case of a periodic solution with period formula_13, it is sufficient to consider poles whose real parts lie in the interval between formula_14 and formula_13. In this case, we have formula_15 These poles are interesting because, in physical space, they correspond to the locations of the cusps forming in the flame front. References. <templatestyles src="Reflist/styles.css" />
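The pole decomposition above can be explored numerically. The sketch below integrates the periodic pole equations for a single complex-conjugate pair (N = 1) with forward Euler time stepping; the value of the constant formula_4, the initial pole positions, the step size and the reconstruction grid are all arbitrary choices made for this illustration.

```python
import numpy as np

def pole_rhs(z, nu):
    """Right-hand side of the periodic pole equations:
    dz_n/dt = -nu * sum_{l != n} cot((z_n - z_l)/2) - i * sgn(Im z_n)."""
    dz = np.zeros_like(z)
    for n in range(len(z)):
        others = np.delete(z, n)
        dz[n] = (-nu * np.sum(1.0 / np.tan((z[n] - others) / 2.0))
                 - 1j * np.sign(z[n].imag))
    return dz

nu = 0.1
z = np.array([1.0 + 0.5j, 1.0 - 0.5j])      # one conjugate pole pair (N = 1)
dt, steps = 1e-3, 5000
for _ in range(steps):
    z = z + dt * pole_rhs(z, nu)             # forward Euler time stepping

# Reconstruct u(x) on [0, 2*pi) from the final pole positions.
x = np.linspace(0.0, 2.0 * np.pi, 400)
u = -nu * np.sum(1.0 / np.tan((x[:, None] - z[None, :]) / 2.0), axis=1).real
print(z)   # the pair settles at a finite distance from the real axis
```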
[ { "math_id": 0, "text": "xy" }, { "math_id": 1, "text": "u(\\mathbf x,t)" }, { "math_id": 2, "text": "\\mathbf x=(x,y)" }, { "math_id": 3, "text": "\\frac{\\partial u}{\\partial t} + \\frac{1}{2}(\\nabla u)^2 - \\nu \\nabla^2 u - \\frac{1}{8\\pi^2} \\int |\\mathbf k| e^{i\\mathbf k\\cdot(\\mathbf x-\\mathbf x')}u (\\mathbf x,t) d\\mathbf kd\\mathbf x'=0," }, { "math_id": 4, "text": "\\nu" }, { "math_id": 5, "text": "\\frac{\\partial u}{\\partial t} + \\frac{1}{2}(\\nabla u)^2 - \\nu \\nabla^2 u - \\frac{1}{8\\pi^2} \\int |\\mathbf k| e^{i\\mathbf k\\cdot(\\mathbf x-\\mathbf x')}u (\\mathbf x,t) d\\mathbf kd\\mathbf x' + \\gamma \\left(u - \\langle \n u \\rangle \\right)=0, \\quad " }, { "math_id": 6, "text": "\\langle u \\rangle(t)" }, { "math_id": 7, "text": "u" }, { "math_id": 8, "text": "\\gamma" }, { "math_id": 9, "text": "u_t + u u_x - \\nu u_{xx} = \\int_{-\\infty}^{+\\infty} e^{ikx} \\hat u(k,t) dk," }, { "math_id": 10, "text": "\\hat u" }, { "math_id": 11, "text": "\\begin{align}u(x,t) &= -2\\nu \\sum_{n=1}^{2N} \\frac{1}{x-z_n(t)}, \\\\ \n\\frac{dz_n}{dt} &= -2\\nu \\sum_{l=1,l\\neq n}^{2N} \\frac{1}{z_n-z_l} - i \\mathrm{sgn}(\\mathrm{Im} z_n),\n\\end{align}" }, { "math_id": 12, "text": "z_n(t)" }, { "math_id": 13, "text": "2\\pi" }, { "math_id": 14, "text": "0" }, { "math_id": 15, "text": "\\begin{align}\nu(x,t) &= -\\nu \\sum_{n=1}^{2\\pi} \\cot\\frac{x-z_n(t)}{2} , \\\\\n \\frac{dz_n}{dt} &= -\\nu \\sum_{l\\neq n} \\cot\\frac{z_n-z_l}{2} - i \\mathrm{sgn}(\\mathrm{Im} z_n)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=76438868
764405
Turing degree
Measure of unsolvability In computer science and mathematical logic the Turing degree (named after Alan Turing) or degree of unsolvability of a set of natural numbers measures the level of algorithmic unsolvability of the set. Overview. The concept of Turing degree is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems. The Turing degree of a set is a measure of how difficult it is to solve the decision problem associated with the set, that is, to determine whether an arbitrary number is in the given set. Two sets are Turing equivalent if they have the same level of unsolvability; each Turing degree is a collection of Turing equivalent sets, so that two sets are in different Turing degrees exactly when they are not Turing equivalent. Furthermore, the Turing degrees are partially ordered, so that if the Turing degree of a set "X" is less than the Turing degree of a set "Y", then any (possibly noncomputable) procedure that correctly decides whether numbers are in "Y" can be effectively converted to a procedure that correctly decides whether numbers are in "X". It is in this sense that the Turing degree of a set corresponds to its level of algorithmic unsolvability. The Turing degrees were introduced by Emil Post (1944) and many fundamental results were established by Stephen Cole Kleene and Post (1954). The Turing degrees have been an area of intense research since then. Many proofs in the area make use of a proof technique known as the priority method. Turing equivalence. For the rest of this article, the word "set" will refer to a set of natural numbers. A set "X" is said to be Turing reducible to a set "Y" if there is an oracle Turing machine that decides membership in "X" when given an oracle for membership in "Y". The notation "X" ≤T "Y" indicates that "X" is Turing reducible to "Y". Two sets "X" and "Y" are defined to be Turing equivalent if "X" is Turing reducible to "Y" and "Y" is Turing reducible to "X". The notation "X" ≡T "Y" indicates that "X" and "Y" are Turing equivalent. The relation ≡T can be seen to be an equivalence relation, which means that for all sets "X", "Y", and "Z": "X" ≡T "X"; if "X" ≡T "Y" then "Y" ≡T "X"; and if "X" ≡T "Y" and "Y" ≡T "Z" then "X" ≡T "Z". A Turing degree is an equivalence class of the relation ≡T. The notation ["X"] denotes the equivalence class containing a set "X". The entire collection of Turing degrees is denoted formula_0. The Turing degrees have a partial order ≤ defined so that ["X"] ≤ ["Y"] if and only if "X" ≤T "Y". There is a unique Turing degree containing all the computable sets, and this degree is less than every other degree. It is denoted 0 (zero) because it is the least element of the poset formula_0. (It is common to use boldface notation for Turing degrees, in order to distinguish them from sets. When no confusion can occur, such as with ["X"], the boldface is not necessary.) For any sets "X" and "Y", X join Y, written "X" ⊕ "Y", is defined to be the union of the sets {2"n" : "n" ∈ "X"} and {2"m"+1 : "m" ∈ "Y"}. The Turing degree of "X" ⊕ "Y" is the least upper bound of the degrees of "X" and "Y". Thus formula_0 is a join-semilattice. The least upper bound of degrees a and b is denoted a ∪ b. It is known that formula_0 is not a lattice, as there are pairs of degrees with no greatest lower bound. For any set "X" the notation "X"′ denotes the set of indices of oracle machines that halt (when given their index as input) when using "X" as an oracle. The set "X"′ is called the Turing jump of "X".
The Turing jump of a degree ["X"] is defined to be the degree ["X"′]; this is a valid definition because "X"′ ≡T "Y"′ whenever "X" ≡T "Y". A key example is 0′, the degree of the halting problem. Structure of the Turing degrees. A great deal of research has been conducted into the structure of the Turing degrees. The following survey lists only some of the many known results. One general conclusion that can be drawn from the research is that the structure of the Turing degrees is extremely complicated. Recursively enumerable Turing degrees. A degree is called "recursively enumerable" (r.e.) or "computably enumerable" (c.e.) if it contains a recursively enumerable set. Every r.e. degree is below 0′, but not every degree below 0′ is r.e.. However, a set formula_4 is many-one reducible to 0′ iff formula_4 is r.e.. Additionally, there is Shoenfield's limit lemma, a set A satisfies formula_5 iff there is a "recursive approximation" to its characteristic function: a function "g" such that for sufficiently large "s", formula_6. A set "A" is called "n"-r e. if there is a family of functions formula_7 such that: Properties of "n"-r.e. degrees: Post's problem and the priority method. Emil Post studied the r.e. Turing degrees and asked whether there is any r.e. degree strictly between 0 and 0′. The problem of constructing such a degree (or showing that none exist) became known as Post's problem. This problem was solved independently by Friedberg and Muchnik in the 1950s, who showed that these intermediate r.e. degrees do exist (Friedberg–Muchnik theorem). Their proofs each developed the same new method for constructing r.e. degrees, which came to be known as the priority method. The priority method is now the main technique for establishing results about r.e. sets. The idea of the priority method for constructing a r.e. set "X" is to list a countable sequence of "requirements" that "X" must satisfy. For example, to construct a r.e. set "X" between 0 and 0′ it is enough to satisfy the requirements "Ae" and "Be" for each natural number "e", where "Ae" requires that the oracle machine with index "e" does not compute 0′ from "X" and "Be" requires that the Turing machine with index "e" (and no oracle) does not compute "X". These requirements are put into a "priority ordering", which is an explicit bijection of the requirements and the natural numbers. The proof proceeds inductively with one stage for each natural number; these stages can be thought of as steps of time during which the set "X" is enumerated. At each stage, numbers may be put into "X" or forever (if not injured) prevented from entering "X" in an attempt to "satisfy" requirements (that is, force them to hold once all of "X" has been enumerated). Sometimes, a number can be enumerated into "X" to satisfy one requirement but doing this would cause a previously satisfied requirement to become unsatisfied (that is, to be "injured"). The priority order on requirements is used to determine which requirement to satisfy in this case. The informal idea is that if a requirement is injured then it will eventually stop being injured after all higher priority requirements have stopped being injured, although not every priority argument has this property. An argument must be made that the overall set "X" is r.e. and satisfies all the requirements. Priority arguments can be used to prove many facts about r.e. sets; the requirements used and the manner in which they are satisfied must be carefully chosen to produce the required result. 
For example, a simple (and hence noncomputable r.e.) low "X" (low means "X"′=0′) can be constructed in infinitely many stages as follows. At the start of stage "n", let "T""n" be the output (binary) tape, identified with the set of cell indices where we placed 1 so far (so "X"=∪"n" "T""n"; "T"0=∅); and let "P""n"("m") be the priority for not outputting 1 at location "m"; "P"0("m")=∞. At stage "n", if possible (otherwise do nothing in the stage), pick the least "i"<"n" such that ∀"m" "P""n"("m")≠"i" and Turing machine "i" halts in <"n" steps on some input "S"⊇"T""n" with ∀"m"∈"S"\"T""n" "P""n"("m")≥"i". Choose any such (finite) "S", set "T""n"+1="S", and for every cell "m" visited by machine "i" on "S", set "P""n"+1("m") = min("i", "P""n"("m")), and set all priorities >"i" to ∞, and then set one priority ∞ cell (any will do) not in "S" to priority "i". Essentially, we make machine "i" halt if we can do so without upsetting priorities <"i", and then set priorities to prevent machines >"i" from disrupting the halt; all priorities are eventually constant. To see that "X" is low, machine "i" halts on "X" iff it halts in <"n" steps on some "T""n" such that machines <"i" that halt on "X" do so <"n"-"i" steps (by recursion, this is uniformly computable from 0′). "X" is noncomputable since otherwise a Turing machine could halt on "Y" iff "Y"\"X" is nonempty, contradicting the construction since "X" excludes some priority "i" cells for arbitrarily large "i"; and "X" is simple because for each "i" the number of priority "i" cells is finite. Notes. <templatestyles src="Reflist/styles.css" />
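The join operation defined earlier is easy to illustrate on finite sets standing in for the (generally infinite) sets of the article; the sketch below only demonstrates the interleaving and the fact that each component is trivially recovered from the join.

```python
def join(X, Y):
    """X join Y: interleave X on the even numbers and Y on the odd numbers."""
    return {2 * n for n in X} | {2 * m + 1 for m in Y}

A, B = {0, 2, 3}, {1, 4}
C = join(A, B)                      # {0, 3, 4, 6, 9}
# Each component is recovered from the join by looking at one parity,
# which is why both A and B are Turing reducible to A ⊕ B.
assert {n // 2 for n in C if n % 2 == 0} == A
assert {n // 2 for n in C if n % 2 == 1} == B
```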
[ { "math_id": 0, "text": "\\mathcal{D}" }, { "math_id": 1, "text": "\\aleph_0" }, { "math_id": 2, "text": "2^{\\aleph_0}" }, { "math_id": 3, "text": "\\omega_1" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "[A]\\leq_T \\emptyset'" }, { "math_id": 6, "text": "g(s)=\\chi_A(s)" }, { "math_id": 7, "text": "(A_s)_{s\\in\\mathbb N}" }, { "math_id": 8, "text": "\\{s\\mid A_s(x)\\neq A_{s+1}(x)\\}" }, { "math_id": 9, "text": "\\mathbf a\\leq_T\\mathbf b" }, { "math_id": 10, "text": "\\{\\mathbf c\\mid\\mathbf a\\leq_T\\mathbf c\\leq_T\\mathbf b\\}" }, { "math_id": 11, "text": "\\overline A" } ]
https://en.wikipedia.org/wiki?curid=764405
76442
S. P. L. Sørensen
Danish chemist (1868–1939) Søren Peter Lauritz Sørensen (9 January 1868 – 12 February 1939) was a Danish chemist, known for the introduction of the concept of pH, a scale for measuring acidity and alkalinity. Personal life. Sørensen was born in Havrebjerg, Denmark in 1868 as the son of a farmer. He began his studies at the University of Copenhagen at the age of 18. He wanted to make a career in medicine, but under the influence of chemist Sophus Mads Jørgensen decided to change to chemistry. While studying for his doctorate he worked as an assistant in chemistry at the laboratory of the Technical University of Denmark, assisted in a geological survey of Denmark, and also worked as a consultant for the Royal Navy Dockyard. Sørensen was married twice. His second wife was Margrethe Høyrup Sørensen, who collaborated with him in his studies. Work. From 1901 to 1938, Sørensen was head of the prestigious Carlsberg Laboratory, Copenhagen. While working at the Carlsberg Laboratory he studied the effect of ion concentration on proteins and, because the concentration of hydrogen ions was particularly important, he introduced the pH-scale as a simple way of expressing it in 1909. The article in which he introduced the scale (using the notation formula_0) was published in French and Danish as well as in German, and described two methods for measuring acidity which Sørensen and his students had refined. The first method was based on electrodes, whereas the second involved comparing the colours of samples and a preselected set of indicators. (Sørensen, 1909). From p. 134: "Die Größe der Wasserstoffionenkonzentration … und die Bezeichnung formula_0 für den numerischen Wert des Exponent dieser Potenz benütze." (The magnitude of the hydrogen ion concentration is accordingly expressed by the normality factor of the solution concerned, based on the hydrogen ions, and this factor is written in the form of a negative power of 10. By the way, as I refer [to it] in a following section (see p. 159), I just want to point out here that I use the name "hydrogen ion exponent" and the notation formula_0 for the numerical value of the exponent of this power.) From pp. 159–160: ""Für die Zahl "p" schlage ich den Namen "Wasserstoffionenexponent" … Normalitätsfaktors der Lösung verstanden."" (For the number "p" I suggest the name "hydrogen ion exponent" and the notation formula_0. By the hydrogen ion exponent (formula_0) of a solution is thus understood the Briggsian logarithm of the reciprocal value of the normality factor of the solution, based on the hydrogen ions, and this factor is written in the form of a negative power of 10). Starting on p. 139, "4. Meßmethoden zur Bestimmung der Wasserstoffionenkonzentration." (4. Methods of measurement for the determination of hydrogen ion concentration.), Sørensen reviewed a series of methods for measuring hydrogen ion concentration. He rejected all of them except two. From p. 144: "Es gibt noch zwei Verfahrungsweisen, … bzw. die colorimetrische Methode genannt." (There are still two procedures by which the hydrogen or hydroxyl ion concentration of a solution can be determined; namely, gas chain measurement and determination by means of indicators, also called the electrometric or colorimetric method.) On pp. 145–146, Sørensen outlined the electrometric and colorimetric methods: From p. 145: "Die elektrometrische Methode. Wird eine mit Platin-schwarz bedeckte Platinplatte in eine wäßerige … von der Wasserstoffionenkonzentration der Lösung abhängt.)" (The electrometric method.
If a platinum plate that's covered with platinum black is dipped into an aqueous – acidic, neutral, or alkaline – solution and if the solution is saturated with hydrogen, then one finds, between the platinum plate and the solution, a voltage difference whose magnitude depends on the hydrogen ion concentration of the solution according to a law. From pp. 145: "Die colorimetrische Methode. Der Umschlag des Indicators bei einer gewöhnlichen Titrierung bedeutet ja, wie bekannt, daß die Konzentration der Wasserstoffionen der vorliegenden Lösung eine gewisse Größe von der einen oder der anderen Seite her erreicht oder überschritten hat." (The colorimetric method. The sudden change of the indicator during a typical titration means, as is known, that the concentration of hydrogen ions in the solution at hand has reached or exceeded – from one direction or the other – a certain magnitude.) p. 146: "Die Grundlage ist seit langer Zeit bekannt, … eine vollständige Reihe Indikatoren mit Umschlagspunkten bei den verschiedensten Ionenkonzentrationen zusammenzustellen." (The basis [of the colorimetric method] has been known for a long time, but the scattered material was first struggled through and perfected at certain points by the beautiful investigations of Hans Friedenthal [1870-1942] and Eduard Salm, so that it became possible for them to assemble a complete series of indicators with transition points at the most varied ion concentrations.) On pp. 150ff, the electrometric method is detailed; and on pp. 201ff, the colorimetric method is detailed. References. <templatestyles src="Reflist/styles.css" /> Notes. <templatestyles src="Reflist/styles.css" />
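Sørensen's definition quoted above (the hydrogen ion exponent as the Briggsian logarithm of the reciprocal of the hydrogen ion concentration) amounts to the familiar formula pH = log10(1/[H+]); a one-line sketch:

```python
import math

def hydrogen_ion_exponent(h_concentration: float) -> float:
    """pH as Soerensen defined it: the Briggsian (base-10) logarithm of the
    reciprocal of the hydrogen ion concentration (normality factor)."""
    return math.log10(1.0 / h_concentration)

print(hydrogen_ion_exponent(1e-7))   # ~7, e.g. neutral water at 25 C
```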
[ { "math_id": 0, "text": "p_\\mathrm{H}" } ]
https://en.wikipedia.org/wiki?curid=76442
764433
Lusternik–Schnirelmann category
In mathematics, the Lyusternik–Schnirelmann category (or Lusternik–Schnirelmann category, LS-category) of a topological space formula_0 is the homotopy invariant defined to be the smallest integer formula_1 such that there is an open covering formula_2 of formula_0 with the property that each inclusion map formula_3 is nullhomotopic. For example, if formula_0 is a sphere, this takes the value two. Sometimes a different normalization of the invariant is adopted, which is one less than the definition above. Such a normalization has been adopted in the definitive monograph by Cornea, Lupton, Oprea, and Tanré (see below). In general it is not easy to compute this invariant, which was initially introduced by Lazar Lyusternik and Lev Schnirelmann in connection with variational problems. It has a close connection with algebraic topology, in particular cup-length. In the modern normalization, the cup-length is a lower bound for the LS-category. It was, as originally defined for the case of formula_0 a manifold, a lower bound for the number of critical points that a real-valued function on formula_0 could possess (this should be compared with the result in Morse theory that shows that the sum of the Betti numbers is a lower bound for the number of critical points of a Morse function).
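As a standard worked example of the definition and of the cup-length bound just mentioned (the facts are classical; only the presentation here is ours), consider the n-dimensional torus:

```latex
% LS-category of the n-torus, in the normalization used in this article
% (smallest number of open sets, so that a sphere has category two):
\[
  \operatorname{cat}(T^n) = n + 1 .
\]
% Lower bound: the degree-one classes x_1,\dots,x_n in H^*(T^n) have a
% nonzero cup product x_1 \smile \cdots \smile x_n, so the cup-length is n;
% in the normalization above this forces cat(T^n) >= n + 1 (equivalently,
% the modern, reduced invariant is at least the cup-length n).
% Upper bound: T^n admits an explicit covering by n + 1 open sets, each of
% which is contractible within T^n.
```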
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "\\{U_i\\}_{1\\leq i\\leq k}" }, { "math_id": 3, "text": "U_i\\hookrightarrow X" } ]
https://en.wikipedia.org/wiki?curid=764433
764468
Post's theorem
Theorem in computability theory In computability theory Post's theorem, named after Emil Post, describes the connection between the arithmetical hierarchy and the Turing degrees. Background. The statement of Post's theorem uses several concepts relating to definability and recursion theory. This section gives a brief overview of these concepts, which are covered in depth in their respective articles. The arithmetical hierarchy classifies certain sets of natural numbers that are definable in the language of Peano arithmetic. A formula is said to be formula_0 if it is an existential statement in prenex normal form (all quantifiers at the front) with formula_1 alternations between existential and universal quantifiers applied to a formula with bounded quantifiers only. Formally a formula formula_2 in the language of Peano arithmetic is a formula_0 formula if it is of the form formula_3 where formula_4 contains only bounded quantifiers and "Q" is formula_5 if "m" is even and formula_6 if "m" is odd. A set of natural numbers formula_7 is said to be formula_8 if it is definable by a formula_8 formula, that is, if there is a formula_8 formula formula_2 such that each number formula_9 is in formula_7 if and only if formula_10 holds. It is known that if a set is formula_8 then it is formula_11 for any formula_12, but for each "m" there is a formula_13 set that is not formula_8. Thus the number of quantifier alternations required to define a set gives a measure of the complexity of the set. Post's theorem uses the relativized arithmetical hierarchy as well as the unrelativized hierarchy just defined. A set formula_7 of natural numbers is said to be formula_8 relative to a set formula_14, written formula_15, if formula_7 is definable by a formula_8 formula in an extended language that includes a predicate for membership in formula_14. While the arithmetical hierarchy measures definability of sets of natural numbers, Turing degrees measure the level of uncomputability of sets of natural numbers. A set formula_7 is said to be Turing reducible to a set formula_14, written formula_16, if there is an oracle Turing machine that, given an oracle for formula_14, computes the characteristic function of formula_7. The Turing jump of a set formula_7 is a form of the Halting problem relative to formula_7. Given a set formula_7, the Turing jump formula_17 is the set of indices of oracle Turing machines that halt on input formula_18 when run with oracle formula_7. It is known that every set formula_7 is Turing reducible to its Turing jump, but the Turing jump of a set is never Turing reducible to the original set. Post's theorem uses finitely iterated Turing jumps. For any set formula_7 of natural numbers, the notation formula_19 indicates the formula_9–fold iterated Turing jump of formula_7. Thus formula_20 is just formula_7, and formula_21 is the Turing jump of formula_19. Post's theorem and corollaries. Post's theorem establishes a close connection between the arithmetical hierarchy and the Turing degrees of the form formula_22, that is, finitely iterated Turing jumps of the empty set. (The empty set could be replaced with any other computable set without changing the truth of the theorem.) Post's theorem states: Post's theorem has many corollaries that expose additional relationships between the arithmetical hierarchy and the Turing degrees. These include: Proof of Post's theorem. Formalization of Turing machines in first-order arithmetic. 
The operation of a Turing machine formula_34 on input formula_9 can be formalized logically in first-order arithmetic. For example, we may use symbols formula_35, formula_36, and formula_37 for the tape configuration, machine state and location along the tape after formula_38 steps, respectively. formula_34's transition system determines the relation between formula_39 and formula_40; their initial values (for formula_41) are the input, the initial state and zero, respectively. The machine halts if and only if there is a number formula_38 such that formula_36 is the halting state. The exact relation depends on the specific implementation of the notion of Turing machine (e.g. their alphabet, allowed mode of motion along the tape, etc.) In case formula_34 halts at time formula_42, the relation between formula_39 and formula_40 must be satisfied only for k bounded from above by formula_42. Thus there is a formula formula_43 in first-order arithmetic with no unbounded quantifiers, such that formula_34 halts on input formula_9 at time formula_42 at most if and only if formula_43 is satisfied. Implementation example. For example, for a prefix-free Turing machine with binary alphabet and no blank symbol, we may use the following notations: For a prefix-free Turing machine we may use, for input n, the initial tape configuration formula_49 where cat stands for concatenation; thus formula_50 is a formula_51length string of formula_52 followed by formula_18 and then by formula_9. The operation of the Turing machine at the first formula_42 steps can thus be written as the conjunction of the initial conditions and the following formulas, quantified over formula_38 for all formula_53: T halts on input formula_9 at time formula_42 at most if and only if formula_43 is satisfied, where: formula_57 This is a first-order arithmetic formula with no unbounded quantifiers, i.e. it is in formula_58. Recursively enumerable sets. Let formula_59 be a set that can be recursively enumerated by a Turing machine. Then there is a Turing machine formula_34 that for every formula_9 in formula_59, formula_34 halts when given formula_9 as an input. This can be formalized by the first-order arithmetical formula presented above. The members of formula_59 are the numbers formula_9 satisfying the following formula: formula_60 This formula is in formula_61. Therefore, formula_59 is in formula_61. Thus every recursively enumerable set is in formula_61. The converse is true as well: for every formula formula_62 in formula_61 with k existential quantifiers, we may enumerate the formula_38–tuples of natural numbers and run a Turing machine that goes through all of them until it finds the formula is satisfied. This Turing machine halts on precisely the set of natural numbers satisfying formula_62, and thus enumerates its corresponding set. Oracle machines. Similarly, the operation of an oracle machine formula_34 with an oracle O that halts after at most formula_42 steps on input formula_9 can be described by a first-order formula formula_63, except that the formula formula_64 now includes: If the oracle is for a decision problem, formula_65 is always "Yes" or "No", which we may formalize as 0 or 1. Suppose the decision problem itself can be formalized by a first-order arithmetic formula formula_67. Then formula_34 halts on formula_9 after at most formula_42 steps if and only if the following formula is satisfied: formula_68 where formula_69 is a first-order formula with no unbounded quantifiers. Turing jump. 
If O is an oracle to the halting problem of a machine formula_70, then formula_67 is the same as "there exists formula_71 such that formula_70 starting with input m is at the halting state after formula_71 steps". Thus: formula_72 where formula_73 is a first-order formula that formalizes formula_70. If formula_70 is a Turing machine (with no oracle), formula_73 is in formula_74 (i.e. it has no unbounded quantifiers). Since there is a finite number of numbers m satisfying formula_66, we may choose the same number of steps for all of them: there is a number formula_71, such that formula_70 halts after formula_71 steps precisely on those inputs formula_66 for which it halts at all. Moving to prenex normal form, we get that the oracle machine halts on input formula_9 if and only if the following formula is satisfied: formula_75 (informally, there is a "maximal number of steps" formula_71 such that every oracle that does not halt within the first formula_71 steps does not stop at all; however, for every formula_76, each oracle that halts after formula_76 steps does halt). Note that we may replace both formula_42 and formula_71 by a single number - their maximum - without changing the truth value of formula_62. Thus we may write: formula_77 For the oracle to the halting problem over Turing machines, formula_73 is in formula_78 and formula_62 is in formula_79. Thus every set that is recursively enumerable by an oracle machine with an oracle for formula_80, is in formula_79. The converse is true as well: Suppose formula_62 is a formula in formula_79 with formula_81 existential quantifiers followed by formula_82 universal quantifiers. Equivalently, formula_62 has formula_81 existential quantifiers followed by a negation of a formula in formula_61; the latter formula can be enumerated by a Turing machine and can thus be checked immediately by an oracle for formula_80. We may thus enumerate the formula_81–tuples of natural numbers and run an oracle machine with an oracle for formula_80 that goes through all of them until it finds a tuple that satisfies the formula. This oracle machine halts on precisely the set of natural numbers satisfying formula_62, and thus enumerates its corresponding set. Higher Turing jumps. More generally, suppose every set that is recursively enumerable by an oracle machine with an oracle for formula_83 is in formula_84. Then for an oracle machine with an oracle for formula_85, formula_72 is in formula_84. Since formula_67 is the same as formula_62 for the previous Turing jump, it can be constructed (as we have just done with formula_62 above) so that formula_73 is in formula_86. After moving to prenex normal form the new formula_62 is in formula_87. By induction, every set that is recursively enumerable by an oracle machine with an oracle for formula_83, is in formula_84. The other direction can be proven by induction as well: Suppose every formula in formula_84 can be enumerated by an oracle machine with an oracle for formula_83. Now suppose formula_62 is a formula in formula_87 with formula_81 existential quantifiers followed by formula_82 universal quantifiers etc. Equivalently, formula_62 has formula_81 existential quantifiers followed by a negation of a formula in formula_84; the latter formula can be enumerated by an oracle machine with an oracle for formula_83 and can thus be checked immediately by an oracle for formula_85.
We may thus enumerate the formula_81–tuples of natural numbers and run an oracle machine with an oracle for formula_85 that goes through all of them until it finds a tuple that satisfies the formula. This oracle machine halts on precisely the set of natural numbers satisfying formula_62, and thus enumerates its corresponding set. References. Rogers, H., "The Theory of Recursive Functions and Effective Computability", MIT Press. Soare, R., "Recursively enumerable sets and degrees", Perspectives in Mathematical Logic, Springer-Verlag, Berlin, 1987.
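For the implementation example above, the prefix-free initial tape configuration t(n) can be sketched directly; the bit conventions and the edge case n = 1 are assumptions of this sketch.

```python
def t(n: int) -> str:
    """t(n) = cat(2**ceil(log2 n) - 1, 0, n): a block of ceil(log2 n) ones,
    a separating zero, then n written in binary."""
    k = (n - 1).bit_length()          # equals ceil(log2 n) for n >= 1
    return '1' * k + '0' + format(n, 'b')

print(t(13))   # '1111' + '0' + '1101'  ->  '111101101'
```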
[ { "math_id": 0, "text": "\\Sigma^{0}_m" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "\\phi(s)" }, { "math_id": 3, "text": "\\left(\\exists n^1_1\\exists n^1_2\\cdots\\exists n^1_{j_1}\\right)\\left(\\forall n^2_1 \\cdots \\forall n^2_{j_2}\\right)\\left(\\exists n^3_1\\cdots\\right)\\cdots\\left(Q n^m_1 \\cdots \\right)\\rho(n^1_1,\\ldots n^m_{j_m},x_1,\\ldots,x_k)" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "\\forall" }, { "math_id": 6, "text": "\\exists" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "\\Sigma^0_m" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "\\phi(n)" }, { "math_id": 11, "text": "\\Sigma^0_n" }, { "math_id": 12, "text": "n > m" }, { "math_id": 13, "text": "\\Sigma^0_{m+1}" }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": "\\Sigma^{0,B}_m" }, { "math_id": 16, "text": "A \\leq_T B" }, { "math_id": 17, "text": "A'" }, { "math_id": 18, "text": "0" }, { "math_id": 19, "text": "A^{(n)}" }, { "math_id": 20, "text": "A^{(0)}" }, { "math_id": 21, "text": "A^{(n+1)}" }, { "math_id": 22, "text": "\\emptyset^{(n)}" }, { "math_id": 23, "text": "\\Sigma^0_{n+1}" }, { "math_id": 24, "text": "\\Sigma^{0,\\emptyset^{(n)}}_1" }, { "math_id": 25, "text": "n > 0" }, { "math_id": 26, "text": "C" }, { "math_id": 27, "text": "\\Sigma^{0,C}_{n+1}" }, { "math_id": 28, "text": "\\Sigma^{0,C^{(n)}}_1" }, { "math_id": 29, "text": "\\Delta_{n+1}" }, { "math_id": 30, "text": "B \\leq_T \\emptyset^{(n)}" }, { "math_id": 31, "text": "\\Delta^C_{n+1}" }, { "math_id": 32, "text": "B \\leq_T C^{(n)}" }, { "math_id": 33, "text": "\\emptyset^{(m)}" }, { "math_id": 34, "text": "T" }, { "math_id": 35, "text": "A_k" }, { "math_id": 36, "text": "B_k" }, { "math_id": 37, "text": "C_k" }, { "math_id": 38, "text": "k" }, { "math_id": 39, "text": "(A_k,B_k,C_k)" }, { "math_id": 40, "text": "(A_{k+1},B_{k+1},C_{k+1})" }, { "math_id": 41, "text": "k=0" }, { "math_id": 42, "text": "n_1" }, { "math_id": 43, "text": "\\varphi(n,n_1)" }, { "math_id": 44, "text": "A_0" }, { "math_id": 45, "text": "B_0=q_I" }, { "math_id": 46, "text": "C_0=0" }, { "math_id": 47, "text": "M(q,b)" }, { "math_id": 48, "text": "bit(j,m)" }, { "math_id": 49, "text": "t(n)= cat(2^{ceil(log_2 n)}-1,0,n)" }, { "math_id": 50, "text": "t(n)" }, { "math_id": 51, "text": "\\log(n)-" }, { "math_id": 52, "text": "1-s" }, { "math_id": 53, "text": "k<n_1" }, { "math_id": 54, "text": "(B_{k+1}, bit(C_k ,A_{k+1}), D) = M(B_k, bit(C_k ,A_k))" }, { "math_id": 55, "text": "C_{k+1} = C_k+D" }, { "math_id": 56, "text": "\\forall j: j\\ne C_k \\rightarrow bit(j ,A_{k+1}) = bit(j ,A_k)" }, { "math_id": 57, "text": "\\begin{align}\\varphi(n,n_1) =& (A_0=t(n)) \\land (B_0=q_I) \\land (C_0=0) \\land (B_{n_1}=q_H)\\\\ &\\land \\forall k<n_1: ((B_{k+1},bit(C_k,A_{k+1}),1) = M(B_k,bit(C_k,A_k)) \\land C_{k+1}=C_k+1)&\\\\ &\\lor ((B_{k+1},bit(C_k ,A_{k+1}),-1) = M(B_k,bit(C_k,A_k)) \\land C_{k+1}=C_k-1))& \\\\&\\land \\forall j<n_1+1: j\\ne C_k \\rightarrow (bit(j,A_{k+1})=bit(j,A_k)) & \\end{align}" }, { "math_id": 58, "text": "\\Sigma^0_0" }, { "math_id": 59, "text": "S" }, { "math_id": 60, "text": "\\exists n_1:\\varphi(n,n_1)" }, { "math_id": 61, "text": "\\Sigma^0_1" }, { "math_id": 62, "text": "\\varphi(n)" }, { "math_id": 63, "text": "\\varphi_O(n,n_1)" }, { "math_id": 64, "text": "\\varphi_1(n,n_1)" }, { "math_id": 65, "text": "O_m" }, { "math_id": 66, "text": "m<2^{n_1}" }, { "math_id": 67, "text": "\\psi^O(m)" }, { "math_id": 68, "text": "\\varphi_O(n,n_1) =\\forall 
m<2^{n_1}:((\\psi^O(m)\\rightarrow (O_m=1)) \\land(\\lnot\\psi^O(m)\\rightarrow (O_m=0))) \\land {\\varphi_O}_1(n,n_1)" }, { "math_id": 69, "text": "{\\varphi_O}_1(n,n_1)" }, { "math_id": 70, "text": "T'" }, { "math_id": 71, "text": "m_1" }, { "math_id": 72, "text": "\\psi^O(m) = \\exists m_1: \\psi_H(m,m_1)" }, { "math_id": 73, "text": "\\psi_H(m,m_1)" }, { "math_id": 74, "text": "\\Sigma^0_0 = \\Pi^0_0" }, { "math_id": 75, "text": "\\varphi(n) =\\exists n_1\\exists m_1 \\forall m_2 :(\\psi_H(m,m_2)\\rightarrow (O_m=1)) \\land(\\lnot\\psi_H(m,m_1)\\rightarrow (O_m=0))) \\land {\\varphi_O}_1(n,n_1)" }, { "math_id": 76, "text": "m_2" }, { "math_id": 77, "text": "\\varphi(n) =\\exists n_1 \\forall m_2 :(\\psi_H(m,m_2)\\rightarrow (O_m=1)) \\land(\\lnot\\psi_H(m,n_1)\\rightarrow (O_m=0))) \\land {\\varphi_O}_1(n,n_1)" }, { "math_id": 78, "text": "\\Pi^0_0" }, { "math_id": 79, "text": "\\Sigma^0_2" }, { "math_id": 80, "text": "\\emptyset ^{(1)}" }, { "math_id": 81, "text": "k_1" }, { "math_id": 82, "text": "k_2" }, { "math_id": 83, "text": "\\emptyset ^{(p)}" }, { "math_id": 84, "text": "\\Sigma^0_{p+1}" }, { "math_id": 85, "text": "\\emptyset ^{(p+1)}" }, { "math_id": 86, "text": "\\Pi^0_p" }, { "math_id": 87, "text": "\\Sigma^0_{p+2}" } ]
https://en.wikipedia.org/wiki?curid=764468
76459550
Tatra marmot
Subspecies of a rodent &lt;templatestyles src="Template:Taxobox/core/styles.css" /&gt; The Tatra marmot (Marmota marmota latirostris) is an endemic subspecies of marmot found in the Tatra Mountains. In the past, it was a game animal, but in the 19th century, its population drastically declined. It is a herbivore active in the summer, living in territorial family clans in the mountains from the upper montane to the alpine zone. It is one of the rarest vertebrates in Poland and is subject to strict legal protection. It is also legally protected in Slovakia. The Red List of Threatened Animals in Poland and the Polish Red Book of Animals classify the Tatra marmot as a strongly endangered subspecies (EN), while the Red List for the Carpathians in Poland designates it as "CR" – critically endangered. It is a relatively poorly researched animal. History of discovery and research. The wider recognition of the marmot in Poland was influenced by the slow progress of settlement in the Tatra region, dating back to the privilege granted by Bolesław V the Chaste in 1255 to the Cistercian Abbey of Szczyrzyc: "We also grant to the abbot: free hunting, all in the surrounding forests up to the mountains called Tatras". Historical accounts mention that over time, a group of hunters specializing in marmot hunting emerged, known as "whistlers". Descriptions of Tatra marmots were very modest and mostly related to their hunting value. Hungarian pastor Andreas Jonas Czirbesz wrote in 1774: "The Carpathian marmot resides in the highest mountain peaks' dens in summer and winter. It feeds on roots and herbs and has fat, tasty meat". The meat, skins, and especially the fat, which had wide applications in folk medicine, were valued. Various authors wrote about marmots; in 1719, teacher Georg Buchholtz, in 1721, Polish naturalist and Jesuit Gabriel Rzączyński, in 1750, Gdańsk naturalist Jacob Theodor Klein, and in 1779, Polish naturalist and clergyman Jan Krzysztof Kluk mentioned "whistlers" in the Carpathians. Marmots were hunted almost without restrictions until around 1868, and on the Hungarian side of the Tatras until 1883 when regulations protecting this species were introduced."" In 1865, the first description of marmot biology was published. The author of the work "About the Marmot" was Maksymilian Nowicki – co-founder of the Polish Tatra Society, researcher of Tatra fauna and flora, and a pioneer of nature conservation in Poland. Nowicki, using the name "Arctomys" ( – bear, and – mouse, rat) as the generic name for marmots, used the term "little bear mouse," while Ludwik Zejszner, writing at the same time, used the term "bear mouse": "this peculiar little animal looks like it consists of two others; with a head similar to a mouse, and the rest of the body like a bear, covered with long hair similar to martens. Highlanders call it a marmot because of its special barking, as the individual barking sounds are so drawn out that they bear a great resemblance to whistling... The marmot makes very long burrows in the Tatra hollows, lining them with moss, grass, and, like many animals, undergoes hibernation. At the end of summer, it stores numerous root supplies in its burrows and, having fattened up excessively, falls asleep in this winter abode, only waking up completely emaciated with spring". Systematics and evolution. Belonging to the family Sciuridae, the species "Marmota marmota" began to inhabit European territories already in the Pleistocene. 
It occurred over a vast area – from present-day Belgium and the shores of the English Channel to the Pannonian Basin. In the Holocene, with the warming climate, marmots had to choose more favorable locations. The moderate warmth of forested areas was not suitable for them, as their bodies are not well adapted to higher temperatures. They found cooler habitats in the elevated mountain ranges of Europe. Over time, they had to narrow their territories to the Alpine range and the Tatra Mountains. The separation of the Alpine population from the Carpathian one could have occurred from 15 to 50 thousand years ago, but for many years, no differences were noticed between the marmot living in the Tatras and its Alpine cousin, and both populations were treated as geographically separated locations of the same species. In the late 1950s and early 1960s, Czechoslovak scientists undertook comparative studies. For this purpose, on 31 May 1961, a marmot was shot from a colony in Wielki Żleb Krywański. The holotype studies were conducted by Josef Kratochvíl, a zoologist from the agricultural university in Brno, who found significant differences in the structure of the nasal bones compared to representatives of Alpine populations. After conducting additional comparisons of the results of cranial measurements of the skulls of 10 Tatra marmots (selected from 16 obtained for study) and 40 marmots from the Alps (27 skulls were personally examined, and the measurement data of 13 were the result of Gerrit Miller's work from 1912), Kratochvíl concluded that there was a regularity in the comparative group, namely, that the anterior, facial part of the nasal bone was significantly wider and longer in Tatra marmots than in animals from the Alps. Additionally, he found that marmots living in the Tatras are smaller than their cousins and have slightly different fur coloration. Ultimately, Kratochvíl classified marmots from the Tatras as a separate subspecies, "M. marmota latirostris", while marmots from the Alpine population were designated as "M. marmota marmota". Some zoologists question whether the observed differences could be merely manifestations of individual variability within the small population. However, genetic studies have not yet been conducted to settle this matter. Etymology. The generic name "Marmota" may derive from Gallo-Romance languages, meaning "murmuring" or "purring", or from Latin, being associated with the term "mus montanus", which translates to "mountain mouse". The subspecies epithet "latirostris" originates from two Latin words: the element "lati–" comes from Latin "lātus", meaning "wide", and –"rostris" from Latin "rōstrum", referring to the nose or beak, and together it can be translated as "broad-nosed". The epithet refers to the flattened and wider facial part of the nasal bone ("os nasale") of the animal compared to representatives of the Alpine population. Morphology. The Tatra marmot is one of the largest rodents in Europe. It is similar in size to a domestic cat, with a massive torso. Its length, including the head, ranges from 45 cm to 65 cm (although another source by the same author provides a range of 40–60 cm). In spring, the body mass of adult males ranges from 2.7 to 3.4 kg, while females weigh between 2.5 and 3.0 kg. During the season from spring to autumn, marmots start to consume more calories, taking in more carbohydrates from grass seeds, and their brown adipose tissue significantly expands, creating an energy reserve for the next hibernation period. 
Consequently, the body mass of marmots begins to increase noticeably during this period and can reach over 6 kg by autumn, with over 2 kg attributed to fat tissue. The fluffy tail measures between 13 and 17 cm. The primary fur consists of long, strong, and thick guard hairs, with the down hair being dense, composed of shorter, woolly, and slightly twisted hairs. The fur color is described as reddish-brown transitioning to dark brown-black. The hair coloration is highly varied, with black or dark brown dominating at the base, while shades of fawn, black, or reddish prevail higher up, fading to fawn or beige at the ends. The fur on the abdomen is lighter, ranging from light beige to yellowish. The head is covered with shorter hair, usually dark, black, or gray, with a light patch between the eyes. The muzzle is lighter, with a grayish tone, while the tail is blackish-brown, with a black tip. The fur of young individuals under 1 year old is notably darker and fluffier. Moulting typically occurs once a year, around June. Females, weakened by nursing new offspring, may have incomplete fur, and moulting may be delayed by about four weeks. As they age, the fur becomes more twisted and bushy. Older marmots, especially after hibernation, may have areas of thinning fur on their backs and tails. Five rows of whiskers, measuring up to 8 cm in length, grow on the sides of the marmot's muzzle, with sensory hairs also distributed on the eyebrows. Marmots do not have sweat glands. The forehead is flat and wide, with short (2–2.5 cm), densely furred ears almost entirely concealed in the head's fur. The eyes are small and black. The front paws are short, robust, and dexterous, equipped with four hairy toes ending in claws measuring from 2 to 2.5 cm, which serve as useful tools for digging burrows and holding food. The muscular hind limbs end in five toes with sharp claws. The external surface of the incisors is covered with hard enamel with longitudinal grooves, whose color changes with the animal's age, starting white in juveniles and darkening to orange and nearly brown with age. The posterior, concave surface of the incisors is made up of brittle dentin. The incisor's occlusal arrangement is scissor-like. The dentition for marmots is: formula_0. In females, there are five pairs of mammary glands. Morphological in comparison to the alpine marmot. The main feature that allowed the Tatra subspecies of marmot to be distinguished was the dimensions of the facial part of the nasal bone: Craniological comparisons (Josef Kratochvíl, 1961). Furthermore, Tatra marmots are characterized by lighter fur and have a grayish-brown coloration, while the alpine subspecies has a darker brown coloration, often with a reddish hue. "M. marmota latirostris" has a smaller body mass. However, sources do not provide details of this difference. Despite differences in the habitats of both subspecies (alpine marmots occupy locations at altitudes up to 3200 m above sea level), no differences have been observed in behavior and way of life. Lifestyle. The Tatra marmot is a diurnal animal with territorial and social behavior. It is monogamous, and individual families join colonies, with the nucleus typically being a dominant pair of animals. Annual life cycle. The annual life cycle of the marmot consists of two periods: summer activity and hibernation. Summer activity. The summer activity of the marmot begins around April and May and lasts until the second half of September or the first half of October. 
The earliest emergence of marmots on the surface has been observed on April 22, while the latest on May 10. This period lasts 139–158 days (or 139–161), with an average of 148 (or 150.7). The first spring activity after awakening from winter hibernation is associated with the mating season. the estrous cycle typically occurs in the second decade of May. Mating takes place both inside and outside the burrow. Gestation lasts about 33 days, and the young are born in the second decade of June. Usually, from 1 to 6 offspring are born, most commonly from 2 to 3. They remain under the care of the female in the burrow until the second half of July. They start consuming solid food at the age of 8 weeks. They will only become independent after 3–4 years when they reach sexual maturity. The beginning of summer activity is also marked by the search for new locations. Marmots migrate up to a distance of 3000 meters. During the summer period, marmots spend their time foraging (43.9%) and patrolling the territory (40.3%). Other activities such as moving, gathering winter supplies, digging and tidying burrows, playing, hygiene, typically account for only 8.9% of their time. Spending time inside the burrow during the day constitutes an average of 6.9%. There are three different types of contacts between marmots. The locomotor contact is the most common, followed by acoustic and visual contact. Hibernation. At the end of September and beginning of October, the period of summer activity ends for the Tatra marmot, and the hibernation period begins. Changes in the environment signal the marmot's organism. The air temperature drops, the days become shorter, and the vegetation, which serves as food, becomes scarce. Marmots curl up into a ball and collectively arrange themselves at the bottom of the winter burrow chamber. Metabolic processes slow down, allowing the body temperature to decrease from 37.7 °C to 8–10 °C, which is about 2–3 °C lower than the temperature inside the burrow. In exceptional situations, the body temperature may drop to 3–5 °C. Respiratory rate decreases from 16 to as low as 2–3 (or 4–5) breaths per minute, as oxygen consumption during hibernation decreases by about thirty times, and the heart rate decreases from 220 to 30 contractions per minute. Other sources specify the frequency of heart ventricular contractions in the summer period as 130, decreasing during hibernation to 15 per minute. Young marmots usually hibernate in the middle of the family, enveloped by older individuals, which helps them survive the burden of winter sleep more easily. Adult marmots accumulate more fat for this period. They also maintain a slightly higher body temperature. During hibernation, the marmot wakes up approximately every 3 weeks. The awakening lasts for about 12–30 hours, after which the animal returns to sleep. The trigger for awakening may be a decrease in the temperature inside the burrow to 0 °C. Too many awakenings during hibernation are very energetically costly for the marmot's organism. During these short awakenings, the marmot's body temperature rises to about 34 °C, sharply increasing energy expenditure, and up to 90% of the reserves accumulated in the form of brown adipose tissue are consumed. In the case of prolonged winter, the organism may become excessively depleted, leading to the death of the animal. During hibernation, the marmot's body mass significantly decreases. Hibernation lasts for 201–227 days, 215 on average. Social structure. 
Family colony can consist of several individuals – the dominant pair, their offspring from different years, and adopted individuals. According to former director of Tatra National Park, Wojciech Gąsienica-Byrcyn, several marmot families can comprise a colony. Marmot territorialism manifests in the daily need for the dominant male to patrol the boundaries demonstratively, systematically marking the territory with its scent. This is most likely done through secretions from glands located in the anal region, cheek glands, and glands in the paw pads. The marked territory is defended by the male against individuals from neighboring colonies. The first warnings are visual signals. The male raises its fur and waves its tail vertically. If the warning is not enough, it leads to a fight, where decisive wounds are inflicted using incisors. If the fight is evenly matched and neither male surrenders or flees, the brawl can last even the whole day. Aggressive defense is mainly directed towards foreign adult males. Young individuals can visit even the center of a foreign colony without fear. They are seen as adoption candidates for the colony without posing a threat to the dominant male's position. The social nature of marmot colonies is evident in their collective actions. They build burrows together, sleep together, and help each other groom. Young ones play together in groups, wrestling, standing in a "pillar" position, or chasing each other around the colony area. During a meeting, marmots greet each other by touching noses and sniffing. Sometimes, they also show excitement by moving their tails up and down. Reproduction is also subject to specific rules and is reserved only for the dominant pair. Mature individuals in the colony must adhere to the leader in this regard. Periodic battles for leadership may result in the overthrow of the dominant male. He is then expelled from the colony, and the victor may sometimes kill his offspring to solidify his power. Sounds. Marmots also communicate with each other using whistles or rather screams, as the sound emanates from their wide-open mouths. Its primary function is to warn colony members of danger. Several different levels of alarm whistles can be distinguished. The presence of a human, dog, or fox is signaled by a series of moderately intense whistles – then marmots swiftly move to the nearest burrow and observe from its entrance how events unfold. The series of whistles usually also discourages a fox, who is aware that after hearing the alarm signals, no marmot will become its prey. The greatest threat, such as the appearance of an eagle, is signaled by a single sharp whistle. Then, marmots flee without hesitation and hide in their burrows. Marmots also emit other sounds, which can be described as squeaks or murmurs. These do not convey warning information. Geographical distribution. The range of the "Marmota marmota" species spans between 44° and 49° N. Natural populations have survived in two mountain ranges: the Alps and the Tatra Mountains. The Tatra Mountains represent the northern range of the species (49° 14" N). The Alps are inhabited by the subspecies "Marmota marmota marmota", while "Marmota marmota latirostris" is an endemic species living in Poland and Slovakia. 
The vertical distribution range of Tatra marmots' habitats ranges from 1380 to 2050 meters above sea level, or according to other sources: in the Polish Tatra Mountains from 1750 to 1950 meters above sea level (with an average of 1870 meters above sea level), and in the entire Tatras massif from 1380 to 2330 meters above sea level. In the main Tatra range, 207 main burrows were inventoried, and in the Low Tatras, 46 burrows were counted, or according to other sources, 40 burrows. The population of Tatra marmots on the Polish side of the Tatras was estimated at 150–200 individuals, while the total population in the entire Tatras is less than 1000 individuals. The issue of precisely determining the locations of "M. marmota latirostris" in the Tatra Mountains encounters certain difficulties. While it is easy to establish that the autochthonous subspecies of the Tatra marmot occurs in the Western and Eastern Tatras, the origin of the marmots occurring in the Slovakian Low Tatras (with the highest peak being Ďumbier) separated from the main massif of the Tatras by the Liptov Basin is not clear. Although Ludwik Zejszner claimed that marmots were already living in the area of Solisko and Ďumbier in 1845, some researchers believe that the population in the Low Tatras is introduced from other parts of the Tatras. The fact of the 19th-century introduction in the area of Kráľova hoľa in the eastern part of the Low Tatras is reported by many researchers. Some even specify a specific time frame for it, around 1859–1867. Others believe that the marmots from the western part of the Low Tatras constitute a post-glacial population because, in their opinion, marmots would not be able to penetrate through the forested areas of the central part of the Low Tatras from previous introductions. In 1961, an expert on Tatra fauna, zoologist from the research station of the national park, Milič Blahout, wrote directly about the introduction of marmots from the Austrian Alps released in the Low Tatras between 1859 and 1867. Barbara Chovancová, who leads the marmot and chamois protection program in the Slovak Tatra National Park ("TANAP"), has no doubts that introductions were carried out twice in the Ďumbier area of the Low Tatras. Alpine marmots were released there in 1859, and in 1867, two pairs of marmots from the High Tatras were also released. If this were indeed the case, hybridization between representatives of both marmot subspecies could have occurred. Slovak authors mention a possible gene exchange between populations from the western and eastern parts of the Low Tatras. However, most believe that only "Marmota marmota latirostris" inhabits the Low Tatras. Kratochvíl proposed a hypothesis regarding the potential historical introduction confusion. He pointed out that historical information about marmots in the Carpathians could refer to populations still living in the 19th century in the Carpathians in Romania and Ukrainian Zakarpattia. However, there is a lack of reliable cranial or genetic studies that would resolve the issue of the origin of the populations in the Slovak Low Tatras. 
Data about historical locations of marmots in the Tatra Mountains was derived from extensive local nomenclature referring to marmot habitats: "Svišťový štít", "Svišťová dolina", "Svišťovský potok", "Świstówka Roztocka", "Świstówka Waksmundzka", "Malá Svišťovka", "Veľká Svišťovka", "Svišťové sedlo", "Świstowa Grań" with "Svišťové veže" included, "Świstowa Rówień", "Svišťové plieska", "Svišťový roh", "Svišťová kôpka", "Svišťový priechod", "Svišťový chrbát", "Sedlo pod Svišťovkou", "Svišťovka", "Nižná svišťová jaskyňa", "Vyšná svišťová jaskyňa". These names appear in literature only in the 17th century, but Gabriel Rzączyński already in 1721 mentioned, "is found in the Alps and Carpathian mountains in a valley named Świszcza" (probably referring to "Svišťová dolina"). In the last decade, a small number of marmots from the population in the Slovak Tatras were introduced into the Ukrainian part of the Eastern Carpathians. Fossil traces of occurrence. Fossil traces of marmot occurrence have been found in Moravia, and in Poland in the vicinity of Jasło. Ecology. The Tatra marmot is a herbivore. The main components of its diet include: herbaceous vegetation, shrubs and prostrate shrubs, roots, and tubers. The composition of its diet changes with the periods of vegetation in the Tatra Mountains. Favorite spring dishes include grasses, and in summer, the marmot most willingly eats spotted gentian, "Luzula alpinopilosa", alpine coltsfoot, alpine avens, "Mutellina purpurea", and "Poa granitica". More than 40 species of consumed plants are mentioned; among them are alpine bartsia, wood cranesbill, "Oreochloa disticha", European blueberry, "Veratrum lobelianum", "Campanula alpina", "Campanula tatrae", "Ranunculus pseudomontanus", large white buttercup, alpine hawkweed, brown clover, colorful fescue, "Valeriana sambucifolia", "Valeriana tripteris", "Thymus alpestris", "Thymus pulcherrimus", "Adenostyles alliariae", dandelions, "Solidago alpestris", "Doronicum austriacum", "Doronicum clusii", golden cinquefoil, "Anthyllis alpestris", bistort, alpine bistort, golden root, alpine pasqueflower, alpine sainfoin, "Juncus trifidus", "Rumex alpestris", wavy hair-grass, eyebright, "Soldanella carpatica", alpine meadow-grass, narcissus anemone and round-headed rampion. An adult marmot consumes about 1.5 kg of food daily. Water requirements are met by consuming juicy plants. The Tatra marmot is vulnerable to attacks from predators such as the golden eagle, gray wolf, Eurasian lynx, and red fox. The mere presence of humans also causes pressure and changes in behavior. In relation to humans, the marmot maintains a safe escape distance of several dozen to several hundred meters. In relation to predators, this distance is significantly extended and can be several hundred meters. Habitat. The entire population of the Tatra marmot is located within the territory of the Slovak Tatra National Park ("TANAP") and the Polish Tatra National Park, so the habitat has a primitive character. The typical habitat of "M. marmota latirostris" consists of Tatran areas in the altitudinal zonation: alpine zone, subalpine zone (grassy fragments), mountain zone (open spaces), and to a limited extent, foothill zone, with an average annual temperature ranging from -3 to +3 °C. The marmot prefers sunny places. The slope angle of the mountain habitats does not exceed 40°. The marmot enjoys the surroundings of rocks, which provide a good observation point and can also serve as shelter for hiding. 
The soil must have sufficient thickness to allow the digging of a burrow. In the Tatras, the upper limit of vertical distribution of habitats is determined by orographic conditions. Above 2300 m above sea level, the low thickness of the soil basically prevents the digging of burrows. The lowest recorded site was inventoried at an altitude of 1350 m above sea level. The central part of the colony typically occupies an area of about 2.5 hectares. The surface area of the territory depends, of course, on the vegetation coverage and the size of the colony. However, most authors report much smaller surface areas occupied by the colony: Peter Bopp 2000–2500 m2, Josef Kratochvíl 7900 m2, Milíč Blahout 2500–3600 m2, Dymitr I. Bibikov 500–4500 m2. The maximum occupied area can reach 2–7 hectares. Burrow. The burrow is the most important habitat for the marmot. It spends the entire period of hibernation in it, and during the summer, the burrow provides shelter. Therefore, suitable soil conditions, which allow easy excavation of tunnels, largely determine the location of a given colony. Both the thickness of the soil and the arrangement of aquifers are important, as they influence the avoidance of flooding or inundation. The nesting chamber is lined with dried grasses from the local vegetation. The marmot utilizes "Juncus trifidus", "Oreochloa disticha", fescue, reed grass, moss and lichens. The gathered lining is not used by marmots as food. It has also been observed that marmots collect tissues or fabrics discarded by tourists for this purpose. Marmots maintain cleanliness of their burrows, so each burrow is equipped with a latrine, which is created in alcoves along the side tunnels. Only there does the marmot relieve itself. Similarly, local marmot latrines can also be found outside formal burrows. They are created temporarily on the family's territory, in short tunnels, or in depressions under rocks. Hibernation burrow. The hibernation burrow plays a crucial role in the life of these animals. A marmot family spends an average of 215 days per year in it, hence the term "main burrow". It can also serve as the center of a marmot's life. The winter burrow consists of an entire system of branching tunnels, which can have a total length of over 10 meters. The tunnels have an oval shape with a diameter of 15–18 cm and are relatively shallow, but below the frost line – typically 1.2 meters below ground level. The central chamber of the burrow is located even up to 7 meters deep, accessed by several tunnels allowing the use of distant entrances. The chamber is abundantly lined with hay collected by the animals. To prepare one bedroom for winter, these rodents can use up to 15 kg of grass. Near the entrances, there is usually an earth mound formed by the removal of material during tunneling. The tunnels are equipped with small widenings, which allow family members to pass each other going in opposite directions. Before winter, the entrances to the burrows are carefully sealed by the marmots from the inside with packed earth, stones, and feces. Each burrow houses a family hibernating, usually consisting of 4 to 10 marmots. In the past, accounts from hunters indicated that they found between 2 and 15 individuals dug out from a winter burrow. Summer burrow. The summer burrow, sometimes referred to as transitional, is a system of tunnels built for temporary use during the summer season. It is excavated more shallowly (does not need to protect from the cold of frozen ground) and has a shorter tunnel system. 
Sometimes, after summer, it may be deepened and used for hibernation. However, if a family uses the hibernation burrow in the summer, they do not build a separate summer burrow. Escape burrow. The most commonly encountered underground structure of the marmot is the escape burrow, also known as emergency or rescue burrow. These are makeshift shelters from predators. Their construction is very simple – sometimes their length does not exceed 1 meter, and they are equipped with only one or two entrances and do not have many branches. The marmot tries to cover its territory with a network of such makeshift shelters. Within the area of one colony, there can be found from 16 to 20 such hiding places, and when disturbed, the animal can remain underground for several hours. The marmot tries not to stray more than 10–15 meters from the nearest shelter. Over time, even escape burrows can be expanded and elevated to a higher status. Tatra marmot in captivity. Tatra marmots are not currently bred in Polish zoos. Between 1924 and 1935, the zoo in Poznań possessed three individuals from the Polish population of "M. marmota", while from 1927 onwards, the Kraków "Bażantarnia" (which was succeeded by the local zoo) bred four Tatra marmots. After World War II, individual specimens of this subspecies were exhibited in Czechoslovakian zoos. Threats and conservation. Legal conservation. Threat categories. IUCN classification: "Marmota marmota marmota" is abundant, least concerning (LC). The subspecies "Marmota marmota latirostris" is rare and requires strict protection. Highly endangered in Poland due to a small population. Threats. Until the 19th century, regular hunting of marmots was conducted in the Tatra Mountains, where their skins, meat, and fat were highly sought-after commodities. There was a particularly high demand for marmot fat due to its use in traditional medicine and the widespread belief in its miraculous healing properties. Marmot fat was used both externally and internally for various purposes, including the treatment of hernias. It was also given with milk to women in labor to facilitate childbirth and as a strengthener. Additionally, it was believed to heal wounds, treat swollen glands (lymph nodes), and alleviate coughs. Towards the end of the 19th century, when Zakopane gained status of a spa town, many tuberculosis patients sought "miraculous remedies" in the form of marmot fat. Marmot skins were used to cover horse collars or sold as fur to urban residents. However, the highlanders in the Podhale region did not use them to make their own clothing. Freshly removed marmot skins were applied to painful areas for rheumatism. Marmot meat was also considered the best among all game meats. In the second half of the 19th century, only refuges inaccessible to hunters remained. In 1881, only 30 individuals were inventoried on the Polish side of the Tatras, and by 1888, there were 35 individuals left. On 5 October 1868, under pressure from members of the local history commission of the Kraków Scientific Society (Maksymilian Nowicki, Ludwik Zejszner, and Father Dr. Eugeniusz Janota), the Diet of Galicia and Lodomeria in Lviv adopted a law "regarding the prohibition of capturing, exterminating, and selling alpine animals proper to the Tatras, marmots, and wild goats". This was the world's first case of parliamentary enactment of a law protecting animal species. The situation of the Tatra marmot worsened again during World War I, leading to an increase in hunting. 
Since then, sheep grazing in marmot habitats has also begun. A significant improvement in the situation came with the establishment of the Tatra National Park in 1954, which encompasses all marmot locations in the Polish Tatras. The main threats to "M. marmota latirostris" include pressure from predators and hunting. Changes in behavior are induced by excessive tourist and sports activity. Human presence also limits contacts between colonies (e.g., in the Kasprowy Wierch area), which are necessary for genetic exchange, and increases the risk of additional diseases, including parasites. Population. Since the significant decline of marmots in the 19th century, their population has slowly been increasing. In 1881, there were 30 marmots in the Polish Tatras, between 1888 – 35, in 1928 and 1952 – 50, in 1982 between 108 and 132, and around 190 individuals in 2003. Altogether, on both sides of the Tatras at the beginning of the 21st century, there were approximately 700–800 individuals of this subspecies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tfrac{1 0 2 3}{1 0 1 3}" } ]
https://en.wikipedia.org/wiki?curid=76459550
7646018
Schwartz kernel theorem
Theorem In mathematics, the Schwartz kernel theorem is a foundational result in the theory of generalized functions, published by Laurent Schwartz in 1952. It states, in broad terms, that the generalized functions introduced by Schwartz (Schwartz distributions) have a two-variable theory that includes all reasonable bilinear forms on the space formula_0 of test functions. The space formula_0 itself consists of smooth functions of compact support. Statement of the theorem. Let formula_1 and formula_2 be open sets in formula_3. Every distribution formula_4 defines a continuous linear map formula_5 such that (1) ⟨k, u ⊗ v⟩ = ⟨Kv, u⟩ for every formula_6. Conversely, for every such continuous linear map formula_7 there exists one and only one distribution formula_4 such that (1) holds. The distribution formula_8 is the kernel of the map formula_7. Note. Given a distribution formula_4 one can always write the linear map K informally as formula_9 so that formula_10. Integral kernels. The traditional kernel functions formula_11 of two variables of the theory of integral operators having been expanded in scope to include their generalized function analogues, which are allowed to be more singular in a serious way, a large class of operators from formula_0 to its dual space formula_12 of distributions can be constructed. The point of the theorem is to assert that the extended class of operators can be characterised abstractly, as containing all operators subject to a minimum continuity condition. A bilinear form on formula_0 arises by pairing the image distribution with a test function. A simple example is that the natural embedding of the test function space formula_0 into formula_12 - sending every test function formula_13 into the corresponding distribution formula_14 - corresponds to the delta distribution formula_15 concentrated at the diagonal of the underlying Euclidean space, in terms of the Dirac delta function formula_16. While this is at most an observation, it shows how the distribution theory adds to the scope. Integral operators are not so 'singular'; another way to put it is that for formula_7 a continuous kernel, only compact operators are created on a space such as the continuous functions on formula_17. The operator formula_18 is far from compact, and its kernel is intuitively speaking approximated by functions on formula_19 with a spike along the diagonal formula_20 and vanishing elsewhere. This result implies that the formation of distributions has a major property of 'closure' within the traditional domain of functional analysis. It was interpreted (comment of Jean Dieudonné) as a strong verification of the suitability of the Schwartz theory of distributions to mathematical analysis more widely. In his "Éléments d'analyse" volume 7, p. 3 he notes that the theorem includes differential operators on the same footing as integral operators, and concludes that it is perhaps the most important modern result of functional analysis. He goes on immediately to qualify that statement, saying that the setting is too 'vast' for differential operators, because of the property of monotonicity with respect to the support of a function, which is evident for differentiation. Even monotonicity with respect to singular support is not characteristic of the general case; its consideration leads in the direction of the contemporary theory of pseudo-differential operators.
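As a concrete sketch of how differential operators are covered on the same footing as integral operators (using the informal integral convention of the Note above, with both open sets taken to be the real line), the differentiation operator taking a test function v to its derivative v′ has as its kernel the distribution k(x, y) = δ′(x − y), since formally
\[
(Kv)(x) = v'(x) = \int_{\mathbb{R}} \delta'(x - y)\, v(y)\, \mathrm{d}y .
\]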
Smooth manifolds. Dieudonné proves a version of the Schwartz result valid for smooth manifolds, and additional supporting results, in sections 23.9 to 23.12 of that book. Generalization to nuclear spaces. Much of the theory of nuclear spaces was developed by Alexander Grothendieck while investigating the Schwartz kernel theorem and published in 1955. We have the following generalization of the theorem. Schwartz kernel theorem: Suppose that "X" is nuclear, "Y" is locally convex, and "v" is a continuous bilinear form on formula_21. Then "v" originates from a space of the form formula_22 where formula_23 and formula_24 are suitable equicontinuous subsets of formula_25 and formula_26. Equivalently, "v" is of the form formula_27 for all formula_28 where formula_29 and each of formula_30 and formula_31 are equicontinuous. Furthermore, these sequences can be taken to be null sequences (i.e. converging to 0) in formula_32 and formula_33, respectively. References.
[ { "math_id": 0, "text": "\\mathcal{D}" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "Y" }, { "math_id": 3, "text": "\\mathbb{R}^n" }, { "math_id": 4, "text": "k \\in \\mathcal{D}'(X \\times Y)" }, { "math_id": 5, "text": "K \\colon \\mathcal{D}(Y) \\to \\mathcal{D}'(X)" }, { "math_id": 6, "text": "u \\in \\mathcal{D}(X), v \\in \\mathcal{D}(Y)" }, { "math_id": 7, "text": "K" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "Kv = \\int_{Y} k(\\cdot,y) v(y) d y" }, { "math_id": 10, "text": "\\langle Kv,u \\rangle = \\int_{X} \\int_{Y} k(x,y) v(y) u(x) d y d x" }, { "math_id": 11, "text": "K(x,y)" }, { "math_id": 12, "text": "\\mathcal{D}'" }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "[f]" }, { "math_id": 15, "text": "\\delta(x-y)" }, { "math_id": 16, "text": "\\delta" }, { "math_id": 17, "text": "[0,1]" }, { "math_id": 18, "text": "I" }, { "math_id": 19, "text": "[0,1]\\times[0,1]" }, { "math_id": 20, "text": "x=y" }, { "math_id": 21, "text": "X \\times Y" }, { "math_id": 22, "text": "X^{\\prime}_{A^{\\prime}} \\widehat{\\otimes}_{\\epsilon} Y^{\\prime}_{B^{\\prime}}" }, { "math_id": 23, "text": "A^{\\prime}" }, { "math_id": 24, "text": "B^{\\prime}" }, { "math_id": 25, "text": "X^{\\prime}" }, { "math_id": 26, "text": "Y^{\\prime}" }, { "math_id": 27, "text": "v(x, y) = \\sum_{i=1}^{\\infty} \\lambda_i \\left\\langle x, x_i^{\\prime} \\right\\rangle \\left\\langle y, y_i^{\\prime} \\right\\rangle" }, { "math_id": 28, "text": "(x, y) \\in X \\times Y" }, { "math_id": 29, "text": "\\left( \\lambda_i \\right) \\in l^1" }, { "math_id": 30, "text": "\\{ x^{\\prime}_1, x^{\\prime}_2, \\ldots \\}" }, { "math_id": 31, "text": "\\{ y^{\\prime}_1, y^{\\prime}_2, \\ldots \\}" }, { "math_id": 32, "text": "X^{\\prime}_{A^{\\prime}}" }, { "math_id": 33, "text": "Y^{\\prime}_{B^{\\prime}}" } ]
https://en.wikipedia.org/wiki?curid=7646018
764639
Louis Poinsot
French mathematician and physicist (1777–1859) Louis Poinsot (; 3 January 1777 – 5 December 1859) was a French mathematician and physicist. Poinsot was the inventor of geometrical mechanics, showing how a system of forces acting on a rigid body could be resolved into a single force and a couple. Everyone makes for himself a clear idea of the motion of a point, that is to say, of the motion of a corpuscle which one supposes to be infinitely small, and which one reduces by thought in some way to a mathematical point. —Louis Poinsot, "Théorie nouvelle de la rotation des corps" (1834) Life. Louis was born in Paris on 3 January 1777. He attended the school of Lycée Louis-le-Grand for secondary preparatory education for entrance to the famous École Polytechnique. In October 1794, at age 17, he took the École Polytechnique entrance exam and failed the algebra section but was still accepted. A student there for two years, he left in 1797 to study at École des Ponts et Chaussées to become a civil engineer. Although now on course for the practical and secure professional study of civil engineering, he discovered his true passion, abstract mathematics. Poinsot thus left the École des Ponts et Chaussées and civil engineering to become a mathematics teacher at the secondary school Lycée Bonaparte in Paris, from 1804 to 1809. From there he became inspector general of the Imperial University of France. He shared the post with another famous mathematician, Delambre. On 1 November 1809, Poinsot became assistant professor of analysis and mechanics at his old school the École Polytechnique. During this period of transitions between schools and work, Poinsot had remained active in research and published a number of works on geometry, mechanics and statics so that by 1809 he had an excellent reputation. By 1812 Poinsot was no longer directly teaching at École Polytechnique using substitute teacher Reynaud, and later Cauchy, and lost his post in 1816 when they re-organized, but he did become admissions examiner and held that for another 10 years. He also worked at the famous Bureau des Longitudes from 1839 until his death. On the death of Joseph-Louis Lagrange in 1813, Poinsot was elected to fill his place at the Académie des Sciences. In 1840 he became a member of the superior council of public instruction. In 1846 he was awarded an Officer of the Legion of Honor, and on the formation of the Senate in 1852 he was chosen a member of that body. Poinsot was elected Fellow of the Royal Society of London in 1858. He died in Paris on 5 December 1859. He is buried in Pere Lachaise Cemetery in Paris. From the diary of Thomas Hirst, 20 December 1857: ...[Poinsot] shook me kindly by the hand, bid me be seated, and took his seat near me. He is now between 60 and 70 years old, with silver silken hair neatly arranged on a fine intelligent head. He is tall and thin, but although he now stoops with age and feebleness one can see that one time his figure was more than ordinarily graceful. He was loosely but neatly dressed in a large ample robe de chambre. His features are finely moulded — indeed everything about the man betokens good blood. He talks incessantly and well. I did not misunderstand a word, although he spoke always in a low tone, and now and then his voice dropped as if from weariness, but he never wandered from his point... Legacy and tributes. The crater Poinsot on the moon is named after Poinsot. A street in Paris is called Rue Poinsot (14th Arrondissement). 
Gustave Eiffel included Poinsot among the 72 names of prominent French scientists on plaques around the first stage of the Eiffel Tower. "Poinsot was determined to publish only fully developed results and to present them with clarity and elegance. Consequently he left a rather limited body of work ..." —Dictionary of Scientific Biography (see Sources) Work. Works include: Poinsot was the inventor of geometrical mechanics, which showed how a system of forces acting on a rigid body could be resolved into a single force and a couple. Previous work done on the motion of a rigid body had been purely analytical with no visualization of the motion, and the great value of the work, as Poinsot says, "it enables us to represent to ourselves the motion of a rigid body as clearly that as a moving point" (Encyclopædia Britannica, 1911). In particular, he devised what is now known as Poinsot's construction. This construction describes the motion of the angular velocity vector formula_0 of a rigid body with one point fixed (usually its center of mass). He proved that the endpoint of the vector formula_0 moves in a plane perpendicular to the angular momentum (in absolute space) of the rigid body. He discovered the four Kepler-Poinsot polyhedra in 1809. Two of these had already appeared in Kepler's work of 1619, although Poinsot was unaware of this. The other two are the great icosahedron and great dodecahedron, which some people call these two the "Poinsot solids". In 1810 Cauchy proved, using Poinsot's definition of regular, that the enumeration of regular star polyhedra is complete. Poinsot worked on number theory studying Diophantine equations. However he is best known for his work in geometry and, together with Monge, regained geometry's leading role in mathematical research in France in the 19th century. Poinsot also contributed to the importance of geometry by creating a chair of advanced geometry at the Sorbonne in 1846. Poinsot created the chair for Chasles which he occupied until his death in 1880. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
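Poinsot's geometric picture can be checked numerically: for torque-free motion the angular momentum is fixed in space and the kinetic energy is conserved, so the projection of formula_0 onto the angular momentum direction stays constant and the tip of formula_0 moves in a fixed plane (the invariable plane). The Python sketch below integrates Euler's equations in the body frame with assumed principal moments of inertia and an assumed initial spin, and verifies that this projection does not change.

import numpy as np

I = np.array([1.0, 2.0, 3.0])      # principal moments of inertia (assumed example values)
w = np.array([0.3, 1.0, 0.2])      # initial angular velocity in the body frame

def wdot(w):
    # Euler's equations for torque-free rotation about the principal axes
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

dt, steps = 1e-3, 20000
proj = []
for _ in range(steps):
    k1 = wdot(w)                                   # classical 4th-order Runge-Kutta step
    k2 = wdot(w + 0.5 * dt * k1)
    k3 = wdot(w + 0.5 * dt * k2)
    k4 = wdot(w + dt * k3)
    w = w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    L = I * w                                      # angular momentum in the body frame
    proj.append(w @ L / np.linalg.norm(L))         # projection of omega onto the L direction

print(max(proj) - min(proj))   # essentially zero: the tip of omega stays in a plane normal to L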
[ { "math_id": 0, "text": "\\mathbf{\\omega}" } ]
https://en.wikipedia.org/wiki?curid=764639
76465690
Sarah Koch
American mathematician (born 1979) Sarah Colleen Hanlon Koch (born 1979) is an American mathematician, the Arthur F. Thurnau Professor of Mathematics at the University of Michigan. Her research interests include complex analysis, complex dynamics, and Teichmüller theory. Education and career. Koch was born and educated in Concord, New Hampshire, with summers in Wilmington, Vermont. She went to the Rensselaer Polytechnic Institute, initially studying chemistry but soon switching to mathematics; she graduated in 2001. Next, she went to Cornell University for graduate study in mathematics, earning a master's degree in 2005, and completed her studies with a double Ph.D., supervised by John H. Hubbard: a doctorate from the University of Provence in 2007 with the dissertation "La Théorie de Teichmüller et ses applications aux endomorphismes de formula_0", and a doctorate from Cornell in 2008 with the dissertation "A New Link between Teichmüller Theory and Complex Dynamics". She became a postdoctoral researcher as a National Science Foundation postdoctoral fellow at the University of Warwick and Harvard University; at Harvard, her postdoctoral mentor was Curtis T. McMullen. She stayed at Harvard as Benjamin Peirce Assistant Professor from 2010 to 2013. She moved to the University of Michigan in 2013, became an associate professor in 2016, and was promoted to full professor in 2021, at the same time being named as the Arthur F. Thurnau Professor. Recognition. The University of Michigan gave Koch the 2016 Class of 1923 Memorial Teaching Award in 2016 and the 2020 Harold R. Johnson Diversity Service Award. Koch was the recipient of the 2021 Distinguished University Teaching of Mathematics Award of the Michigan Section of the Mathematical Association of America. She was a 2023 recipient of the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics, recognizing her classroom teaching, her support of mathematics students from underrepresented groups, and her efforts to bring mathematics to middle schoolers from underserved African-American communities in Ypsilanti, Michigan. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{P}^n" } ]
https://en.wikipedia.org/wiki?curid=76465690
7646799
Bessel process
In mathematics, a Bessel process, named after Friedrich Bessel, is a type of stochastic process. Formal definition. The Bessel process of order "n" is the real-valued process "X" given (when "n" ≥ 2) by formula_0 where ||·|| denotes the Euclidean norm in R"n" and "W" is an "n"-dimensional Wiener process (Brownian motion). For any "n", the "n"-dimensional Bessel process is the solution to the stochastic differential equation (SDE) formula_1 where W is a 1-dimensional Wiener process (Brownian motion). Note that this SDE makes sense for any real parameter formula_2 (although the drift term is singular at zero). Notation. A notation for the Bessel process of dimension n started at zero is BES0("n"). In specific dimensions. For "n" ≥ 2, the "n"-dimensional Wiener process started at the origin is transient from its starting point with probability one, i.e., "X""t" > 0 for all "t" > 0. It is, however, neighbourhood-recurrent for "n" = 2, meaning that with probability 1, for any "r" > 0, there are arbitrarily large "t" with "X""t" < "r"; on the other hand, it is truly transient for "n" > 2, meaning that "X""t" ≥ "r" for all "t" sufficiently large. For "n" ≤ 0, the Bessel process is usually started at points other than 0, since the drift to 0 is so strong that the process becomes stuck at 0 as soon as it hits 0. Relationship with Brownian motion. 0- and 2-dimensional Bessel processes are related to local times of Brownian motion via the Ray–Knight theorems. The law of a Brownian motion near x-extrema is the law of a 3-dimensional Bessel process (theorem of Tanaka). References.
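As a rough numerical illustration of the two equivalent descriptions above, the following Python sketch compares the norm construction with a crude Euler-Maruyama discretization of the SDE. The parameter values (dimension 3, horizon 1, starting point 0.5) are arbitrary choices for the example, and the reflection step is only a simple way of keeping the scheme away from the singular drift at zero.

import numpy as np

rng = np.random.default_rng(0)
n, T, steps, paths = 3, 1.0, 500, 2000
dt = T / steps
x0 = 0.5                                   # start away from the singular point 0

# Bessel process as the norm of an n-dimensional Brownian motion started at (x0, 0, ..., 0)
start = np.zeros(n)
start[0] = x0
W_T = rng.normal(0.0, np.sqrt(T), size=(paths, n))     # W_T is Gaussian with variance T
X_T = np.linalg.norm(start + W_T, axis=1)

# Euler-Maruyama discretization of dX = dW + (n - 1) dt / (2 X), started at x0
x = np.full(paths, x0)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    x = np.abs(x + dW + (n - 1) * dt / (2 * x))         # reflection keeps the iterates positive

print(X_T.mean(), x.mean())   # the two constructions give similar values for n >= 2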
[ { "math_id": 0, "text": "X_t = \\| W_t \\|," }, { "math_id": 1, "text": "dX_t = dW_t + \\frac{n-1}{2}\\frac{dt}{X_t}" }, { "math_id": 2, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=7646799
76470000
Donald M. Davis (mathematician)
American mathematician Donald M. Davis (born 7 May 1945) is an American mathematician specializing in algebraic topology. Davis received a B.S. from MIT in 1967 and a PhD in mathematics at Stanford in 1972, directed by R. James Milgram. After postdoctoral positions at University of California, San Diego and Northwestern University, he began a 50-year career at Lehigh University in 1974. In 2012 he was named an inaugural Fellow of the American Mathematical Society. . Since 2002, he has been Executive Editor of "Homology, Homotopy and Applications". Research. Davis has published in algebraic topology, differential topology, topological robotics, and combinatorial number theory. He is an expert on immersions of projective spaces, and maintains a website with all known results for real projective spaces. He computed the formula_0-periodic homotopy groups of all compact simple Lie groups. Coaching. In 1993 Davis started the Lehigh Valley Math Team. In 2005, 2009, 2010, 2011, and 2024, they were national champions in the American Regions Math League (ARML). They have finished second or third in ARML seven other times. They won the Harvard/MIT Math Tournament (HMMT) in 2023 and 2024, and the Princeton University Math Competition (PUMaC) in 2009, 2010, 2012, and 2023. Running. From 1977 through 2009, Davis competed in marathon and ultramarathon races. He was the overall winner of ultramarathon races of 31 to 78 miles in the 1970s, 1980s, 1990s, and 2000s. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "v_1" } ]
https://en.wikipedia.org/wiki?curid=76470000
76478062
Knuth–Plass line-breaking algorithm
Line-breaking algorithm used in the TeX typesetting package The Knuth–Plass algorithm is a line-breaking algorithm designed for use in Donald Knuth's typesetting program TeX. It integrates the problems of text justification and hyphenation into a single algorithm by using a discrete dynamic programming method to minimize a loss function that attempts to quantify the aesthetic qualities desired in the finished output. The algorithm works by dividing the text into a stream of three kinds of objects: "boxes", which are non-resizable chunks of content, "glue", which are flexible, resizeable elements, and "penalties", which represent places where breaking is undesirable (or, if negative, desirable). The loss function, known as "badness", is defined in terms of the deformation of the glue elements, and any extra penalties incurred through line breaking. Making hyphenation decisions follows naturally from the algorithm, but the choice of possible hyphenation points within words, and optionally their preference weighting, must be performed first, and that information inserted into the text stream in advance. Knuth and Plass' original algorithm does not include page breaking, but may be modified to interface with a pagination algorithm, such as the algorithm designed by Plass in his PhD thesis. Typically, the cost function for this technique should be modified so that it does not count the space left on the final line of a paragraph; this modification allows a paragraph to end in the middle of a line without penalty. The same technique can also be extended to take into account other factors such as the number of lines or costs for hyphenating long words. Computational complexity. A naive brute-force exhaustive search for the minimum badness by trying every possible combination of breakpoints would take an impractical formula_0 time. The classic Knuth-Plass dynamic programming approach to solving the minimization problem is a worst-case formula_1 algorithm but usually runs much faster in close to linear time. Solving for the Knuth-Plass optimum can be shown to be a special case of the convex least-weight subsequence problem, which can be solved in formula_2 time. Methods to do this include the SMAWK algorithm. Simple example of minimum raggedness metric. For the input text AAA BB CC DDDDD with line width 6, a greedy algorithm that puts as many words on a line as possible (while preserving order) before moving to the next line would produce:
------ Line width: 6
AAA BB Remaining space: 0
CC Remaining space: 4
DDDDD Remaining space: 1
The sum of squared space left over by this method is formula_3. However, the optimal solution achieves the smaller sum formula_4:
------ Line width: 6
AAA Remaining space: 3
BB CC Remaining space: 1
DDDDD Remaining space: 1
The difference here is that the first line is broken before codice_0 instead of after it, yielding a better right margin and a lower cost of 11. References.
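The minimum-raggedness example above can be reproduced with a short dynamic programming sketch in Python. It only illustrates the formula_1 recurrence on the simplified cost used in the example (squared trailing space on every line, single spaces between words, no glue, penalties, or hyphenation as in full Knuth–Plass); the function and variable names are invented for this sketch.

def line_cost(words, i, j, width):
    # cost of putting words[i:j] on one line, separated by single spaces
    length = sum(len(w) for w in words[i:j]) + (j - i - 1)
    if length > width:
        return float("inf")                      # does not fit
    return (width - length) ** 2                 # squared trailing space

def break_lines(words, width):
    n = len(words)
    best = [0.0] + [float("inf")] * n            # best[j] = minimal cost of laying out words[:j]
    prev = [0] * (n + 1)                         # prev[j] = start index of the last line
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + line_cost(words, i, j, width)
            if c < best[j]:
                best[j], prev[j] = c, i
    lines, j = [], n
    while j > 0:                                 # recover the optimal breakpoints
        lines.append(" ".join(words[prev[j]:j]))
        j = prev[j]
    return best[n], lines[::-1]

cost, lines = break_lines("AAA BB CC DDDDD".split(), 6)
print(cost, lines)   # 11.0 ['AAA', 'BB CC', 'DDDDD'], matching the optimal layout above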
[ { "math_id": 0, "text": "O(2^n)" }, { "math_id": 1, "text": "O(n^2)" }, { "math_id": 2, "text": "O(n)" }, { "math_id": 3, "text": "0^2 + 4^2 + 1^2 = 17" }, { "math_id": 4, "text": "3^2 + 1^2 + 1^2 = 11" } ]
https://en.wikipedia.org/wiki?curid=76478062
764833
Poiseuille
Proposed SI derived unit of dynamic viscosity The poiseuille (symbol Pl) has been proposed as a derived SI unit of dynamic viscosity, named after the French physicist Jean Léonard Marie Poiseuille (1797–1869). In practice the unit has never been widely accepted and most international standards bodies do not include the poiseuille in their list of units. The third edition of the IUPAC Green Book, for example, lists Pa⋅s (pascal-second) as the SI unit for dynamic viscosity, and does not mention the poiseuille. The equivalent CGS unit, the poise, symbol P, is most widely used when reporting viscosity measurements. formula_0 Liquid water has a viscosity of about 8.9 × 10⁻⁴ Pl (0.89 mPa⋅s) at 25 °C and atmospheric pressure. References.
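A small sketch of the conversions implied by formula_0, using the water value quoted above (the variable names are arbitrary):

eta_Pl = 8.9e-4                # dynamic viscosity of water in poiseuille (= Pa*s)
eta_P = eta_Pl * 10            # in poise, since 1 Pl = 10 P
eta_cP = eta_P * 100           # in centipoise
print(eta_Pl, eta_P, eta_cP)   # 0.00089 Pl = 0.0089 P = 0.89 cP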
[ { "math_id": 0, "text": "1\\ \\text{Pl} = 1\\ \\text{Pa}{\\cdot}\\text{s} = 1 \\text{kg}/\\text{m}{{\\cdot}}\\text{s} = 1 \\text{N}{\\cdot}\\text{s}/\\text{m}^{2} = 10\\ \\text{dyn}{\\cdot}\\text{s}/\\text{cm}^{2} = 10\\ \\text{P}" } ]
https://en.wikipedia.org/wiki?curid=764833
764848
Autoregressive moving-average model
Statistical model used in time series analysis In the statistical analysis of time series, autoregressive–moving-average (ARMA) models provide a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the autoregression (AR) and the second for the moving average (MA). The general ARMA model was described in the 1951 thesis of Peter Whittle, "Hypothesis testing in time series analysis", and it was popularized in the 1970 book by George E. P. Box and Gwilym Jenkins. Given a time series of data formula_0, the ARMA model is a tool for understanding and, perhaps, predicting future values in this series. The AR part involves regressing the variable on its own lagged (i.e., past) values. The MA part involves modeling the error term as a linear combination of error terms occurring contemporaneously and at various times in the past. The model is usually referred to as the ARMA("p","q") model where "p" is the order of the AR part and "q" is the order of the MA part (as defined below). ARMA models can be estimated by using the Box–Jenkins method. Autoregressive model. The notation AR("p") refers to the autoregressive model of order "p". The AR("p") model is written as formula_1 where formula_2 are parameters and the random variable formula_3 is white noise, usually independent and identically distributed (i.i.d.) normal random variables. In order for the model to remain stationary, the roots of its characteristic polynomial must lie outside of the unit circle. For example, processes in the AR(1) model with formula_4 are not stationary because the root of formula_5 lies within the unit circle. Stationarity can be checked with a unit-root test such as the augmented Dickey–Fuller (ADF) test. In hybrid forecasting schemes that decompose a series into intrinsic mode function (IMF) and trend components, the ADF test is applied to each component: ARMA models are fitted to the stationary components, while non-stationary components are handled with models such as LSTMs, and the final forecast is obtained by recombining the predictions for each component series. Moving-average model. The notation MA("q") refers to the moving average model of order "q": formula_6 where the formula_7 are the parameters of the model, formula_8 is the expectation of formula_0 (often assumed to equal 0), and the formula_3, formula_9... are again i.i.d. white noise error terms that are commonly normal random variables. ARMA model. The notation ARMA("p", "q") refers to the model with "p" autoregressive terms and "q" moving-average terms. This model contains the AR("p") and MA("q") models, formula_10 The general ARMA model was described in the 1951 thesis of Peter Whittle, who used mathematical analysis (Laurent series and Fourier analysis) and statistical inference. ARMA models were popularized by a 1970 book by George E. P. Box and Jenkins, who expounded an iterative (Box–Jenkins) method for choosing and estimating them. This method was useful for low-order polynomials (of degree three or less). The ARMA model is essentially an infinite impulse response filter applied to white noise, with some additional interpretation placed on it. Specification in terms of lag operator. In some texts the models are specified in terms of the lag operator "L". In these terms, the AR("p") model is given by formula_11 where formula_12 represents the polynomial formula_13 The MA("q") model is given by formula_14 where formula_15 represents the polynomial formula_16 Finally, the combined ARMA("p", "q") model is given by formula_17 or more concisely, formula_18 or formula_19
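A minimal Python sketch of the ARMA("p","q") recursion defined above, simulating a zero-mean ARMA(2,1) process. The coefficient values are arbitrary choices for the example; the corresponding AR polynomial 1 − 0.5z + 0.25z² has its roots outside the unit circle, so the process is stationary.

import numpy as np

rng = np.random.default_rng(0)
phi = [0.5, -0.25]             # AR coefficients (example values)
theta = [0.4]                  # MA coefficients (example values)
n_obs, burn = 1000, 200
eps = rng.standard_normal(n_obs + burn)     # i.i.d. standard normal white noise

x = np.zeros(n_obs + burn)
for t in range(n_obs + burn):
    ar = sum(p * x[t - i - 1] for i, p in enumerate(phi) if t - i - 1 >= 0)
    ma = sum(q * eps[t - j - 1] for j, q in enumerate(theta) if t - j - 1 >= 0)
    x[t] = ar + eps[t] + ma
x = x[burn:]                   # drop the burn-in so start-up effects are negligible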
Some authors, including Box, Jenkins & Reinsel, use a different convention for the autoregression coefficients. This allows all the polynomials involving the lag operator to appear in a similar form throughout. Thus the ARMA model would be written as formula_20 Moreover, starting summations from formula_21 and setting formula_22 and formula_23 yields an even more elegant formulation: formula_24 Alternative interpretation. In digital signal processing, the ARMA model is represented as a digital filter with white noise at the input and the ARMA process at the output. Fitting models. Choosing p and q. Finding appropriate values of "p" and "q" in the ARMA("p","q") model can be facilitated by plotting the partial autocorrelation functions for an estimate of "p", and likewise using the autocorrelation functions for an estimate of "q". Extended autocorrelation functions (EACF) can be used to simultaneously determine p and q. Further information can be gleaned by considering the same functions for the residuals of a model fitted with an initial selection of "p" and "q". Brockwell & Davis recommend using the Akaike information criterion (AIC) for finding "p" and "q". Another possible choice for determining the order is the Bayesian information criterion (BIC). Estimating coefficients. ARMA models in general can be, after choosing "p" and "q", fitted by least squares regression to find the values of the parameters which minimize the error term. It is generally considered good practice to find the smallest values of "p" and "q" which provide an acceptable fit to the data. For a pure AR model the Yule–Walker equations may be used to provide a fit. Unlike regression methods such as ordinary least squares (OLS) or two-stage least squares (2SLS), which are often employed in econometric analysis with a view to causal inference, ARMA models are used primarily to forecast time-series data, and their estimated coefficients should therefore be interpreted as tools for prediction rather than as causal effects. Spectrum. The spectral density of an ARMA process is formula_25 where formula_26 is the variance of the white noise, formula_15 is the characteristic polynomial of the moving average part of the ARMA model, and formula_27 is the characteristic polynomial of the autoregressive part of the ARMA model. Applications. ARMA is appropriate when a system is a function of a series of unobserved shocks (the MA or moving average part) as well as its own behavior. For example, stock prices may be shocked by fundamental information as well as exhibiting technical trending and mean-reversion effects due to market participants. Generalizations. The dependence of formula_0 on past values and the error terms formula_3 is assumed to be linear unless specified otherwise. If the dependence is nonlinear, the model is specifically called a "nonlinear moving average" (NMA), "nonlinear autoregressive" (NAR), or "nonlinear autoregressive–moving-average" (NARMA) model. Autoregressive–moving-average models can be generalized in other ways. See also autoregressive conditional heteroskedasticity (ARCH) models and autoregressive integrated moving average (ARIMA) models. If multiple time series are to be fitted then a vector ARIMA (or VARIMA) model may be fitted. If the time-series in question exhibits long memory then fractional ARIMA (FARIMA, sometimes called ARFIMA) modelling may be appropriate: see Autoregressive fractionally integrated moving average. 
If the data is thought to contain seasonal effects, it may be modeled by a SARIMA (seasonal ARIMA) or a periodic ARMA model. Another generalization is the "multiscale autoregressive" (MAR) model. A MAR model is indexed by the nodes of a tree, whereas a standard (discrete time) autoregressive model is indexed by integers. Note that the ARMA model is a univariate model. Extensions for the multivariate case are the vector autoregression (VAR) and Vector Autoregression Moving-Average (VARMA). Autoregressive–moving-average model with exogenous inputs (ARMAX model). The notation ARMAX("p", "q", "b") refers to the model with "p" autoregressive terms, "q" moving average terms and "b" exogenous input terms. This model contains the AR("p") and MA("q") models and a linear combination of the last "b" terms of a known and external time series formula_28. It is given by: formula_29 where formula_30 are the parameters of the exogenous input formula_28. Some nonlinear variants of models with exogenous variables have been defined: see for example Nonlinear autoregressive exogenous model. Statistical packages implement the ARMAX model through the use of "exogenous" (that is, independent) variables. Care must be taken when interpreting the output of those packages, because the estimated parameters usually (for example, in R and gretl) refer to the regression: formula_31 where formula_32 incorporates all exogenous (or independent) variables: formula_33 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
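As a concrete illustration of the definitions and of the Box–Jenkins-style fitting described above, the following minimal Python sketch simulates an ARMA(2,1) process and re-estimates its parameters by maximum likelihood. It is an illustrative assumption rather than part of the original article: it relies on the third-party numpy and statsmodels packages, and the particular coefficient values (0.75, −0.25, 0.65) are arbitrary examples. Note that statsmodels expects the AR and MA lag polynomials written with a leading 1 and with the AR coefficients negated, matching the lag-operator form formula_17.

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

# True model: X_t = 0.75*X_{t-1} - 0.25*X_{t-2} + e_t + 0.65*e_{t-1}
ar = np.array([1.0, -0.75, 0.25])   # coefficients of phi(L) = 1 - 0.75 L + 0.25 L^2
ma = np.array([1.0, 0.65])          # coefficients of theta(L) = 1 + 0.65 L

# Simulate 1000 observations driven by unit-variance Gaussian white noise
y = ArmaProcess(ar, ma).generate_sample(nsample=1000)

# Optional stationarity check with the augmented Dickey-Fuller unit-root test
adf_stat, p_value = adfuller(y)[:2]   # a small p-value argues against a unit root

# Fit an ARMA(2,1) model; ARIMA with differencing order d = 0 reduces to ARMA
result = ARIMA(y, order=(2, 0, 1)).fit()
print(result.summary())               # estimates of phi_1, phi_2, theta_1 and sigma^2

# Spectral density implied by the true parameters (sigma^2 = 1 here):
# S(f) = sigma^2 / (2*pi) * |theta(e^{-if})|^2 / |phi(e^{-if})|^2
f = np.linspace(0.0, np.pi, 512)
z = np.exp(-1j * f)
spectrum = np.abs(np.polyval(ma[::-1], z))**2 / np.abs(np.polyval(ar[::-1], z))**2 / (2 * np.pi)

For a sample of this size the printed estimates should lie close to the true values 0.75, −0.25 and 0.65, and comparing AIC or BIC across a few candidate orders reproduces the order-selection step discussed above.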
[ { "math_id": 0, "text": "X_t" }, { "math_id": 1, "text": " X_t = \\sum_{i=1}^p \\varphi_i X_{t-i}+ \\varepsilon_t" }, { "math_id": 2, "text": "\\varphi_1, \\ldots, \\varphi_p" }, { "math_id": 3, "text": "\\varepsilon_t" }, { "math_id": 4, "text": "|\\varphi_1| \\ge 1" }, { "math_id": 5, "text": "1 - \\varphi_1B = 0" }, { "math_id": 6, "text": " X_t = \\mu + \\varepsilon_t + \\sum_{i=1}^q \\theta_i \\varepsilon_{t-i}\\," }, { "math_id": 7, "text": "\\theta_1,...,\\theta_q" }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": "\\varepsilon_{t-1}" }, { "math_id": 10, "text": " X_t = \\varepsilon_t + \\sum_{i=1}^p \\varphi_i X_{t-i} + \\sum_{i=1}^q \\theta_i \\varepsilon_{t-i}.\\," }, { "math_id": 11, "text": " \\varepsilon_t = \\left(1 - \\sum_{i=1}^p \\varphi_i L^i\\right) X_t = \\varphi (L) X_t\\," }, { "math_id": 12, "text": "\\varphi" }, { "math_id": 13, "text": " \\varphi (L) = 1 - \\sum_{i=1}^p \\varphi_i L^i.\\," }, { "math_id": 14, "text": " X_t - \\mu = \\left(1 + \\sum_{i=1}^q \\theta_i L^i\\right) \\varepsilon_t = \\theta (L) \\varepsilon_t , \\," }, { "math_id": 15, "text": "\\theta" }, { "math_id": 16, "text": " \\theta(L)= 1 + \\sum_{i=1}^q \\theta_i L^i .\\," }, { "math_id": 17, "text": " \\left(1 - \\sum_{i=1}^p \\varphi_i L^i\\right) X_t = \\left(1 + \\sum_{i=1}^q \\theta_i L^i\\right) \\varepsilon_t \\, ," }, { "math_id": 18, "text": " \\varphi(L) X_t = \\theta(L) \\varepsilon_t \\, " }, { "math_id": 19, "text": " \\frac{\\varphi(L)}{\\theta(L)}X_t = \\varepsilon_t \\, ." }, { "math_id": 20, "text": " \\left(1 - \\sum_{i=1}^p \\phi_i L^i\\right) X_t = \\left(1 + \\sum_{i=1}^q \\theta_i L^i\\right) \\varepsilon_t \\, ." }, { "math_id": 21, "text": " i=0 " }, { "math_id": 22, "text": " \\phi_0 = -1 " }, { "math_id": 23, "text": " \\theta_0 = 1 " }, { "math_id": 24, "text": " -\\sum_{i=0}^p \\phi_i L^i \\; X_t = \\sum_{i=0}^q \\theta_i L^i \\; \\varepsilon_t \\, ." }, { "math_id": 25, "text": "S(f) = \\frac{\\sigma^2}{2\\pi} \\left\\vert \\frac{\\theta(e^{-if})}{\\phi(e^{-if})} \\right\\vert^2" }, { "math_id": 26, "text": "\\sigma^2" }, { "math_id": 27, "text": "\\phi" }, { "math_id": 28, "text": "d_t" }, { "math_id": 29, "text": " X_t = \\varepsilon_t + \\sum_{i=1}^p \\varphi_i X_{t-i} + \\sum_{i=1}^q \\theta_i \\varepsilon_{t-i} + \\sum_{i=1}^b \\eta_i d_{t-i}.\\," }, { "math_id": 30, "text": "\\eta_1, \\ldots, \\eta_b" }, { "math_id": 31, "text": " X_t - m_t = \\varepsilon_t + \\sum_{i=1}^p \\varphi_i (X_{t-i} - m_{t-i}) + \\sum_{i=1}^q \\theta_i \\varepsilon_{t-i}.\\," }, { "math_id": 32, "text": "m_t" }, { "math_id": 33, "text": "m_t = c + \\sum_{i=0}^b \\eta_i d_{t-i}.\\," } ]
https://en.wikipedia.org/wiki?curid=764848
76492308
History of radiation protection
The history of radiation protection begins at the turn of the 19th and 20th centuries with the realization that ionizing radiation from natural and artificial sources can have harmful effects on living organisms. As a result, the study of radiation damage also became a part of this history. While radioactive materials and X-rays were once handled carelessly, increasing awareness of the dangers of radiation in the 20th century led to the implementation of various preventive measures worldwide, resulting in the establishment of radiation protection regulations. Although radiologists were the first victims, they also played a crucial role in advancing radiological progress and their sacrifices will always be remembered. Radiation damage caused many people to suffer amputations or die of cancer. The use of radioactive substances in everyday life was once fashionable, but over time, the health effects became known. Investigations into the causes of these effects have led to increased awareness of protective measures. The dropping of atomic bombs during World War II brought about a drastic change in attitudes towards radiation. The effects of natural cosmic radiation, radioactive substances such as radon and radium found in the environment, and the potential health hazards of non-ionizing radiation are well-recognized. Protective measures have been developed and implemented worldwide, monitoring devices have been created, and radiation protection laws and regulations have been enacted. In the 21st century, regulations are becoming even stricter. The permissible limits for ionizing radiation intensity are consistently being revised downward. The concept of radiation protection now includes regulations for the handling of non-ionizing radiation. In the Federal Republic of Germany, radiation protection regulations are developed and issued by the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV). The Federal Office for Radiation Protection is involved in the technical work. In Switzerland, the Radiation Protection Division of the Federal Office of Public Health is responsible, and in Austria, the Ministry of Climate Action and Energy. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; X-rays. Early radiation consequences. The discovery of X-rays by Wilhelm Conrad Röntgen (1845-1923) in 1895 led to extensive experimentation by scientists, physicians, and inventors. The first X-ray machines produced extremely unfavorable radiation spectra for imaging with extremely high skin doses. In February 1896, John Daniel and William Lofland Dudley (1859–1914) of Vanderbilt University conducted an experiment in which Dudley's head was X-rayed, resulting in hair loss. Herbert D. Hawks, a graduate of Columbia University, suffered severe burns on his hands and chest during demonstration experiments with X-rays. Burns and hair loss were reported in scientific journals. Nikola Tesla (1856–1943) was one of the first researchers to explicitly warn of the potential dangers of X-rays in the "Electrical Review" on May 5, 1897 - after initially claiming them to be completely harmless. He suffered massive radiation damage after his experiments. Nevertheless, some doctors at the time still claimed that X-rays had no effect on humans. Until the 1940s, X-ray machines were operated without any protective safeguards. Röntgen himself was spared the fate of the other X-ray users by habit. 
He always carried the unexposed photographic plates in his pockets and found that they were exposed if he remained in the same room during the exposure. So he regularly left the room when he took X-rays. The use of X-rays for diagnostic purposes in dentistry was made possible by the pioneering work of C. Edmund Kells (1856-1928), a New Orleans dentist who demonstrated them to dentists in Asheville, North Carolina, in July 1896. Kells committed suicide after suffering from radiation-induced cancer for many years. His fingers had been amputated one at a time, then his entire hand, followed by his forearm and finally his entire arm. Otto Walkhoff (1860-1934), one of the most important German dentists in history, took X-rays of himself in 1896 and is considered a pioneer in dental radiology. He described the required exposure time of 25 minutes as an "ordeal". Braunschweig's medical community later commissioned him to set up and supervise a central X-ray facility. In 1898, the year radium was discovered, he also tested the use of radium in medicine in a self-experiment using an amount of 0.2 grams of radium bromide. Walkhoff observed that cancerous mice exposed to radium radiation died significantly later than a control group of untreated mice. He thus initiated the development of radiation research for the treatment of tumors. The Armenian-American radiologist Mihran Krikor Kassabian (1870-1910), vice president of the American Roentgen Ray Society (ARRS), was concerned about the irritating effects of X-rays. In a publication, he mentioned his increasing problems with his hands. Although Kassabian recognized X-rays as the cause, he avoided making this reference so as not to hinder the progress of radiology. In 1902, he suffered a severe radiation burn on his hand. Six years later, the hand became necrotic and two fingers of his left hand were amputated. Kassabian kept a diary and photographed his hands as the tissue damage progressed. He died of cancer in 1910. Many of the early X-ray and radioactivity researchers went down in history as "martyrs for science." In her article, "The Miracle and the Martyrs", Sarah Zobel of the University of Vermont tells of a 1920 banquet held to honor many of the pioneers of X-rays. Chicken was served for dinner: "Shortly after the meal was served, it could be seen that some of the participants were unable to enjoy the meal. After years of working with X-rays, many of the participants had lost fingers or hands due to radiation exposure and were unable to cut the meat themselves". The first American to die from radiation exposure was Clarence Madison Dally (1865-1904), an assistant to Thomas Alva Edison (1847-1931). Edison began studying X-rays almost immediately after Röntgen's discovery and delegated the task to Dally. Over time, Dally underwent more than 100 skin operations due to radiation damage. Eventually, both of his arms had to be amputated. His death led Edison to abandon all further X-ray research in 1904. One of the pioneers was the Austrian Gustav Kaiser (1871-1954), who in 1896 succeeded in photographing a double toe with an exposure time of 1½-2 hours. Due to the limited knowledge at the time, he also suffered severe radiation damage to his hands, losing several fingers and his right metacarpal. His work was the basis for, among other things, the construction of lead rubber aprons. Heinrich Albers-Schönberg (1865-1921), the world's first professor of radiology, recommended gonadal protection for testicles and ovaries in 1903. 
He was one of the first to protect germ cells not only from acute radiation damage but also from small doses of radiation that could accumulate over time and cause late damage. Albers-Schönberg died at the age of 56 from radiation damage, as did Guido Holzknecht and Elizabeth Fleischman. Since April 4, 1936, a radiology memorial in the garden of the of Hamburg's St. Georg Hospital has commemorated the 359 victims from 23 countries who were among the first medical users of X-rays. Initial warnings. In 1896, the engineer Wolfram Fuchs, based on his experience with numerous X-ray examinations, recommended keeping the exposure time as short as possible, staying away from the tube, and covering the skin with Vaseline. In 1897, Chicago doctors William Fuchs and Otto Schmidt became the first users to have to pay compensation to a patient for radiation damage. In 1901, dentist William Herbert Rollins (1852-1929) called for using lead-glass goggles when working with X-rays, for the X-ray tube to be encased in lead, and for all areas of the body to be covered with lead aprons. He published over 200 articles on the potential dangers of X-rays, but his suggestions were long ignored. A year later, Rollins wrote in despair that his warnings about the dangers of X-rays were not being heeded by either the industry or his colleagues. By this time, Rollins had demonstrated that X-rays could kill laboratory animals and induce miscarriages in guinea pigs. Rollins' achievements were not recognized until later. Since then, he has gone down in the history of radiology as the "father of radiation protection. He became a member of the "Radiological Society of North America" and its first treasurer. Radiation protection continued to develop with the invention of new measuring devices such as the chromoradiometer by Guido Holzknecht (1872-1931) in 1902, the radiometer by Raymond Sabouraud (1864-1938) and Henri Noiré (1878–1937) in 1904/05, and the quantimeter by Robert Kienböck (1873-1951) in 1905, which made it possible to determine maximum doses at which there was a high probability that no skin changes would occur. Radium was also included by the British Roentgen Society, which published its first memorandum on radium protection in 1921. Unnecessary applications. Pedoscope. Since the 1920s, pedoscopes have been installed in many shoe stores in North America and Europe, more than 10,000 in the U.S. alone, following the invention of Jacob Lowe, a Boston physicist. They were X-ray machines used to check the fit of shoes and to promote sales, especially to children. Children were particularly fascinated by the sight of their footbones. X-rays were often taken several times daily to evaluate the fit of different shoes. Most were available in shoe stores until the early 1970s. The energy dose absorbed by the customer was up to 116 rads, or 1.16 grays. In the 1950s, when medical knowledge of the health risks was already available, pedoscopes came with warnings that shoe-buyers should not be scanned more than three times a day and twelve times a year. By the early 1950s, several professional organizations issued warnings against the continued use of shoe-mounted fluoroscopes, including the American Conference of Governmental Industrial Hygienists, the American College of Surgeons, the New York Academy of Medicine, and the American College of Radiology. At the same time, the District of Columbia enacted regulations requiring that shoe-mounted fluoroscopes be operated only by a licensed physical therapist. 
A few years later, the state of Massachusetts passed regulations stating that these machines could only be operated by a licensed physician. In 1957, the use of shoe-mounted fluoroscopes was banned by court order in Pennsylvania. By 1960, these measures and pressure from insurance companies led to the disappearance of the shoe-mounted fluoroscope, at least in the United States. In Switzerland, there were 1,500 shoe-mounted fluoroscopes in use; 850 were required to be inspected by the "Swiss Electrotechnical Association" under a decree of the "Federal Department of Home Affairs" of October 7, 1963. The last one was decommissioned in 1990. In Germany, the machines were not banned until 1976. The fluoroscopy machine emitted uncontrolled X-rays, which continuously exposed children, parents, and sales staff. The all-wood cabinet of the machine did not prevent the X-rays from passing through, resulting in particularly high cumulative radiation levels for the cashier when the pedoscope was placed near the cash register. The machines were clearly not designed with adequate radiation safety in mind. The well-established long-term effects of X-rays, including genetic damage and carcinogenicity, suggest that the use of pedoscopes worldwide over several decades may have contributed to health effects. However, it cannot be definitively proven that they were the sole cause. For example, a direct link has been discussed in the case of basal cell carcinoma of the foot. In 1950, a case was published in which a shoe model had to have a leg amputated as a result. Radiotherapy. In 1896, Viennese dermatologist Leopold Freund (1868-1943) used X-rays to treat patients for the first time. He successfully irradiated the hairy nevus of a young girl. In 1897, Hermann Gocht (1869–1931) published the treatment of trigeminal neuralgia with X-rays, and Alexei Petrovich Sokolov (1854-1928) wrote about radiotherapy for arthritis in the oldest radiology journal, Advances in the field of X-rays ("RöFo"). In 1922, X-rays were recommended as safe for many diseases and for diagnostic purposes. Radiation protection was limited to recommending doses that would not cause erythema (reddening of the skin). For example, X-rays were promoted as an alternative to tonsillectomy. It was also boasted that in 80% of cases of diphtheria carriers, "Corynebacterium diphtheriae" was no longer detectable within two to four days. In the 1930s, Günther von Pannewitz (1900-1966), a radiologist from Freiburg, Germany, perfected what he called "X-ray stimulation radiation" for degenerative diseases. Low-dose radiation reduces the inflammatory response of tissues. Until about 1960, children with diseases such as ankylosing spondylitis or favus (head fungus) were irradiated, which was effective but led to increased cancer rates among patients decades later. In 1926, the American pathologist James Ewing (1866-1943) was the first to observe bone changes as a result of radiotherapy, which he described as "radiation osteitis" (now osteoradionecrosis). In 1983, Robert E. 
Marx stated that osteoradionecrosis is radiation-induced aseptic bone necrosis. The acute and chronic inflammatory processes of osteoradionecrosis are prevented by the administration of steroidal anti-inflammatory drugs. In addition, the administration of pentoxifylline and antioxidant treatments, such as superoxide dismutase and tocopherol (vitamin E) are recommended. Radiation protection during X-ray examinations. Preliminary observation. Sonography (ultrasound diagnostics) is a versatile and widely used imaging modality in medical diagnostics. Ultrasound is also used in therapy. However, it uses "mechanical waves" and no ionizing or non-ionizing radiation. Patient safety is ensured if the recommended limits for avoiding cavitation and overheating are observed, see also Safety Aspects of Sonography. Even devices that use alternating magnetic fields in the radiofrequency range, such as magnetic resonance imaging (MRI), do not use ionizing radiation. MRI was developed as an imaging technique in 1973 by Paul Christian Lauterbur (1929-2007) with significant contributions from Sir Peter Mansfield (1933-2017). Jewelry or piercings can become very hot; on the other hand, a high tensile force is exerted on the jewelry, which in the worst case can cause it to be torn out. To avoid pain and injury, jewelry containing ferromagnetic metals should be removed beforehand. Pacemakers, defibrillator systems, and large tattoos in the examination area that contain metallic color pigments may heat up or cause second-degree burns or malfunction of the implants. Photoacoustic Tomography (PAT) is a hybrid imaging modality that utilizes the photoacoustic effect without the use of ionizing radiation. It works without contact with very fast laser pulses that generate ultrasound in the tissue under examination. The local absorption of the light leads to sudden local heating and the resulting thermal expansion. The result is broadband acoustic waves. The original distribution of absorbed energy can be reconstructed by measuring the outgoing ultrasound waves with appropriate ultrasound transducers. Radiation exposure detection. In order to better assess radiation protection, the number of X-ray examinations, including the dose, has been recorded annually in Germany since 2007. However, the Federal Statistical Office does not have complete data for conventional X-ray examinations. In 2014, the total number of X-ray examinations in Germany was estimated to be about 135 million, of which about 55 million were dental X-ray examinations. The average effective dose from x-ray examinations per inhabitant in Germany in 2014 was about 1.55 mSv (about 1.7 x-ray examinations per inhabitant per year). The proportion of dental X-rays is 41%, but accounts for only 0.4% of the collective effective dose. In Germany, Section 28 of the X-ray Ordinance (RöV) has required since 2002 that the attending physician must have an X-ray pass available for X-ray examinations and offer it to the patient. The pass contains information about the patient's X-rays to avoid unnecessary examinations and to allow comparison with previous images. With the entry into force of the new Radiation Protection Ordinance on December 31, 2018, this obligation no longer applies. In Austria and Switzerland, x-ray passports have so far been available voluntarily. In principle, there must always be both a justifiable indication for the use of X-rays and the informed consent of the patient. 
In the context of medical treatment, informed consent refers to the patient's agreement to all types of interventions and other medical measures (Section 630d of the German Civil Code). Radiation reduction. Over the years, there have been increasing efforts to reduce radiation exposure to therapists and patients. Radiation protective clothing. Following Rollins' findings, lead aprons with a lead thickness of 0.5 mm were introduced from around 1920. Due to their weight, lead-free and lead-reduced aprons were subsequently developed. In 2005, it was recognized that in some cases their protection was significantly inferior to that of pure lead aprons. The lead-free aprons contain tin, antimony and barium, which have the property of producing intense radiation (X-ray fluorescence radiation) when irradiated. In Germany, the Radiology Standards Committee has taken up the issue and introduced a German standard (DIN 6857-1) in 2009. The international standard IEC 61331-3:2014 was finally published in 2014. Protective aprons that do not comply with DIN 6857-1 of 2009 or the new IEC 61331-1 of 2014 may result in higher exposures. There are two lead-equivalence classes: 0.25 mm and 0.35 mm. The manufacturer must specify the area weight in kg/m2 at which the protective effect of a pure lead apron of 0.25 or 0.35 mm Pb is achieved. The protective effect of an apron shall be appropriate to the energy range used, up to 110 kV for low energy aprons and up to 150 kV for high energy aprons. If necessary, lead glass panels must also be used, with the front panels having a lead equivalent of 0.5-1.0 mm, depending on the application, and the side shields having a lead equivalent of 0.5-0.75 mm. Outside the useful beam, radiation exposure is primarily caused by scattered radiation from the tissue being scanned. During examinations of the head and torso, this scattered radiation can spread throughout the body and is difficult to shield with radiation protective clothing. Fears that a lead apron will prevent radiation from leaving the body are unfounded, however, because lead absorbs radiation rather than scattering it. When preparing an orthopantomogram (OPG) for a dental overview radiograph, it is sometimes recommended not to wear a lead apron, as it does little to shield scattered radiation from the jaw area, but may hinder the rotation of the imaging device. However, according to the 2018 X-ray regulation, it is still mandatory to wear a lead apron when taking an OPG. X-ray intensifying screens. In the same year as the discovery of X-rays, Mihajlo Idvorski Pupin (1858-1935) invented the method of placing a sheet of paper coated with fluorescent substances on the photographic plate, drastically reducing the exposure time and thus the radiation exposure. 95% of the film blackening was produced by the intensifying screen and only the remaining 5% directly by the X-rays. Thomas Alva Edison identified the blue-emitting calcium tungstate (CaWO4) as a suitable phosphor, which quickly became the standard for X-ray intensifying screens. In the 1970s, calcium tungstate was replaced by even better and finer intensifying screens with rare earth-based phosphors (terbium-activated lanthanum oxybromide, gadolinium oxysulfide). The use of intensifying screens with dental films did not become widespread because of the loss of image quality. The combination with high-sensitivity films further reduced radiation exposure. Anti-scatter grid. 
An anti-scatter grid is a device in X-ray technology that is placed in front of the image receiver (screen, detector, or film) and reduces the amount of scattered radiation reaching it. The first anti-scatter grid was developed in 1913 by Gustav Peter Bucky (1880-1963). The US radiologist Hollis Elmer Potter (1880-1964) improved it in 1917 by adding a moving device. The radiation dose must be increased when an anti-scatter grid is used. For this reason, grids should generally not be used on children. In digital radiography, a grid may be omitted under certain conditions to reduce radiation exposure to the patient. Radiation protection splint. Radiation protection measures may also be necessary against the scattered radiation that arises at metal parts of the dentition (dental fillings, bridges, etc.) during tumor irradiation of the head and neck. Since the 1990s, soft tissue retractors known as radiation protection splints have been used to prevent or reduce mucositis, an inflammation of the mucous membranes. It is the most significant adverse acute side effect of radiation. The radiation protection splint is a spacer that keeps the mucosa away from the teeth and reduces the amount of scattered radiation that hits the mucosa according to the square law of distance. Mucositis, which is extremely painful, is one of the most significant detriments to a patient's quality of life and often limits radiation therapy, thereby reducing the chances of tumor cure. The splint reduces oral mucosal reactions, which typically occur in the second and final thirds of a radiation series and are irreversible. Panoramic X-ray machine. The Japanese Hisatugu Numata developed the first panoramic radiograph in 1933/34. This was followed by the development of intraoral panoramic X-ray units, in which the X-ray tube is placed intraorally (inside the mouth) and the X-ray film extraorally (outside the mouth). At the same time, Horst Beger from Dresden in 1943 and the Swiss dentist Walter Ott in 1946 worked on the Panoramix (Koch & Sterzel), Status X (Siemens) and Oralix (Philips). Intraoral panoramic devices were discontinued at the end of the 1980s because the radiation exposure was too high in direct contact with the tongue and oral mucosa due to the intraoral tube. Digital X-ray. Eastman Kodak filed the first patent for digital radiography in 1973. The first commercial CR (computed radiography) solution was offered by Fujifilm in Japan in 1983 under the device name CR-101. X-ray imaging plates are used in X-ray diagnostics to record the shadow image of X-rays. The first commercial digital X-ray system for use in dentistry was introduced in 1986 by Trophy Radiology (France) under the name Radiovisiography. Digital x-ray systems help reduce radiation exposure. Instead of film, the machines contain a detector that converts the incident X-ray photons either into visible light (via a scintillator) or directly into electrical signals. Computed tomography. In 1972, the first commercial CT scanner for clinical use went into operation at Atkinson Morley Hospital in London. Its inventor was the English engineer Godfrey Newbold Hounsfield (1919-2004), who shared the 1979 Nobel Prize in Medicine with Allan McLeod Cormack (1924-1998) for his pioneering work in the field of computed tomography. The first steps toward dose reduction were taken in 1989 in the era of single-slice spiral CT. 
The introduction of multi-slice spiral computed tomography in 1998 and its continuous development made it possible to reduce the dose by means of dose modulation. The tube current is adjusted, for example by reducing the power for images of the lungs compared to the abdomen. The tube current is modulated during rotation. Because the human body has an approximately oval cross-section, radiation intensity is reduced when radiation is delivered from the front or back, and is increased when radiation is delivered from the side. This dose control also depends on the body mass index. For example, the use of dose modulation in the head and neck region reduces total exposure and organ doses to the thyroid and eye lens by up to 50% without significantly compromising diagnostic image quality. The Computed Tomography Dose Index (CTDI) is used to measure radiation exposure during a CT scan. The CTDI was first defined by the Food and Drug Administration (FDA) in 1981. The CTDI is expressed in milligray (mGy). Multiplying the CTDI by the length of the examination volume yields the dose-length product (DLP), which quantifies the total radiation exposure to the patient during a CT scan. Structural protective measures. An X-ray room must be shielded on all sides with 1 mm lead equivalent shielding. Calcium silicate or solid brick masonry is recommended. A steel jamb should be used, not only because of the weight of the heavy shielding door but also because of the shielding; wooden frames must be shielded separately. The shielding door must be covered with a 1 mm thick lead foil and a lead glass window must be installed as a visual connection. A keyhole shall be avoided. All installations (sanitary or electrical) that interrupt the radiation protection must be lined with lead (Section 20 and Annex 2 of the X-ray Ordinance, RöV). Depending on the application, nuclear medicine requires even more extensive protective measures, up to and including concrete walls several meters thick. In addition, since December 31, 2018, when the latest amendments to Section 14 (1) No. 2b of the Radiation Protection Act (StrlSchG) came into force, an expert in medical physics for X-ray diagnostics and therapy must be consulted for the optimization and quality assurance of the application and for advice on radiation protection issues. Certificate of competence. Each facility operating an x-ray unit shall have sufficient personnel with appropriate expertise. The person responsible for radiation protection or one or more Radiation Safety Officers shall have appropriate qualifications, which shall be regularly updated. X-ray examinations may be technically performed by any other staff member of a medical or dental practice if they are under the direct supervision and responsibility of the person responsible and if they have knowledge of radiation protection. This knowledge of radiation protection has been required since the amendment of the X-ray Ordinance in 1987; medical and dental assistants (then called medical assistants or dental assistants) received this additional training in 1990. The regulations for the specialty of radiology were tightened by the Radiation Protection Act, which came into force on October 1, 2017. The handling of radioactive substances and ionizing radiation (if not covered by the X-ray Ordinance) is regulated by the Radiation Protection Ordinance ("StrlSchV"). 
Section 30 StrlSchV defines the "required expertise and knowledge in radiation protection". Radiation protection associations. The Association of German Radiation Protection Physicians ("VDSÄ") was formed in the late 1950s from a "working group of radiation protection physicians of the German Red Cross" and was founded in 1964. It was dedicated to the promotion of radiation protection and the representation of medical, dental, and veterinary radiation protection concerns to the public and the health care system. In 2017, it was merged into the Professional Association for Radiation Protection. The Austrian Association for Radiation Protection ("ÖVS"), founded in 1966, pursues the same goals as the Association for Medical Radiation Protection in Austria. The Professional Association for Radiation Protection for Germany and Switzerland is networked worldwide. Radiation protection in radiotherapy. In radiotherapy, radiation protection of the patient is easily overshadowed by structural safeguards and the protection of the therapists. The benefit/risk assessment must take into account both the therapeutic goal of treating the patient's cancer and the safety of everyone involved, and appropriate treatment planning must ensure that radiation is delivered only where it is needed, so that effective treatment is combined with the lowest achievable risk. Linear accelerators replaced cobalt and caesium emitters in routine therapy due to their superior technical characteristics and risk profile. They have been available since about 1970. The presence of a medical physicist responsible for technical quality control is required for linear accelerators, unlike X-rays and telecurie systems. Radiation necrosis is the death of tissue caused by ionizing radiation. Radionecrosis is a serious complication of radiosurgical treatment that becomes clinically apparent months or years after irradiation. The incidence of radionecrosis has fallen significantly since the early days of radiation therapy. Modern radiation techniques prioritize the sparing of healthy tissue while irradiating as much of the area around the tumor as possible to prevent recurrence. Patients undergoing radiotherapy nevertheless face a certain level of radiation risk. Radiation protection and radiation damage in veterinary medicine. The literature on radiation injury to animals is limited; apart from local burns, hardly any other types of radiation injury have been documented. Diagnostic radiation has been shown to cause local burns in animals, typically resulting from prolonged exposure of body parts or sparks from old x-ray tubes. The frequency of injury among veterinary staff and veterinarians is significantly lower than in human medicine. In veterinary medicine, fewer images are taken compared to human medicine, particularly fewer CT scans. However, due to the manual restraint of animals to avoid anesthesia, at least one person is present in the control area, resulting in significantly higher radiation exposure than that of human medical staff. Since the 1970s, dosimeters have been used to monitor the radiation exposure of veterinary personnel. Feline hyperthyroidism (overactive thyroid) is a common disease in older cats. 
Radioiodine therapy is considered by many authors to be the treatment of choice. Following the administration of radioactive iodine, cats are kept in an isolation pen. The cat's radioactivity is measured to determine the time of discharge, which is typically 14 days after the start of therapy. The therapy requires significant radiation protection measures and is currently only offered at two veterinary facilities in Germany (as of 2010). After the start of treatment, cats must be kept indoors for four weeks, and contact with pregnant women and children under the age of 16 must be avoided due to residual radioactivity. Just like a medical practice, any veterinary practice operating an X-ray machine must have sufficient staff with the appropriate expertise, as required by Section 18 of the X-Ray Ordinance 2002. The corresponding training for paraveterinary workers (then called veterinary nurses) took place in 1990. In 2017, Linsengericht (Hesse) opened Europe's first clinic for horses with cancer. Radiation therapy is administered in a treatment room that is eight meters wide, on a specially designed table that can withstand heavyweight. The surrounding area is protected from radiation by three-meter thick walls. Mobile equipment is used to irradiate tumors in small animals at various locations. Radioactive substances. Radon. Radon is a naturally occurring radioactive noble gas discovered in 1900 by Friedrich Ernst Dorn (1848-1916) and is considered carcinogenic. Radon is increasingly found in areas with high levels of uranium and thorium in the soil. These are mainly areas with high granitic rock deposits. According to studies by the World Health Organization, the incidence of lung cancer increases significantly at radiation levels of 100-200 Bq per cubic meter of indoor air. The likelihood of developing lung cancer increases by 10% with each additional 100 Bq/m3 of indoor air. Elevated radon levels have been measured in numerous areas in Germany, particularly in southern Germany, Austria and Switzerland. Germany. The Federal Office for Radiation Protection has developed a radon map of Germany. The EU Directive 2013/59/Euratom (Radiation Protection Basic Standards Directive) introduced reference levels and the possibility for workers to have their workplace tested for radon exposure. In Germany, it was implemented in the Radiation Protection Act (Chapter 2 or Sections 124-132 StrlSchG) § 124-132 (in German) and the amended Radiation Protection Ordinance (Part 4 Chapter 1, Sections 153-158 StrlSchV). § 153-158 Act of (in German) The new radon protection regulations for workplaces and new residential buildings have been binding since January 2019. Extensive radon contamination and radon precautionary areas have been determined by the ministries of the environment of the federal states (as of June 15, 2021). Austria. The highest radon concentrations in Austria were measured in 1991 in the municipality of Umhausen in Tyrol. Umhausen has about 2300 inhabitants and is located in the Ötztal valley. Some of the houses there were built on a bedrock of granite gneiss. From this porous subsoil, the radon present in the rock seeped freely into the unsealed cellars, which were contaminated with up to 60,000 Becquerels of radon per cubic meter of air. Radon levels in the apartments in Umhausen have been systematically monitored since 1992. 
Since then, extensive radon mitigation measures have been implemented in the buildings: new construction, sealing of cellar floors, forced ventilation of cellars, or relocation. Queries in the Austrian Health Information System ("ÖGIS") have shown that the incidence of new cases of lung cancer has declined sharply since then. The Austrian National Radon Project (ÖNRAP) has studied radon exposure throughout the country. Austria also has a Radiation Protection Act as a legal basis. Indoor limits were set in 2008. The Austrian Ministry of the Environment states: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"Precautionary measures in radiation protection use the generally accepted model that the risk of lung cancer increases uniformly (linearly) with radon concentration. This means that an increased risk of lung cancer does not only occur above a certain value, but that a guideline or limit value only adjusts the magnitude of the risk in a meaningful way to other existing risks. Achieving a guideline or limit therefore means taking a risk that is still (socially) acceptable. It therefore makes perfect sense to take simple measures to reduce radon levels, even if they are below the guideline values." In Austria, the Radon Protection Ordinance, in its version of September 10, 2021, is currently in force, which also defines the radon protection areas and radon precautionary areas. Switzerland. The aim of the Radon Action Plan 2012-2020 in Switzerland was to incorporate the new international recommendations into the Swiss strategy for protection against radon and thus reduce the number of lung cancer cases attributable to radon in buildings. On 1 January 2018, the limit value of 1000 Bq/m3 was replaced by a reference value of 300 becquerels per cubic meter (Bq/m3) for the radon gas concentration averaged over a year in "rooms in which people regularly spend several hours a day". Subsequently, on May 11, 2020, the Federal Office of Public Health (FOPH) issued the Radon Action Plan 2021-2030. The provisions on radon protection are primarily laid down in the Radiation Protection Ordinance (RPO). Radiation sickness among miners. In 1879, Walther Hesse (1846-1911) and Friedrich Hugo Härting published the study "Lung Cancer, the Miners' Disease in the Schneeberg Mines". Hesse, a pathologist, was shocked by the poor health and young age of the miners. This particular form of bronchial carcinoma was given the name "Schneeberg disease" because it occurred among miners in the Schneeberg mines (Saxon Ore Mountains). When Hesse's report was published, radioactivity and the existence of radon were unknown. It was not until 1898 that Marie Curie-Skłodowska (1867-1934) and her husband Pierre Curie (1859-1906) discovered radium and coined the concept of radioactivity. Beginning in the fall of 1898, Marie Curie suffered from inflammation of the fingertips, the first known symptoms of radiation sickness. In the Jáchymov mines, where silver and non-ferrous metals were mined from the 16th to the 19th century, uranium ore was mined in abundance in the 20th century. It was only during the Second World War that restrictions were imposed on ore mining in the Schneeberg and Jáchymov mines. After World War II, uranium mining was accelerated for the Soviet atomic bomb project and the emerging Soviet nuclear industry. Forced labor was used. 
Initially these were German prisoners of war and displaced persons; after the communist takeover of February 1948, they were joined by political prisoners of the regime in Czechoslovakia as well as conscripted civilian workers. Several "Czechoslovak gulags" were established in the area to house these workers. In all, about 100,000 political prisoners and more than 250,000 forced laborers passed through the camps. About half of them probably did not survive the mining work. Uranium mining ceased in 1964. The number of other victims who died as a result of radiation can only be guessed at. Radon-bearing springs discovered during the mining in the early 20th century established a spa industry that is still important today, as well as the town's status as the oldest radium brine spa in the world. Wismut AG. The approximately 200,000 uranium miners employed by Wismut AG in the former Soviet occupation zone of East Germany were exposed to very high levels of radiation, particularly between 1946 and 1955, but also in later years. This exposure was caused by the inhalation of radon and its radioactive by-products, which were deposited to a considerable extent in the inhaled dust. Radiation exposure was expressed in the historical unit of working level month (WLM). This unit of measurement was introduced in the 1950s specifically for occupational safety in uranium mines in the U.S. to record radiation exposure from radon and its decay products in the inhaled air. Approximately 9000 workers at Wismut AG have been diagnosed with lung cancer. Radium. Until the 1930s, radium compounds were not only considered relatively harmless, but also beneficial to health, and were advertised as medicines for a variety of ailments or used in products that glowed in the dark. Processing took place without any safeguards. Until the 1960s, radioactivity was often handled naively and carelessly. From 1940 to 1945, the Berlin-based "Auergesellschaft", founded by Carl Auer von Welsbach (1858-1929, Osram), produced a radioactive toothpaste called Doramad that contained thorium-X and was sold internationally. It was advertised with the statement, "Its radioactive radiation strengthens the defenses of the teeth and gums. The cells are charged with new life energy and the destructive effect of bacteria is inhibited." This gave the claim of "radiant white teeth" a double meaning. By 1930, there were also bath additives and eczema ointments under the brand name "Thorium-X". Radium was also added to toothpastes, such as Kolynos toothpaste. After World War I, radioactivity became a symbol of modern achievement and was considered "chic". A radium industry developed, using radium in creams, beverages, chocolates, toothpastes, and soaps. Radioactive substances were added to mineral water, condoms, and cosmetic powders. Even chocolate laced with radium was sold. The toy manufacturer Märklin in the Swabian town of Göppingen tested the sale of an X-ray machine for children. At upper-class parties, people "photographed" each other's bones for fun. A system called "Trycho" for epilation (hair removal) of the face and body was franchised in the USA. As a result, thousands of women suffered skin burns, ulcers and tumors. It was not until the atomic bombings of Hiroshima and Nagasaki that the public became aware of the dangers of ionizing radiation and these products were banned. 
It took a relatively long time for radium and its decay product radon to be recognized as the cause of the observed effects. Radithor, a radioactive agent consisting of triple-distilled water in which the radium isotopes 226Ra and 228Ra were dissolved so that it had an activity of at least one microcurie, was marketed in the United States. It was not until 1932, when the prominent American athlete Eben Byers, who by his own account had taken about 1,400 vials of Radithor as medicine on the recommendation of his physician, fell seriously ill with cancer, lost many of his teeth, and died shortly thereafter in great agony, that strong doubts were raised about the healing powers of Radithor and radium water. Radium cures. 1908 saw a boom in the use of radioactive water for therapeutic purposes. The discovery of springs in Oberschlema and Bad Brambach paved the way for the establishment of radium spas, which relied on the healing properties of radium. During the cures, people bathed in radium water, drank cures with radium water, and inhaled radon in emanatoriums. The baths were visited by tens of thousands of people every year, hoping for hormesis. To this day, therapeutic applications are carried out in spas and healing tunnels. The natural release of radon from the ground is used. According to the German Spa Association, the activity in water must be at least 666 Bq/liter. The requirement for inhalation treatments is at least 37,000 Bq/m3 of air. This form of therapy is not scientifically accepted and the potential risk of radiation exposure is criticized. The equivalent dose of a radon cure in Germany is given by the individual health resorts as about one to two millisieverts, depending on the location. In 2010, doctors in Erlangen, using the (outdated) LNT (Linear, No-Threshold) model, concluded that five percent of all lung cancer deaths in Germany are caused by radon. There are radon baths in Bad Gastein, Bad Hofgastein and Bad Zell in Austria, in Niška Banja in Serbia, in the radon revitalization bath in Menzenschwand and in Bad Brambach, Bad Münster am Stein-Ebernburg, Bad Schlema, Bad Steben, Bad Schmiedeberg and Sibyllenbad in Germany, in Jáchymov in the Czech Republic, in Hévíz in Hungary, in Świeradów-Zdrój (Bad Flinsberg) in Poland, in Naretschen and Kostenez in Bulgaria and on the island of Ischia in Italy. There are radon tunnels in Bad Kreuznach and Bad Gastein. Illuminated dials. The dangers of radium were recognized in the early 1920s and first described in 1924 by New York dentist and oral surgeon Theodor Blum (1883-1962). He was particularly aware of the use of radium in the watch industry, where it was used for luminous dials. He published an article on the clinical picture of the so-called radium jaw. He observed this disease in female patients who, as dial painters, came into contact with luminous paint whose composition was similar to Radiomir, a luminous material invented in 1914 consisting of a mixture of zinc sulfide and radium bromide. As they painted, they used their lips to form the tip of the phosphorus-laden brush into the desired pointed shape, and this is how the radioactive radium entered their bodies. In the U.S. and Canada alone, about 4,000 workers were affected over the years. In retrospect, the factory workers were called the Radium Girls. They also played with the paint, painting their fingernails, teeth and faces. This made them glow at night to the surprise of their companions. 
After Harrison Stanford Martland (1883-1954), chief medical examiner in Essex County, detected the radioactive noble gas radon (a decay product of radium) in the breath of the Radium Girls, he turned to Charles Norris (1867-1935) and Alexander Oscar Gettler (1883-1968). In 1928, Gettler was able to detect a high concentration of radium in the bones of Amelia Maggia, one of the young women, even five years after her death. In 1931, a method was developed for determining radium dosage using a film dosimeter. A standard preparation is irradiated through a hardwood cube onto an X-ray film, which is then blackened. For a long time, the cube minute was an important unit of radium dosage. It was calibrated by ionometric measurements. The radiologists Hermann Georg Holthusen (1886-1971) and Anna Hamann (1894-1969) found a calibration value of 0.045 r/min in 1932/1935. The calibration film receives a γ-ray dose of 0.045 r per minute through the wooden cube from the 13.33 mg preparation. In 1933, the physicist Robley D. Evans (1907-1995) made the first measurements of radon and radium in the excretions of female workers. On this basis, the National Bureau of Standards, the predecessor to the National Institute of Standards and Technology (NIST), set the limit for radium at 0.1 microcuries (about 3.7 kilobecquerels) in 1941. A "Radium Action Plan 2015-2019" aims to solve the problem of radiological contamination in Switzerland, mainly in the Jura Mountains, due to the use of radium luminous paint in the watch industry until the 1960s. In France, a line of cosmetics called Tho-Radia, which contained both thorium and radium, was created in 1932 and lasted until the 1960s. Other terrestrial radiation. Terrestrial radiation is the ubiquitous radiation on Earth caused by natural radionuclides in the soil, rocks, hydrosphere, and atmosphere that were formed billions of years ago by stellar nucleosynthesis and have not yet decayed because of their long half-lives. Natural radionuclides can be divided into cosmogenic and primordial nuclides. Cosmogenic nuclides do not contribute significantly to the terrestrial ambient radiation at the Earth's surface. The sources of terrestrial radiation are the natural radioactive nuclides found in the uppermost layers of the Earth, in the water and in the air. These include in particular 40potassium as well as the nuclides of the uranium and thorium decay series. Mining and extraction of fuels. According to the World Nuclear Association, coal from all deposits contains traces of various radioactive substances, particularly radon, uranium and thorium. These substances are released during coal mining, especially from surface mines, through power plant emissions, or power plant ash, and contribute to terrestrial radiation exposure through their exposure pathways. In December 2009, it was revealed that oil and gas production generates millions of tons of radioactive waste each year, much of which is improperly disposed of without detection, including 226Radium and 210Polonium. The specific activity of the waste ranges from 0.1 to 15,000 becquerels per gram. In Germany, according to the "Radiation Protection Ordinance" of 2001, the material is subject to monitoring at one becquerel per gram and would have to be disposed of separately. The implementation of this regulation has been left to the industry, which has disposed of the waste carelessly and improperly for decades. Building material. 
Every building material contains traces of natural radioactive substances, especially 238uranium, 232thorium, and their decay products, and 40potassium. Solidified and effusive rocks such as granite, tuff, and pumice have higher levels of radioactivity. In contrast, sand, gravel, limestone, and natural gypsum (calcium sulfate dihydrate) have low levels of radioactivity. The European Union's Activity Concentration Index (ACI), developed in 1999, can be used to assess radiation exposure from building materials. It replaces the Leningrad summation formula, which was used in 1971 in Leningrad (St. Petersburg) to determine how much radiation exposure from building materials is permissible for humans. The ACI is calculated from the sum of the weighted activities of 40potassium, 226radium, and 232thorium. The weighting takes into account the relative harmfulness to humans. According to official recommendations, building materials with a European ACI value greater than "1" should not be used in large quantities. Glazes. Uranium pigments are used to color ceramic tiles with uranium glazes (red, yellow, brown), where 2 mg of uranium per cm2 is allowed. Between 1900 and 1943, large quantities of uranium-containing ceramics were produced in the United States, as well as in Germany and Austria. It is estimated that between 1924 and 1943, 50-150 tons of uranium (V,VI) oxide were used annually in the U.S. to produce uranium-containing glazes. In 1943, the U.S. government imposed a ban on the civilian use of uranium-containing substances, which remained in effect until 1958. Beginning in 1958, the U.S. government, and in 1969 the United States Atomic Energy Commission, sold depleted uranium in the form of uranium(VI) fluoride for civilian use. In Germany, uranium-glazed ceramics were produced by the Rosenthal porcelain factory and were commercially available until the early 1980s. Uranium-glazed ceramics should only be used as collector's items and not for everyday use due to possible abrasion. ODL measurement network. The Federal Office for Radiation Protection's monitoring network measures natural radiation exposure through the local dose rate (ODL), expressed in microsieverts per hour (μSv/h).  In Germany, the natural ODL ranges from approximately 0.05 to 0.18 μSv/h, depending on local conditions. The ODL monitoring network has been operational since 1973 and currently comprises 1800 fixed, automatically operating measuring points. Its primary function is to provide early warning for the rapid detection of increased radiation from radioactive substances in the air in Germany. Spectroscopic probes have been successfully utilized since 2008 to determine the contribution of artificial radionuclides in addition to the local dose rate, showcasing the network's advanced capabilities. In addition to the ODL monitoring network of the Federal Office for Radiation Protection, there are other federal monitoring networks at the Federal Maritime and Hydrographic Agency and the Federal Institute of Hydrology, which measure gamma radiation in water; the German Meteorological Service measures air activity with aerosol samplers. To monitor nuclear facilities, the relevant federal states operate their own ODL monitoring networks. The data from these monitoring networks are automatically fed into the Integrated Measurement and Information System (IMIS), where they are used to analyze the current situation. Many countries operate their own ODL monitoring networks to protect the public. 
In Europe, the data from these national ODL monitoring networks are collected and published on the EURDEP platform of the European Atomic Energy Community. The European monitoring networks are based on Articles 35 and 37 of the Euratom Treaty. Radionuclides in medicine. Nuclear medicine is the use of open (unsealed) radionuclides for diagnostic and therapeutic purposes (radionuclide therapy). It also includes the use of other radioactive substances and nuclear physics techniques for functional and localization diagnostics. George de Hevesy (1885-1966), who lived as a lodger, suspected in 1923 that his landlady was serving him, the following week, leftovers of dishes he had not finished. He mixed a small amount of a radioactive isotope into the leftovers. When she served him the dish again a week later, he was able to detect radioactivity in a sample of the casserole. When he showed this to his landlady, she immediately gave him notice. The method he used made him the "father of nuclear medicine". It became known as the tracer method, which is still used today in nuclear medicine diagnostics: a small amount of a radioactive substance is administered, and its distribution in the organism and its path through the human body can be tracked externally. This provides information about various metabolic functions of the body. The continuous development of radionuclides has improved radiation protection. For example, the mercury compounds 203chloro-merodrin and 197chloro-merodrin were abandoned in the 1960s as substances were developed that allowed a higher photon yield with less radiation exposure. Beta emitters such as 131I and 90Y are used in radionuclide therapy. In nuclear medicine diagnostics, the beta+ emitters 18F, 11C, 13N, and 15O are used as radioactive markers for tracers in positron emission tomography (PET). Radiopharmaceuticals (isotope-labeled drugs) are being developed on an ongoing basis. Radiopharmaceutical residues, such as empty application syringes and contaminated residues from the patient's toilet, shower and washing water, are collected in tanks and stored until they can be safely pumped into the sewer system. The storage time depends on the half-life and ranges from a few weeks to a few months, depending on the radionuclide. Since 2001, by § 29 (in German) of the Radiation Protection Ordinance, the specific radioactivity in the waste containers has been recorded in release measuring stations and the release time is calculated automatically. This requires measurements of the sample activity in Bq/g and the surface contamination in Bq/cm2. In addition, the behavior of the patients after their discharge from the clinic is prescribed. To protect personnel, a range of equipment is used: syringe filling systems, borehole measurement stations for nuclide-specific measurement of low-activity, small-volume individual samples, lift systems into the measurement chamber to reduce radiation exposure when handling highly active samples, probe measurement stations, and ILP (isolated limb perfusion) measurement stations that monitor activity with one or more detectors during surgery and report leakage to the surgical oncologist. Radioiodine therapy. Radioiodine therapy (RIT) is a nuclear medicine procedure used to treat thyroid hyperfunction, Graves' disease, thyroid enlargement, and certain forms of thyroid cancer. The radioactive iodine isotope used is 131iodine, predominantly a beta emitter with a half-life of eight days, which in the human body is stored almost exclusively in thyroid cells.
In 1942, Saul Hertz (1905-1950) of the Massachusetts General Hospital and the physicist Arthur Roberts published their report on the first radioiodine therapy (1941) for Graves' disease, at that time still predominantly using the 130iodine isotope with a half-life of 12.4 hours. At the same time, Joseph Gilbert Hamilton (1907-1957) and John Hundale Lawrence (1904-1991) performed the first therapy with 131iodine, the isotope still used today. Radioiodine therapy is subject to special legal regulations in many countries and in Germany may only be performed on an inpatient basis. There are approximately 120 treatment centers in Germany (as of 2014), performing approximately 50,000 treatments per year. In Germany, the minimum length of stay is 48 hours. Discharge depends on the residual activity remaining in the body. In 1999, the limit for residual activity was raised. The dose rate may not exceed 3.5 μSv per hour at a distance of 2 meters from the patient, which means that a radiation exposure of 1 mSv may not be exceeded within one year at a distance of 2 meters. This corresponds to a residual activity of about 250 MBq. Similar regulations exist in Austria. In Switzerland, a maximum radiation exposure of 1 mSv per year and a maximum of 5 mSv per year for the patient's relatives may not be exceeded. After discharge following radioiodine therapy, a maximum dose rate of 5 μSv per hour at a distance of 1 meter is permitted, which corresponds to a residual activity of approximately 150 MBq. In the event of early discharge, the supervisory authority must be notified up to a dose rate of 17.5 μSv/h; above 17.5 μSv/h, permission must be obtained. If the patient is transferred to another ward, the responsible radiation protection officer must ensure that appropriate radiation protection measures are taken there, e.g. that a temporary control area is set up. Scintigraphy. Scintigraphy is a nuclear medicine procedure in which low-level radioactive substances are injected into the patient for diagnostic purposes. These include bone scintigraphy, thyroid scintigraphy, octreotide scintigraphy, and, as a further development of the procedure, single photon emission computed tomography (SPECT). For example, 201Tl thallium(I) chloride, technetium compounds (99mTc tracer, 99mtechnetium tetrofosmin) and PET tracers (with administered activities of 1100 MBq for 15O-water, 555 MBq for 13N-ammonia, or 1850 MBq for 82Rb rubidium chloride) are used in myocardial scintigraphy to diagnose blood flow conditions and function of the heart muscle (myocardium). The examination with 74 MBq 201thallium chloride causes a radiation exposure of about 16 mSv (effective dose equivalent), the examination with 740 MBq 99mtechnetium-MIBI about 7 mSv. Metastable 99mTc is by far the most important nuclide used as a tracer in scintigraphy because of its short half-life, the 140 keV gamma radiation it emits, and its ability to bind to many active biomolecules. Most of the radionuclide is excreted after the examination. The remaining 99mTc decays rapidly, with a half-life of 6 hours, to 99Tc. This has a long half-life of 212,000 years and, because of the relatively weak beta radiation released during its decay, contributes only a small amount of additional radiation exposure over the remaining lifetime. In the United States alone, approximately seven million individual doses of 99mTc are administered each year for diagnostic purposes.
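The discharge thresholds and half-lives discussed in the radioiodine and scintigraphy sections follow from the exponential decay law. The following minimal Python sketch is purely illustrative: the starting activities are invented round numbers, and biological excretion, which in practice removes 131I from the body considerably faster than physical decay alone, is ignored.

```python
import math

def residual_activity(a0_mbq, half_life_h, elapsed_h):
    """Activity (MBq) remaining after elapsed_h hours of purely physical decay."""
    return a0_mbq * 0.5 ** (elapsed_h / half_life_h)

def time_to_reach(a0_mbq, target_mbq, half_life_h):
    """Hours of purely physical decay needed to fall from a0_mbq to target_mbq."""
    return half_life_h * math.log2(a0_mbq / target_mbq)

# 99mTc, half-life about 6 h: roughly 6 % of the activity is left after 24 h.
print(residual_activity(100.0, 6.0, 24.0))              # 6.25 MBq from 100 MBq

# 131I, half-life about 8 d: physical decay alone from an assumed 2000 MBq down
# to the ~250 MBq discharge level quoted above takes three half-lives, ~24 days.
print(time_to_reach(2000.0, 250.0, 8.0 * 24.0) / 24.0)  # 24.0 days
```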
To reduce radiation exposure, the American Society of Nuclear Cardiology (ASNC) issued dosage recommendations in 2010. The effective dose is 2.4 mSv for 13N-ammonia, 2.5 mSv for 15O-water, 7 mSv for 18F-fluorodeoxyglucose, and 13.5 mSv for 82Rb-rubidium chloride (see the worked example below). Compliance with these recommendations is expected to reduce the average radiation exposure to no more than 9 mSv. The Ordinance on Radioactive Drugs or Drugs Treated with Ionizing Radiation § 2 (in German) regulates the approval procedures for the marketability of radioactive drugs. Brachytherapy. Brachytherapy is used to place a sealed radioactive source inside or near the body to treat cancer, such as prostate cancer. Afterloading brachytherapy is often combined with teletherapy, which is external radiation delivered from a greater distance than brachytherapy. It is not classified as a nuclear medicine procedure, although, like nuclear medicine, it uses the radiation emitted by radionuclides. After initial interest in brachytherapy in the early 20th century, its use declined in the mid-20th century because of the radiation exposure to physicians from manual handling of the radiation sources. It was not until the development of remote-controlled afterloading systems and the use of new radiation sources in the 1950s and 1960s that the risk of unnecessary radiation exposure to physicians and patients was reduced. In the afterloading procedure, an empty, tubular applicator is inserted into the target volume (e.g., the uterus) before the actual therapy and, after its position has been checked, loaded with a radioactive preparation. The preparation is located at the tip of a steel wire that is advanced and retracted step by step under computer control. After the pre-calculated time, the source is withdrawn into a safe and the applicator is removed. The procedure is used for breast cancer, bronchial carcinoma and oral floor carcinoma, among others. Beta emitters such as 90Sr or 106Ru are used, as well as 192Ir. As a precaution, patients undergoing permanent brachytherapy are advised not to hold small children immediately after treatment and not to be in the vicinity of pregnant women, since low-dose radioactive sources (seeds) remain in the body after treatment with permanent brachytherapy. This is to protect the particularly radiation-sensitive tissues of a fetus or infant. Thorium as a drug and X-ray contrast agent. Radioactive thorium was used in the 1950s and 60s to treat tuberculosis and other benign diseases, including in children, with serious consequences (see Peteosthor). A stabilized suspension of colloidal thorium(IV) oxide, co-developed by António Egas Moniz (1874-1954), was used from 1929 under the trade name Thorotrast as an X-ray contrast agent for angiography in several million patients worldwide until it was banned in the mid-1950s. It accumulates in the reticulohistiocytic system and, due to the locally increased radiation exposure, can lead to cancer, in particular to cholangiocarcinoma and angiosarcoma of the liver, two otherwise rare liver cancers. Carcinomas of the paranasal sinuses have also been described following administration of Thorotrast. The typical onset of disease is 30-35 years after exposure. The biological half-life of Thorotrast is approximately 400 years. The largest study in this area was conducted in Germany in 2004 and showed a particularly high mortality rate among patients exposed in this way. The median life expectancy over a seventy-year observation period was 14 years shorter than in the comparison group.
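The effective doses quoted above for nuclear medicine examinations are, in practice, estimated by multiplying the administered activity by a nuclide- and radiopharmaceutical-specific dose coefficient tabulated by the ICRP. As a rough cross-check, using an approximate literature value of about 0.019 mSv/MBq for 18F-fluorodeoxyglucose and an assumed administration of 370 MBq (neither figure is taken from this article):

$$ E \approx e \cdot A \approx 0.019\ \tfrac{\mathrm{mSv}}{\mathrm{MBq}} \times 370\ \mathrm{MBq} \approx 7\ \mathrm{mSv}, $$

which is consistent with the value given above for 18F-fluorodeoxyglucose.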
Nuclear weapons and nuclear energy. Radiation effects of the atomic bomb attack and consequences for radiation protection. After the U.S. atomic bombs were dropped on Hiroshima and Nagasaki on August 6 and 9, 1945, a further 130,000 people, in addition to the 100,000 immediate victims, died from the effects of radiation by the end of 1945. Some experienced the so-called walking ghost phase of acute radiation sickness, which follows a lethal whole-body equivalent dose of about 6 to 20 sievert. The phase describes the period of apparent recovery of a patient between the onset of the first massive symptoms and the inevitable death. In the years that followed, a number of deaths from radiation-induced diseases were added. In Japan, the radiation-damaged survivors are called "hibakusha" and are conservatively estimated to number about 100,000. In 1946, the Atomic Bomb Casualty Commission (ABCC) was established by the National Research Council of the National Academy of Sciences by order of U.S. President Harry S. Truman to study the long-term effects of radiation on survivors of the atomic bombings. In 1975, the ABCC was replaced by the Radiation Effects Research Foundation (RERF). Organizations such as the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), founded in 1955, and the Advisory Committee on the Biological Effects of Ionizing Radiation of the National Academy of Sciences (BEIR Committee), founded in 1972, analyze the effects of radiation exposure on humans on the basis of atomic bomb victims who have been examined and, in some cases, medically monitored for decades. They determine the course of the mortality rate as a function of the age of the radiation victims in comparison with the spontaneous rate, and also the dose dependency of the number of additional deaths. To date, 26 UNSCEAR reports have been published and are available online, most recently in 2017 on the effects of the Fukushima nuclear accident. By 1949, Americans felt increasingly threatened by the possibility of nuclear war with the Soviet Union and sought ways to survive a nuclear attack. The U.S. Federal Civil Defense Administration (USFCDA) was created by the government to educate the public on how to prepare for such an attack. In 1951, with the help of this agency, a children's educational film called Duck and Cover was produced in the U.S., in which a turtle demonstrates how to protect oneself from the immediate effects of an atomic bomb explosion by using a coat, tablecloths, or even a newspaper. Recognizing that existing medical capacity would not be sufficient in an emergency, dentists were called upon to either assist physicians in an emergency or, if necessary, to provide assistance themselves. To mobilize the profession with the help of a prominent representative, dentist Russell Welford Bunting (1881-1962), dean of the University of Michigan Dental School, was recruited in July 1951 as a dental consultant to the USFCDA. The American physicist Karl Ziegler Morgan (1907-1999) was one of the founders of radiation health physics. In later life, after a long career with the Manhattan Project and Oak Ridge National Laboratory (ORNL), he became a critic of nuclear power and nuclear weapons production. Morgan was Director of Health Physics at ORNL from the late 1940s until his retirement in 1972. In 1955, he became the first president of the Health Physics Society and served as editor of the journal Health Physics from 1955 to 1977.
Nuclear fallout shelters are designed to provide protection for an extended period. Because of the nature of nuclear warfare, such shelters must be completely self-sufficient for long periods; in particular, because of the radioactive contamination of the surrounding area, such a facility must be able to sustain its occupants for several weeks. In 1959, top-secret construction began in Germany on a government bunker in the Ahr valley. In June 1964, 144 test persons survived for six days in a civilian nuclear bunker. The bunker in Dortmund had been built during the Second World War and had been converted at great expense in the early 1960s into a building designed to withstand nuclear weapons. However, it would be impossible to build a bunker for millions of German citizens. The Swiss Army built about 7800 nuclear fallout shelters in 1964. In the United States in particular, but also in Europe, citizens built private fallout shelters in their front yards on their own initiative. This construction was largely kept secret because the owners feared that third parties might take possession of the bunker in the event of a crisis. Fallout and contamination. On July 16, 1945, the first atomic bomb test took place near the town of Alamogordo (New Mexico, USA). As a result of the atmospheric nuclear weapons tests carried out by the United States, the Soviet Union, France, Great Britain, and China, the Earth's atmosphere became increasingly contaminated with fission products from these tests from the 1950s onwards. The radioactive fallout landed on the earth's surface and ended up in plants and, via animal feed, in food of animal origin. Ultimately, these fission products entered the human body and could be detected in bones and teeth, for example as strontium-90. The radioactivity in the field was measured with a gamma scope, as shown at the "air raid equipment exhibition" in Bad Godesberg in 1954. Around 180 tests were carried out in 1962 alone. The extent of the radioactive contamination of food sparked worldwide protests in the early 1960s. During World War II and the Cold War, the Hanford Site produced plutonium for U.S. nuclear weapons for more than 50 years. The plutonium for the first plutonium bomb, Fat Man, also came from there. Hanford is considered the most radioactively contaminated site in the Western Hemisphere. A total of 110,000 tons of nuclear fuel was produced there. In 1948, a radioactive cloud leaked from the plant; the amount of 131I alone was 5500 curies. Most of the reactors at Hanford were shut down in the 1960s, but no disposal or decontamination was done. After preliminary work, the world's largest decontamination operation began at Hanford in 2001 to safely dispose of the radioactive and toxic waste. In 2006, some 11,000 workers were still cleaning up contaminated buildings and soil to reduce radiation levels at the site to acceptable levels. This work is expected to continue until 2052. It is estimated that more than four million liters of radioactive liquid have leaked from storage tanks. It was only after the two superpowers agreed on a Partial Test Ban Treaty in 1963, which allowed only underground nuclear weapons testing, that the level of radioactivity in food began to decline. Shields Warren (1896-1980), one of the authors of a report on the effects of the atomic bombs dropped on Japan, was criticized for downplaying the effects of residual radiation in Hiroshima and Nagasaki, but later warned of the dangers of fallout. Fallout refers to the spread of radioactivity as determined by the prevailing meteorological situation.
A model experiment was conducted in 2008. The International Campaign to Abolish Nuclear Weapons (ICAN) is an international alliance of non-governmental organizations committed to the elimination of all nuclear weapons through a binding international treaty - a Nuclear Weapons Convention. ICAN was founded in 2007 by IPPNW (International Physicians for the Prevention of Nuclear War) and other organizations at the Nuclear Non-Proliferation Treaty Conference in Vienna and launched in twelve countries. Today, 468 organizations in 101 countries are involved in the campaign (as of 2017). ICAN was awarded the 2017 Nobel Peace Prize. Radioprotectors. A radioprotector is a pharmacon that, when administered, selectively protects healthy cells from the toxic effects of ionizing radiation. The first work with radioprotectors began as part of the Manhattan Project, a military research project to develop and build an atomic bomb. Iodine absorbed by the body is almost completely stored in the thyroid gland and has a biological half-life of about 120 days. If the iodine is radioactive (131I), it can irradiate and damage the thyroid gland in high doses during this time. Because the thyroid gland can only absorb a limited amount of iodine, prophylactic administration of non-radioactive iodine may result in iodine blockade. Potassium iodide in tablet form (colloquially known as "iodine tablets") reduces the uptake of radioactive iodine into the thyroid by a factor of 90 or more, thus acting as a radioprotector. All other radiation damage remains unaffected by taking iodine tablets. In Germany, the Potassium Iodide Ordinance (KIV) was enacted in 2003 to ensure "the supply of the population with potassium iodide-containing medicines in the event of radiological incidents". § 1 (in German) Potassium iodide is usually stored in communities near nuclear facilities for distribution to the population in the event of a disaster. People over the age of 45 should not take iodine tablets because the risk of side effects is higher than the risk of developing thyroid cancer. In Switzerland, as a precautionary measure, tablets have been distributed every five years since 2004 to the population living within 20 km of nuclear power plants (from 2014, 50 km). In Austria, large stocks of iodine tablets have been kept in pharmacies, kindergartens, schools, the army and the "federal reserve" since 2002. Thanks to the protective function of radioprotectors, the dose of radiation used to treat malignant tumors (cancer) can be increased, thereby increasing the effectiveness of the therapy. There are also radiosensitizers, which increase the sensitivity of malignant tumor cells to ionizing radiation. As early as 1921, the German radiologist Hermann Holthusen (1886-1971) described that oxygen increases the sensitivity of cells. Nuclear accidents and catastrophes. Founded in 1957 as a sub-organization of the Organization for Economic Cooperation and Development (OECD), the Nuclear Energy Agency (NEA) pools the scientific and financial resources of participating countries' nuclear research programs. It operates various databases and also manages the International Reporting System for Operating Experience (IRS or IAEA/NEA Incident Reporting System) of the International Atomic Energy Agency (IAEA). The IAEA records and investigates radiation accidents that have occurred worldwide in connection with nuclear medical procedures and the disposal of related materials. 
The International Nuclear and Radiological Event Scale (INES) is a scale for safety-related events, in particular nuclear incidents and accidents in nuclear facilities. It was developed by an international group of experts and officially adopted in 1990 by the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The purpose of the scale is to inform the public quickly about the safety significance of an event by means of a comprehensible classification of events. At the end of the useful life of a strong radiation source, the proper disposal of its remaining high activity is of paramount importance. Improper disposal of the radionuclide cobalt-60, used in cobalt guns for radiotherapy, has led to serious radiation accidents, such as the Ciudad Juárez (Mexico) radiological accident in 1983/84, the Goiânia (Brazil) accident in 1987, the Samut Prakan (Thailand) nuclear accident in 2000, and the Mayapuri (India) accident in 2010. Eleven Therac-25 linear accelerators were built by the Canadian company Atomic Energy of Canada Limited (AECL) between 1982 and 1985 and installed in clinics in the United States and Canada. Software errors and a lack of quality assurance led to serious malfunctions that killed three patients and seriously injured three others between June 1985 and 1987 before appropriate countermeasures were taken. The radiation exposure in the six cases was subsequently estimated to be between 40 and 200 gray; normal treatment is equivalent to a dose of less than 2 gray. Around 1990, about one hundred cobalt guns were still in use in Germany. In the meantime, electron linear accelerators were introduced, and the last cobalt gun was decommissioned in 2000. The Fukushima nuclear accident in 2011 reinforced the need for proper safety management and the derivation of safety indicators regarding the frequency of errors and incorrect actions by personnel, i.e., the human factor. The Nuclear Safety Commission of Japan was a body of scientists that advised the Japanese government on nuclear safety issues. The commission was established in 1978 but was dissolved after the Fukushima nuclear disaster on September 19, 2012, and replaced by the "Genshiryoku Kisei Iinkai" (Nuclear Regulation Authority). It is an independent agency ("gaikyoku", "external office") of the Japanese Ministry of the Environment that regulates and monitors the safety of Japan's nuclear power plants and related facilities. As a result of the Chernobyl nuclear disaster in 1986, the IAEA coined the term "safety culture" for the first time in 1991 to draw attention to the importance of human and organizational issues for the safe operation of nuclear power plants. After this nuclear disaster, the sand in children's playgrounds in Germany was removed and replaced with uncontaminated sand to protect children, who are particularly vulnerable to radioactivity. Some families temporarily left Germany to escape the fallout. Infant mortality increased significantly, by 5%, in 1987, the year after Chernobyl; in total, 316 more newborns died that year than statistically expected. In Germany, the 137caesium inventories from the Chernobyl nuclear disaster in soil and food decrease by 2-3% each year; however, the contamination of game and mushrooms was still comparatively high in 2015, especially in Bavaria, and there are several cases of game meat, especially wild boar, exceeding the limits. However, controls are insufficient.
"In particular, wild boars in southern Bavaria are repeatedly found to have a very high radioactive contamination of over 10,000 Becquerel/kg. The limit is 600 Becquerel/kg. For this reason, the Bavarian Consumer Center advises against eating wild boar from the Bavarian Forest and south of the Danube too often. Whoever buys wild boar from a hunter should ask for the measurement protocol." Ocean dumping of radioactive waste. Between 1969 and 1982, conditioned low- and intermediate-level radioactive waste was disposed of in the Atlantic Ocean at a depth of about 4,000 meters under the supervision of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) in accordance with the provisions of the European Convention on the Prevention of Marine Pollution by Dumping of Waste of All Kinds (London Dumping Convention of June 11, 1974). This was carried out jointly by several European countries. Since 1993, international treaties have prohibited the dumping of radioactive waste in the oceans. For decades, this dumping of nuclear waste went largely unnoticed by the public until Greenpeace denounced it in the 1980s. Repository for heat-generating radioactive waste. Since the commissioning of the first commercial nuclear power plants (USA 1956, Germany 1962), various final storage concepts for radioactive materials have been proposed in the following decades, of which only storage in deep geological formations appeared to be safe and feasible within a reasonable period of time and was pursued further. Due to the high activity of the short-lived fission products, spent fuel is initially handled only under water and stored for several years in a decay pool. The water is used for cooling and also shields much of the emitted radiation. This is followed either by reprocessing or by decades of interim storage. Waste from reprocessing must also be stored temporarily until the heat has decreased enough to allow final disposal. Casks are special containers for the storage and transport of highly radioactive materials. Their maximum permissible dose rate is 0.35 mSv/h, of which a maximum of 0.25 mSv/h is due to neutron radiation. The safety of these transport containers has been discussed every three years since 1980 at the International Symposium on the Packaging and Transportation of Radioactive Materials (PATRAM). Following various experiments, such as the Gorleben exploratory mine or the Asse mine, a working group on the selection procedure for repository sites (AkEnd) developed recommendations for a new selection procedure for repository sites between 1999 and 2002. In Germany, the "Site Selection Act" was passed in 2013 and the "Act on the Further Development of the Site Search" was passed on March 23, 2017. A suitable site is to be sought throughout Germany and identified by 2031. In principle, crystalline (granite), salt or clay rock types can be considered for a repository. There will be no "ideal" site. The "best possible" site will be sought. Mining areas and regions where volcanoes have been active or where there is a risk of earthquakes are excluded. Internationally, experts are advocating storage in rock formations several hundred meters below the earth's surface. This involves building a repository mine and storing the waste there. It is then permanently sealed. Geological and technical barriers surrounding the waste are designed to keep it safe for thousands of years.
For example, 300 meters of rock will separate the repository from the earth's surface. It will be surrounded by a 100-meter-thick layer of granite, salt or clay. The first waste is not expected to be stored until 2050. The Federal Office for the Safety of Nuclear Waste Management (BfE) took up its activities on September 1, 2014. Its remit includes tasks relating to nuclear safety, the safety of nuclear waste management, the site selection procedure including research activities in these areas and, later on, further tasks in the area of licensing and supervision of repositories. In the USA, Yucca Mountain was initially selected as the final storage site, but this project was temporarily halted in February 2009. Yucca Mountain was the starting point for an investigation into atomic semiotics. Atomic semiotics. The operation of nuclear power plants and other nuclear facilities produces radioactive materials that can have lethal health effects for thousands of years. However, no institution is capable of maintaining the necessary knowledge of the dangers over such periods, or of ensuring that warnings about the dangers of nuclear waste in nuclear repositories will be understood by posterity in the distant future. Even appropriately labeled capsules of the radionuclide cobalt-60 have gone unnoticed in the recent past; improper disposal led to the opening of such capsules, with fatal consequences. The dimensions of time involved exceed previous human standards. For instance, cuneiform writing, which is only about 5000 years old (about 150 human generations), can only be understood after a long period of research and by experts. In 1981, research into the development of atomic semiotics began in the USA; in the German-speaking world, Roland Posner (1942-2020) of the Center for Semiotics at Technische Universität Berlin worked on this in 1982/83. In the USA, the time horizon for such warning signs was set at 10,000 years; later, as in Germany, it was set at a period of one million years, which would correspond to about 30,000 (human) generations. To date, no satisfactory solution to this problem has been found. Radiation protection during flights. High-altitude radiation. In 1912, Victor Franz Hess (1883-1964) discovered (secondary) cosmic rays in the Earth's atmosphere using balloon flights. For this discovery he received the Nobel Prize in Physics in 1936. He was also one of the "martyrs" of early radiation research and had to undergo a thumb amputation and larynx surgery due to radium burns. In the United States and the Soviet Union, balloon flights to altitudes of about 30 km, followed by parachute jumps from the stratosphere, were conducted before 1960 to study human exposure to cosmic radiation in space. The American Manhigh and Excelsior projects with Joseph Kittinger (1928-2022) became particularly well known, but the Soviet parachutist Yevgeny Andreyev (1926-2000) also set new records. High-energy radiation from space is much stronger at high altitudes than at sea level. The radiation exposure of flight crews and air travelers is therefore increased. The International Commission on Radiological Protection (ICRP) has issued recommendations for dose limits, which were incorporated into European law in 1996 and into the German Radiation Protection Ordinance in 2001. Radiation exposure is particularly high when flying in the polar regions or over the polar route.
The average annual effective dose for aviation personnel was 1.9 mSv in 2015 and 2.0 mSv in 2016. The highest annual personal dose was 5.7 mSv in 2015 and 6.0 mSv in 2016. The collective dose for 2015 was about 76 person-Sv. This means that flight personnel are among the occupational groups in Germany with the highest radiation exposure in terms of collective dose and average annual dose. Highly exposed individuals also include frequent flyers; Thomas Stuker holds the "record", also in terms of radiation exposure, having reached the 10 million mile mark with United Airlines MileagePlus on 5,900 flights between 1982 and the summer of 2011. In 2017, he passed the 18 million mile mark. The program EPCARD (European Program Package for the Calculation of Aviation Route Dose) was developed at the University of Siegen and the Helmholtz Zentrum München and can be used, including online, to calculate the dose from all components of natural penetrating cosmic radiation on any flight route and flight profile. Radiation protection in space. From the earliest crewed space flights to the first moon landing and the construction of the International Space Station (ISS), radiation protection has been a major concern. Spacesuits used for extravehicular activities are coated on the outside with aluminum, which largely protects against cosmic radiation. The largest international research project to determine the effective dose or effective dose equivalent was the Matroshka experiment in 2010, named after the Russian matryoshka dolls because it uses a human-sized phantom that can be cut into slices. As part of Matroshka, an anthropomorphic phantom was exposed to the outside of the space station for the first time to simulate an astronaut performing an extravehicular activity (spacewalk) and determine their exposure to radiation. Microelectronics on satellites must also be protected from radiation. Japanese scientists from the Japan Aerospace Exploration Agency (JAXA) have discovered a huge cave on the moon with their Kaguya lunar probe, which could offer astronauts protection from dangerous radiation during future lunar landings, especially during the planned stopover of a Mars mission. As part of a human mission to Mars, astronauts must be protected from cosmic radiation. During Curiosity's mission to Mars, a Radiation Assessment Detector (RAD) was used to measure radiation exposure. The radiation exposure of 1.8 millisieverts per day was mainly due to the constant presence of high-energy galactic particle radiation. In contrast, radiation from the sun accounted for only about three to five percent of the radiation levels measured during Curiosity's flight to Mars. On the way to Mars, the RAD instrument detected a total of five major radiation events caused by solar flares. One concept to protect the astronauts is to surround the spacecraft with a plasma bubble as an energy shield whose magnetic field would shield the crew from cosmic radiation. This would eliminate the need for conventional radiation shields, which are several centimeters thick and correspondingly heavy. In the Space Radiation Superconducting Shield (SR2S) project, which was completed in December 2015, magnesium diboride was found to be a suitable superconducting material for generating such a protective magnetic field. Development of metrological principles of radiation protection. Dosimeter. Dosimeters are instruments used to measure radiation dose, as absorbed dose or dose equivalent, and are an important cornerstone of radiation protection. Film dosimeter.
At the October 1907 meeting of the American Roentgen Ray Society, Rome Vernon Wagner, an X-ray tube manufacturer, reported that he had begun carrying a photographic plate in his pocket and developing it every evening. This allowed him to determine how much radiation he had been exposed to. This was the forerunner of the film dosimeter. His efforts came too late: he had already developed cancer and died six months after the conference. In the 1920s, the physical chemist John Eggert (1891-1973) played a key role in the introduction of film dosimetry for routine personal monitoring. Since then, it has been successively improved; in particular, the evaluation technique has been automated since the 1960s. In the same period, Hermann Joseph Muller (1890-1967) discovered mutations as genetic consequences of X-rays, for which he was awarded the Nobel Prize in 1946, and the roentgen (R) was introduced as a unit for the quantitative measurement of radiation exposure. A film dosimeter is divided into multiple segments, each containing a light- and radiation-sensitive film behind copper and lead filters of varying thickness. Depending on how strongly the radiation penetrates these filters, the film behind each segment is blackened to a greater or lesser degree, or not at all. The radiation absorbed during the measurement period is summed up in this way, and the radiation dose can be determined from the blackening. Guidelines for the evaluation exist; those for Germany were published in 1994 and last updated on December 8, 2003. Particle and quantum detectors. With the invention of the Geiger gaseous ionization detector in 1913, which became the Geiger-Müller gaseous ionization detector (counter tube) in 1928, named after the physicists Hans Geiger (1882-1945) and Walther Müller (1905-1979), individual particles or quanta of ionizing radiation could be detected and measured. Detectors developed later, such as proportional counters or scintillation counters, which not only "count" but also measure energy and distinguish between types of radiation, also became important for radiation protection. Scintillation measurement is one of the oldest methods of detecting ionizing radiation or X-rays; originally, a zinc sulfide screen was held in the path of the beam and the scintillation events were either counted as flashes or, in the case of X-ray diagnostics, viewed as an image. A scintillation counter known as a spinthariscope was developed in 1903 by William Crookes (1832-1919) and used by Ernest Rutherford (1871-1937) to study the scattering of alpha particles from atomic nuclei. Thermoluminescence dosimeter. Lithium fluoride had already been proposed in the USA in 1950 by Farrington Daniels (1889-1972), Charles A. Boyd and Donald F. Saunders (1924-2013) for solid-state dosimetry using thermoluminescent dosimeters. The intensity of the thermoluminescent light is proportional to the amount of radiation previously absorbed. This type of dosimetry has been used since 1953 in the treatment of cancer patients and wherever people are occupationally exposed to radiation. The thermoluminescence dosimeter was followed by OSL dosimetry, which is based not on heat but on optically stimulated luminescence and was developed by Zenobia Jacobs and Richard Roberts at the University of Wollongong (Australia). The detector emits the stored energy as light. The light output, measured with photomultipliers, is then a measure of the dose. Whole body counter.
Since 2003, whole-body counters have been used in radiation protection to monitor the absorption (incorporation) of radionuclides in people who handle gamma-emitting open radioactive materials and who may be contaminated through food, inhalation of dusts and gases, or open wounds. (α and β emitters are not measurable). Test specimen. Constancy testing is the verification of reference values as part of quality assurance in x-ray diagnostics, nuclear medicine diagnostics, and radiotherapy. National regulations specify which parameters are to be tested, which limits are to be observed, which test methods are to be used, and which test samples are to be used. In Germany, the Radiation Protection in Medicine Directive and the relevant DIN 6855 standard in nuclear medicine require regular (in some cases daily) constancy testing. Test sources are used to check the response of probe measuring stations as well as in vivo and in vitro measuring stations. Before starting the tests, the background count rate and the setting of the energy window must be checked every working day, and the settings and the yield with reproducible geometry must be checked at least once a week with a suitable test source, e.g. 137Caesium (DIN 6855-1). The reference values for the constancy test are determined during the acceptance test. Compact test specimens for medical X-ray images were not created until 1982. Prior to this, the patient himself often served as the object for producing X-ray test images. Prototypes of such an X-ray phantom with integrated structures were developed by Thomas Bronder at the "Physikalisch-Technische Bundesanstalt." A water phantom is a Plexiglas container filled with distilled water that is used as a substitute for living tissue to test electron linear accelerators used in radiation therapy. According to regulatory requirements, water phantom testing must be performed approximately every three months to ensure that the radiation dose delivered by the treatment system is consistent with the radiation planning. The Alderson-Rando phantom, invented by Samuel W. Alderson (1914-2005), has become the standard X-ray phantom. It was followed by the Alderson Radio Therapy (ART) phantom, which he patented in 1967. The ART phantom is cut horizontally into 2.5 cm thick slices. Each slice has holes sealed with bone-equivalent, soft-tissue-equivalent, or lung-equivalent pins that can be replaced by thermoluminescent dosimeters. Alderson is also known as the inventor of the crash test dummy. Dose reconstruction with ESR spectroscopy of deciduous teeth. As a result of accidents or improper use and disposal of radiation sources, a significant number of people are exposed to varying degrees of radiation. Radioactivity and local dose measurements are not sufficient to fully assess the effects of radiation. To retrospectively determine the individual radiation dose, measurements are made on teeth, i.e. on biological, endogenous materials. Tooth enamel is particularly suitable for the detection of ionizing radiation due to its high mineral content (hydroxyapatite), which has been known since 1968 thanks to the research of John M. Brady, Norman O. Aarestad and Harold M. Swartz. The measurements are performed on milk teeth, preferably molars, using electron paramagnetic resonance spectroscopy (ESR, EPR). The concentration of radicals generated by ionizing radiation is measured in the mineral part of the tooth. Due to the high stability of the radicals, this method can be used for dosimetry of long past exposures. 
Dose reconstruction using biological dosimetry. Since about 1988, in addition to physical dosimetry, biological dosimetry has made it possible to reconstruct the individual dose of ionizing radiation. This is especially important for unforeseen and accidental exposures, where radiation exposures occur without physical dose monitoring. Biological markers, particularly cytogenetic markers in blood lymphocytes, are used for this purpose. Techniques for detecting radiation damage include analyzing dicentric chromosomes after acute radiation exposure. Dicentric chromosomes result from the defective repair of breaks in two chromosomes and carry two centromeres instead of the single centromere of an undamaged chromosome. Symmetric translocations, detected through fluorescence in situ hybridization (FISH), are used after chronic or long-term exposure to radiation. The micronucleus test and the premature chromosome condensation (PCC) test are available to measure acute exposure. Measured variables and units. In principle, reducing the exposure of the human organism to ionizing radiation to zero is not possible and perhaps not even sensible. The human organism has been accustomed to natural radioactivity for thousands of years, and ultimately this also triggers mutations (changes in genetic material), which are the cause of the development of life on earth. The mutation-inducing effect of high-energy radiation was first demonstrated in 1927 by Hermann Joseph Muller (1890-1967). Three years after its establishment, in 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation adopted the Linear No-Threshold (LNT) model, a linear dose-effect relationship without a threshold, largely at the instigation of the Soviet Union. The dose-response relationship measured at high doses was extrapolated linearly to low doses. There would be no threshold, since even the smallest amounts of ionizing radiation would trigger some biological effect. The LNT model ignores not only possible radiation hormesis, but also the known ability of cells to repair genetic damage and the ability of the organism to remove damaged cells. Between 1963 and 1969, John W. Gofman (1918-2007) and Arthur R. Tamplin of the University of California, Berkeley, conducted research for the United States Atomic Energy Commission (USAEC, 1946-1974) investigating the relationship between radiation doses and cancer incidence. Their findings sparked a fierce controversy in the United States beginning in 1969. Starting in 1970, Ernest J. Sternglass, a radiologist at the University of Pittsburgh, published several studies describing the effect of radiation from nuclear tests and the vicinity of nuclear power plants on infant mortality. In 1971, the USAEC reduced the maximum allowable radiation dose by a factor of 100. Subsequently, nuclear technology was based on the principle of "As Low As Reasonably Achievable" (ALARA). This was a coherent principle as long as it was assumed that there was no threshold and that all doses were additive. In the meantime, a transition to "As High As Reasonably Safe" (AHARS) is increasingly being discussed. For the question of evacuation after accidents, a transition to AHARS seems absolutely necessary. In both the Chernobyl and Fukushima cases, hasty, poorly organized and poorly communicated evacuations caused psychological and physical damage to those affected, including documented deaths in the case of Fukushima.
By some estimates, this damage is greater than would have been expected had the evacuation not taken place. Voices such as Geraldine Thomas therefore question such evacuations in principle and call for a transition to shelter-in-place wherever possible. Absorbed dose and dose equivalent. The British physicist and radiologist Louis Harold Gray (1905-1965), a founder of radiobiology, introduced the unit rad (acronym for radiation absorbed dose) in the 1930s, which was renamed gray (Gy) after him in 1978. One gray is a mass-specific quantity and corresponds to an energy of one joule absorbed per kilogram of mass. Acute whole-body exposures in excess of four Gy are usually fatal to humans. The different types of radiation ionize to different degrees. Ionization is any process in which one or more electrons are removed from an atom or molecule, leaving the atom or molecule as a positively charged ion (cation). Each type of radiation is therefore assigned a dimensionless weighting factor that expresses its biological effectiveness. For X-rays, gamma and beta radiation the factor is one, alpha radiation reaches a factor of twenty, and for neutron radiation it is between five and twenty, depending on the energy. Multiplying the absorbed dose in Gy by the weighting factor gives the equivalent dose, expressed in sievert (Sv). It is named after the Swedish physician and physicist Rolf Maximilian Sievert (1896-1966). Sievert was the founder of radiation protection research and developed the Sievert chamber in 1929 to measure the intensity of X-rays. He founded the International Commission on Radiation Units and Measurements (ICRU) and later became chairman of the International Commission on Radiological Protection (ICRP). The ICRU and ICRP specify differently defined weighting factors that apply to environmental measurements (quality factor) and body-related dose equivalent data (radiation weighting factor). In relation to the body, the relevant dose term is the organ equivalent dose (formerly "organ dose"). This is the dose equivalent averaged over an organ. Multiplied by organ-specific tissue weighting factors and summed over all organs, it yields the effective dose, which represents a dose balance for the whole body. In relation to environmental measurements, the ambient dose equivalent or local dose is relevant. Its increase per unit time is called the local dose rate. Even at very low effective doses, stochastic effects (genetic and cancer risk) are expected. At effective doses above 0.1 Sv, deterministic effects also occur (tissue damage up to radiation sickness at very high doses). Correspondingly high radiation doses are now only given in units of Gy. Natural radiation exposure in Germany, with an annual average effective dose of about 0.002 Sv, is well below this range. Tolerance dose. In 1931, the U.S. Advisory Committee on X-Ray and Radium Protection (ACXRP, now the National Council on Radiation Protection and Measurements, NCRP), founded in 1929, published the results of a study on the so-called tolerance dose, which formed the basis of a scientifically grounded radiation protection guideline. Exposure limits were gradually lowered. In 1936 the tolerance dose was 0.1 R/day. The unit "R" (the roentgen) from the CGS unit system has been obsolete since the end of 1985. Since then, the SI unit of ion dose has been "coulomb per kilogram". Relative biological effectiveness.
After World War II, the concept of tolerance dose was replaced by that of maximum permissible dose, and the concept of relative biological effectiveness was introduced. The limit was set in 1956 by the National Council on Radiation Protection & Measurements (NCRP) and the International Commission on Radiological Protection (ICRP) at 5 rem (50 mSv) per year for radiation workers and 0.5 rem per year for the general population. The unit rem as a physical measure of radiation dose (from the English roentgen equivalent in man) was replaced by the unit Sv (sievert) in 1978. This development was driven by the advent of nuclear energy and its associated dangers. Prior to 1991, the equivalent dose was used both as a measure of dose and as a term for the body dose that determines the course and survival of radiation sickness. ICRP Publication 60 then introduced the radiation weighting factor formula_0. For examples of equivalent doses as body doses, see Banana equivalent dose. The origin of the concept of using a banana equivalent dose (BED) as a benchmark is unknown. In 1995, Gary Mansfield of the Lawrence Livermore National Laboratory found the banana equivalent dose (BED) to be very useful in explaining radiation risks to the public. It is not a formally used dose. The banana equivalent dose is the dose of ionizing radiation to which a person is exposed by eating one banana. Bananas contain potassium. Natural potassium consists of 0.0117% of the radioactive isotope 40K (potassium-40) and has a specific activity of 30,346 becquerels per kilogram, or about 30 becquerels per gram. The radiation dose from eating a banana is about 0.1 μSv. The value of this reference dose is set to "1" and thus becomes the "unit of measurement" banana equivalent dose. Consequently, other radiation exposures can be compared to the consumption of one banana. For example, the average daily total radiation exposure of a person is 100 banana equivalent doses. At 0.17 mSv per year, almost 10 percent of natural radioactive exposure in Germany (an average of 2.1 mSv per year) is caused by the body's own (vital) potassium. The banana equivalent dose does not take into account the fact that no radioactive nuclide is accumulated in the body through the consumption of potassium-containing foods: the potassium content of the body is in homeostasis and is kept constant. Disregard of radiation protection. Unethical radiation experiments. The Trinity test was the first nuclear weapon explosion conducted as part of the US Manhattan Project. There were no warnings to residents about the fallout, nor information about shelters or possible evacuations. This was followed in 1946 by tests in the Marshall Islands (Operation Crossroads), as recounted by chemist Harold Carpenter Hodge (1904-1990), toxicologist for the Manhattan Project, in his 1947 lecture as president of the International Association for Dental Research. Hodge's reputation was severely damaged by historian Eileen Welsome's 1999 Pulitzer Prize-winning book The Plutonium Files - America's Secret Medical Experiments in the Cold War. She documents horrific human experiments, in which researchers (including Hodge) used subjects who did not know that they were serving as "guinea pigs" to test the safety limits of uranium and plutonium. The experiments on the unsuspecting subjects were continued by the United States Atomic Energy Commission (AEC) into the 1970s. The abuse of radiation continues to this day.
During the Cold War, ethically reprehensible radiation experiments were conducted in the United States on untrained human subjects to determine the detailed effects of radiation on human health. Between 1945 and 1947, 18 people were injected with plutonium by Manhattan Project doctors. In Nashville, pregnant women were given radioactive mixtures. In Cincinnati, about 200 patients were irradiated over a 15-year period. In Chicago, 102 people received injections of strontium and caesium solutions. In Massachusetts, 57 children with developmental disorders were given oatmeal with radioactive markers. These radiation experiments were not stopped until 1993 under President Bill Clinton. But the injustice committed was not atoned for. For years, uranium hexafluoride caused radiation damage at a DuPont Company plant and to local residents. At times, the plant even deliberately released uranium hexafluoride in its heated gaseous state into the surrounding area to study the effects of the radioactive and chemically aggressive gas. Stasi border controls. Between 1978 and 1989, vehicles were checked with 137Cs gamma sources at 17 border crossings between the German Democratic Republic and the Federal Republic of Germany. According to the Transit Agreement, vehicles could only be screened if there was reasonable suspicion. For this reason, the Ministry for State Security (Stasi) installed and operated a secret radioactive screening technology, codenamed "Technik V," which was generally used to screen all transit passengers to detect "deserters from the Republic." Ordinary GDR customs officers were unaware of the secret radioactive screening technology and were subject to strict "entry regulations" designed to "protect" them as much as possible from radiation exposure. Lieutenant General Heinz Fiedler (1929-1993), as the highest ranking border guard of the MfS, was responsible for all radiation controls. On February 17, 1995, the Radiation Protection Commission published a statement in which it said: "Even if we assume that individual persons stopped more frequently in the radiation field and that a fluoroscopy lasting up to three minutes increases the annual radiation exposure by one to a few mSv, this does not result in a dose that is harmful to health". In contrast, the designer of this type of border control calculated 15 nSv per crossing. Lorenz of the former State Office for Radiation Protection and Nuclear Safety of the GDR came up with a dose estimate of 1000 nSv, which was corrected to 50 nSv a few weeks later. Radar systems. Radar equipment is used at airports, in airplanes, at missile sites, on tanks, and on ships. The radar technology commonly used in the 20th century produced X-rays as a technically unavoidable by-product in the high-voltage electronics of the equipment. In the 1960s and 1970s, German soldiers and technicians were largely unaware of the dangers, as were those in the GDR's National People's Army. The problem had been known internationally since the 1950s, and to the German Armed Forces since at least 1958. However, no radiation protection measures were taken, such as the wearing of lead aprons. Until about the mid-1980s, radiation shielding was inadequate, especially for pulse switch tubes. Particularly affected were maintenance technicians (radar mechanics) who were exposed to the X-ray generating parts for hours without any protection. The permissible annual limit value could be exceeded after just 3 minutes. 
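To put the last figure into perspective: if one takes the occupational limit of 50 mSv per year mentioned above as the reference (an assumption for illustration, since the applicable limit varied over the period in question), exceeding it within 3 minutes implies a local dose rate at the unshielded components of roughly

$$ \frac{50\ \mathrm{mSv}}{3\ \mathrm{min}} \approx 17\ \tfrac{\mathrm{mSv}}{\mathrm{min}} \approx 1\ \tfrac{\mathrm{Sv}}{\mathrm{h}}, $$

orders of magnitude above the natural local dose rate of about 0.05 to 0.18 μSv/h cited earlier.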
It was not until 1976 that warning notices were put up and protective measures taken in the German Navy, and not until the early 1980s in general. As late as the 1990s, the German Armed Forces denied any connection between radar equipment and cancer or genetic damage. The number of victims amounted to several thousand. The connection was later acknowledged by the German Armed Forces, and in many cases a supplementary pension was paid. In 2012, a foundation was set up to provide unbureaucratic compensation for the victims. Radiation protection crimes. National Socialism. During the National Socialist era, the harmful effects of X-rays were deliberately exploited: the function of the gonads (ovaries or testicles) was destroyed by ionizing radiation, causing infertility. In July 1942, Heinrich Himmler (1900-1945) decided to conduct forced sterilization experiments at the Auschwitz-Birkenau concentration camp, which were carried out by Horst Schumann (1906-1983), previously a doctor in Aktion T4. Each test victim had to stand between two X-ray machines, which were arranged in such a way that the test victim had just enough space between them. Opposite the X-ray machines was a booth with lead walls and a small window. From the booth, Schumann could direct X-rays at the test victims' sexual organs without endangering himself. Human radiation castration experiments were also conducted in concentration camps under the direction of Viktor Brack (1904-1948). Under the "Law for the Prevention of Hereditarily Diseased Offspring", people were often subjected to radiation castration during interrogations without their knowledge. Approximately 150 radiologists from hospitals throughout Germany participated in the forced castration of approximately 7,200 people using X-rays or radium. Polonium murder. On November 23, 2006, Alexander Alexanderovich Litvinenko (1962-2006) was murdered, under circumstances never fully clarified, by means of radiation sickness caused by polonium. Polonium poisoning was also briefly suspected in the case of Yasser Arafat (1929-2004), who died in 2004. Radiation offenses. The misuse of ionizing radiation is a radiation offence under German criminal law. The use of ionizing radiation to harm persons or property is punishable. Since 1998, the regulations can be found in § 309 (in German) (previously § 311a StGB old version); the regulations go back to § 41 AtG old version. In the Austrian Criminal Code, the relevant criminal offenses are defined in the seventh section, "Criminal acts dangerous to the public" and "Criminal acts against the environment". In Switzerland, endangerment by nuclear energy, radioactive substances or ionizing radiation is punishable under Art. 326 of the Swiss Criminal Code, and disregard of safety regulations is punishable under Chapter 9 of the Nuclear Energy Act of 21 March 2003. Radiation protection for less energetic types of radiation. Originally, the term radiation protection referred only to ionizing radiation. Today, non-ionizing radiation is also included; it is the responsibility of the Federal Office for Radiation Protection (Germany), the Radiation Protection Division of the Federal Office of Public Health (Switzerland) and the Ministry of Climate Action and Energy (Austria). The project collected, evaluated and compared data on the legal situation in all European countries (47 countries plus Germany) and major non-European countries (China, India, Australia, Japan, Canada, New Zealand and the USA) regarding electric, magnetic and electromagnetic fields (EMF) and optical radiation (OS).
The results were very different and in some cases deviated from the recommendations of the International Commission on Non-Ionizing Radiation Protection (ICNIRP). UV light. For many centuries, the Inuit have used snow goggles with narrow slits, carved from seal bones or reindeer antlers, to protect against snow blindness (photokeratitis). In the 1960s, Australia - particularly Queensland - launched the first awareness campaign on the dangers of ultraviolet (UV) radiation in the spirit of primary prevention. In the 1980s, many countries in Europe and overseas initiated similar UV protection campaigns. UV radiation has a thermal effect on the skin and eyes and can lead to skin cancer (malignant melanoma), eye inflammation and cataracts. To protect the skin from harmful UV radiation and from the conditions it can trigger, such as photodermatosis, acne aestivalis, actinic keratosis or urticaria solaris, normal clothing, special UV protective clothing (UPF 40-50) and sunscreen with a high sun protection factor can be used. The Australian-New Zealand Standard (AS/NZS 4399) of 1996 measures new textile materials in an unstretched and dry state for the manufacture of protective clothing worn while bathing, especially by children, and for the manufacture of shading textiles (sunshades, awnings). The UV Standard 801, by contrast, assumes maximum radiation intensity with the solar spectrum of Melbourne, Australia, on January 1 (the height of the Australian summer), the most sensitive skin type of the wearer, and realistic wearing conditions. As the solar spectrum in the northern hemisphere differs from that in Australia, the measurement method according to the European standard EN 13758-1 is based on the solar spectrum of Albuquerque (New Mexico, USA), which corresponds approximately to that of southern Europe. The eyes can be protected with UV-absorbing sunglasses or with special goggles that also shield the sides to prevent snow blindness. A defensive reaction of the skin is the formation of a light callus, the skin's own sun protection, which corresponds to a protection factor of about 5. At the same time, the production of brown skin pigments (melanin) in the corresponding cells (melanocytes) is stimulated. A solar control film is usually a film made of polyethylene terephthalate (PET) that is applied to windows to reduce the light and heat from the sun's rays. The film filters UV-A and UV-B radiation. Polyethylene terephthalate goes back to an invention by the two Englishmen John Rex Whinfield (1902-1966) and James Tennant Dickson in 1941. The fact that UV-B radiation (Dorno radiation, after Carl Dorno (1865-1942)) is a proven carcinogen, but is also required for the body's own synthesis of vitamin D3 (cholecalciferol), leads to internationally conflicting recommendations regarding health-promoting UV exposure. In 2014, based on the scientific evidence of recent decades, 20 scientific authorities, professional societies and associations from the fields of radiation protection, health, risk assessment, medicine and nutrition published a recommendation on "UV exposure for the formation of the body's own vitamin D". It was the first interdisciplinary recommendation on this topic worldwide. Using a solarium for the first time at a young age (under 35 years) almost doubles the risk of developing malignant melanoma. In Germany, the use of tanning beds by minors has been prohibited by law since March 2010. As of August 1, 2012, sunbeds must not exceed a maximum irradiance of 0.3 watts per square meter of skin. Sunbeds must be labeled accordingly. 
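What the 0.3 watts per square meter ceiling means in dose terms can be sketched with a simple calculation; the session length below is an assumed example value, and the conversion into standard erythema doses presumes that the limit refers to erythemally weighted irradiance.

# Order-of-magnitude sketch; session length and erythemal weighting are assumptions.
irradiance_w_per_m2 = 0.3          # regulatory ceiling stated above
session_length_s = 15 * 60         # assumed 15-minute tanning session
dose_j_per_m2 = irradiance_w_per_m2 * session_length_s      # 270 J/m^2
standard_erythema_doses = dose_j_per_m2 / 100.0              # 1 SED = 100 J/m^2 (erythemally weighted)
print(dose_j_per_m2, standard_erythema_doses)                # 270.0 J/m^2, about 2.7 SED

Depending on skin type, a dose of a few SED can already be enough to produce visible reddening of the skin.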
The new irradiance limit corresponds to the highest UV irradiance that can be measured on Earth at 12 noon under a cloudless sky at the equator. The minimum erythema dose (MED) is determined for medical applications. The MED is defined as the lowest dose of radiation that produces a barely visible erythema. It is determined 24 hours after the test irradiation. The test is performed with the type of lamp intended for the therapy by applying a so-called light staircase (a series of graduated test doses) to skin that is not normally exposed to light (for example, on the buttocks). Sun lamp. Richard Küch (1860-1915) was able to melt quartz glass - the basis for UV radiation sources - for the first time in 1890 and founded the "Heraeus Quarzschmelze". He developed the first quartz lamp (sun lamp) for generating UV radiation in 1904, thus laying the foundation for this form of light therapy. Despite problems with dosing, doctors increasingly used quartz lamps in the early 20th century. Internal medicine specialists and dermatologists were among the most eager testers. After successful treatment of skin tuberculosis, internists went on to treat tuberculous pleurisy, glandular tuberculosis and intestinal tuberculosis. In addition, doctors tested the effect of quartz lamps on other infectious diseases such as syphilis, as well as on metabolic diseases, cardiovascular diseases, nerve pain such as sciatica, and nervous disorders such as neurasthenia and hysteria. In dermatology, fungal diseases, ulcers and wounds, psoriasis, acne, freckles and hair loss were also treated with quartz lamps, while gynecologists used them for abdominal complaints. Rejuvenation specialists used artificial high-altitude sunlight to stimulate gonadal activity and treated infertility, impotentia generandi (inability to procreate), and lack of sexual desire by irradiating the genitals. For this purpose, Philipp Keller (1891-1973) developed an erythema dosimeter with which he measured the amount of radiation not in Finsen units (UV radiation with a wavelength λ of 296.7 nm and an irradiance E of 10⁻⁵ W/m²), but in sun-lamp units (Höhensonneneinheiten, HSE). It was the only instrument of its kind in use around 1930, but it was not widely accepted in medical circles. Treatment of acne with ultraviolet radiation is still controversial. Although UV radiation can have an antibacterial effect, it can also induce proliferative hyperkeratosis. This can lead to the formation of comedones ("blackheads"). Phototoxic effects may also occur. In addition, it is carcinogenic and promotes skin aging. UV therapy is increasingly being abandoned in favor of photodynamic therapy. Laser. The ruby laser was developed in 1960 by Theodore Maiman (1927-2007) as the first laser, building on the ruby maser. Soon after, the dangers of lasers were discovered, especially for the eyes and skin, due to the laser's low penetration depth. Lasers have numerous applications in technology and research as well as in everyday life, from simple laser pointers to distance measuring devices, cutting and welding tools, playback of optical storage media such as CDs, DVDs and Blu-ray discs, communication, laser scalpels and other devices using laser light in everyday medical practice. The Radiation Protection Commission requires that laser applications on human skin be performed only by a specially trained physician. Lasers are also used for show effects in discotheques and at events. Lasers can cause biological damage due to the properties of their radiation and their sometimes extremely concentrated electromagnetic power. 
For this reason, lasers must be labeled with standardized warnings depending on the laser class. The classification is based on the DIN standard EN 60825-1, which distinguishes ranges of wavelengths and exposure times that lead to characteristic injuries, and injury thresholds for power or energy density. The CO2 laser was developed in 1964 by the Indian electrical engineer and physicist Chandra Kumar Naranbhai Patel (*1938), at the same time as the Nd:YAG laser (neodymium-doped yttrium aluminum garnet laser), developed at Bell Laboratories by LeGrand Van Uitert (1922-1999) and Joseph E. Geusic (*1931), and the Er:YAG laser (erbium-doped yttrium aluminum garnet laser); it has been used in dentistry since the early 1970s. In the hard laser field, two systems in particular are emerging for use in the oral cavity: the CO2 laser for use in soft tissue and the Er:YAG laser for use in dental hard and soft tissue. The goal of soft laser treatment is to achieve biostimulation with low energy densities. The Radiation Protection Commission strongly recommends that the possession and purchase of class 3B and 4 laser pointers be regulated by law to prevent misuse. This is due to the increase in dangerous dazzle attacks caused by high-power laser pointers. Those targeted include pilots, truck and car drivers, train operators, soccer players, referees, and even spectators at soccer games. Such glare can lead to serious accidents and, in the case of pilots and truck drivers, to occupational disability due to eye damage. The first accident prevention regulation was published on April 1, 1988 as BGV B2, followed on January 1, 1997 by DGUV Regulation 11 of the German Social Accident Insurance. Between January and mid-September 2010, the German Federal Aviation Office registered 229 dazzle attacks on helicopters and airplanes of German airlines nationwide. On October 18, 2017, a perpetrator of a dazzle attack on a federal police helicopter was sentenced to one year and six months in prison without parole. Electromagnetic radiation exposure. Electrosmog is colloquially understood as the exposure of humans and the environment to electric, magnetic and electromagnetic fields, some of which are believed to have undesirable biological effects. Electromagnetic environmental compatibility (EMC) refers to the effects on living organisms, some of which are considered electrosensitive. Fears of such effects have existed since the beginning of technological use in the mid-19th century. In 1890, for example, officials of the Royal General Directorate in Bavaria were forbidden to attend the opening ceremony of Germany's first alternating current power plant, the Reichenhall Electricity Works, or to enter the machine room. With the establishment of the first radio telegraphy and its telegraph stations, the U.S. newspaper The Atlanta Constitution reported in April 1911 on the potential dangers of radio telegraph waves, which, in addition to "tooth loss," were said to cause hair loss and make people "crazy" over time. Full-body protection was recommended as a preventive measure. During the second half of the 20th century, other sources of electromagnetic fields came into the focus of health concerns, such as power lines, photovoltaic systems, microwave ovens, computer and television screens, security devices, radar equipment, and more recently, cordless telephones (DECT), cell phones, their base stations, energy-saving lamps, and Bluetooth connections. 
Electrified railroad lines, tram overhead lines and subway tracks are also strong sources of electrosmog. In 1996, the World Health Organization (WHO) launched the EMF (ElectroMagnetic Fields) Project to bring together current knowledge and available resources from key international and national organizations and scientific institutions on electromagnetic fields. The German Federal Office for Radiation Protection ("BfS") published the following recommendation in 2006: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"In order to avoid possible health risks, the German Federal Office for Radiation Protection recommends that you minimize your personal exposure to radiation through your own initiative." As of 2016, the EMF Guideline 2016 of EUROPAEM (European Academy For Environmental Medicine) on the prevention, diagnosis and treatment of EMF-related complaints and diseases applies. Microwaves. A microwave oven, invented in 1950 by U.S. researcher Percy Spencer (1894-1970), is used to quickly heat food using microwave radiation at a frequency of 2.45 gigahertz. In an intact microwave oven, leakage radiation is relatively low due to the shielding of the cooking chamber. An "emission limit of five milliwatts per square centimeter (equivalent to 50 watts per square meter) at a distance of five centimeters from the surface of the appliance" (radiation density or power flux density) is specified. Children should not stand directly in front of or next to the appliance while food is being prepared. In addition, the "Federal Office for Radiation Protection" lists pregnant women as particularly at risk. In microwave therapy, electromagnetic waves are generated for heat treatment. The penetration depth and energy distribution vary depending on the frequency of application (short waves, ultra short waves, microwaves). To achieve greater penetration, pulsed microwaves are used, each of which delivers high energy to the tissue. A pulse pause ensures that no burns occur. Metal implants and pacemakers are contraindications. Cell phones. The discussion about possible health risks from mobile phone radiation has been controversial to date, although there are currently no valid results. According to the German Federal Office for Radiation Protection &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"there are still uncertainties in the risk assessment that could not be completely eliminated by the German Mobile Telecommunication Research Program, in particular possible health risks of long-term exposure to high-frequency electromagnetic fields from cell phone calls in adults (intensive cell phone use over more than 10 years) and the question of whether the use of cell phones by children could have an effect on health. For these reasons, the Federal Office for Radiation Protection still considers preventive health protection (precaution) to be necessary: exposure to electromagnetic fields should be kept as low as possible." The German Federal Office for Radiation Protection recommends, among other things, mobile phones with a low SAR (Specific Absorption Rate) and the use of headsets or hands-free devices to keep the mobile phone away from the head. There is some discussion that mobile phone radiation may increase the incidence of acoustic neuroma, a benign tumor that arises from the vestibulocochlear nerve. It should therefore be reduced. In everyday life, a mobile phone transmits at maximum power only in exceptional cases. 
As soon as it is near a cell where maximum power is no longer needed, it is instructed by that cell to reduce its power. Electrosmog or cell phone radiation filters built into cell phones are supposed to protect against radiation. The effect is doubtful from the point of view of electromagnetic environmental compatibility, because the radiation intensity of the cell phone is increased disproportionately in order to obtain the necessary power. The same is true for use in a car without an external antenna, as the necessary radiation can only penetrate through the windows, or in areas with poor network coverage. Since 2004, radio network repeaters have been developed for mobile phone networks (GSM, UMTS, Tetrapol) that can amplify the reception of a mobile phone cell in shaded buildings. This reduces the radiation exposure (SAR) from the mobile phone when making calls. The SAR value of a WLAN router is only a tenth of that of a cell phone, and it drops by a further 80% at a distance of just one meter. The router can be set so that it switches off when not in use, for example at night. Electric fields. High-voltage power lines. Until now, electrical energy has been transported from the power plant to the consumer almost exclusively via high-voltage lines, in which alternating current flows at a frequency of 50 Hertz. As part of the energy transition, high-voltage direct current (HVDC) transmission systems are also planned in Germany. Since the amendment of the 26th Federal Immission Control Ordinance (BImSchV) in 2013, emissions from HVDC systems are also regulated by law. The limit is set to prevent interference with electronic implants caused by static magnetic fields. No limit has been set for static electric fields. Domestic electrical installation. Mains disconnection switches (demand switches) are available to reduce electric fields and (in the case of current flow) magnetic fields from residential electrical installations. With flush-mounted (concealed) wiring, only a small part of the electric field can escape from the wall. However, a mains disconnection switch automatically disconnects the relevant line as long as no electrical load is switched on; as soon as a load is switched on, the mains voltage is switched back on. Mains disconnection switches were introduced in 1973 and have been continuously improved over the decades. In 1990, for example, it became possible to disconnect the PEN conductor (formerly known as the neutral conductor). The switches can be installed in several different circuits, preferably in those that supply bedrooms. However, they only disconnect when no continuously operating loads such as air conditioners, fans, humidifiers, electric alarm clocks, night lights, standby devices, alarm systems, chargers, and similar devices are turned on. Instead of the mains voltage, a low voltage (2-12 volts) is applied, which can be used to detect when a consumer is switched on. Rooms can also be shielded with copper wallpaper or special wall paints containing metal, thus applying the Faraday cage principle. Body scanner. Since about 2005, body scanners have been used primarily at airports for security (passenger) screening. Passive scanners detect the natural radiation emitted by a person's body and use it to locate objects worn or concealed on the body. Active systems also use artificial radiation to improve detection by analyzing the backscatter. A distinction is made between body scanners that use ionizing radiation (usually X-rays) and those that use non-ionizing radiation (terahertz radiation). 
The integrated components operating in the lower terahertz range emit less than 1 mW (-3 dBm), so no health effects are expected. There are conflicting studies from 2009 on whether genetic damage can be detected as a result of terahertz radiation. In the U.S., backscatter x-ray scanners make up the majority of devices used. Scientists fear that a future increase in cancer could pose a greater threat to the life and limb of passengers than terrorism itself. It is not clear to the passenger whether the body scanners used during a particular checkpoint use only terahertz or also X-ray radiation. According to the Federal Office for Radiation Protection, the few available results from investigations in the frequency range of active whole-body scanners that work with millimeter wave or terahertz radiation do not yet allow a conclusive assessment from a radiation protection perspective (as of 24 May 2017). In the vicinity of the plant, where employees or other third parties may be present, the limit value of the permissible annual dose for a single person in the population of one millisievert (1 mSv, including pregnant women and children) is not exceeded, even in the case of permanent presence. In the case of X-ray scanners for hand luggage, it is not necessary to set up a radiation protection area by Section §19 "RöV", as the radiation exposure during a hand luggage check for passengers does not exceed 0.2 microsievert (μSv), even under unfavorable assumptions. For this reason, employees involved in baggage screening are not considered to be occupationally exposed to radiation in accordance with Section §31 X-ray Ordinance and therefore do not have to wear a dosimeter. Radiation protection for electromedical treatment procedures. Electromagnetic alternating fields have been used in medicine since 1764, mainly for heating and increasing blood circulation (diathermy, short-wave therapy) to improve wound and bone healing. The relevant radiation protection is regulated by the Medical Devices Act together with the Medical Devices Operator Ordinance. The Medical Devices Act came into force in Germany on January 14, 1985. It divided the medical devices known at that time into groups according to their degree of risk to the patient. The Medical Devices Ordinance regulated the handling of medical devices until January 1, 2002, when it was replaced by the Medical Devices Act. When ionizing radiation is used in medicine, the benefit must outweigh the potential risk of tissue damage (justifiable indication). For this reason, radiation protection is of great importance. The design should be optimized according to the ALARA (As Low As Reasonably Achievable) principle as soon as an application is described as suitable. Since 1996, the European ALARA Network (EAN), founded by the European Commission, has been working on the further implementation of the ALARA principle in radiation protection. Infrared radiation. Discovered around 1800 by the German-British astronomer, engineer and musician Friedrich Wilhelm Herschel (1738-1822), infrared radiation primarily produces heat. If the increase in body temperature and the duration of exposure exceed critical limits, heat damage and even heat stroke can result. Due to the still unsatisfactory data situation and the partly contradictory results, it is not yet possible to give clear recommendations for radiation protection with regard to infrared radiation. 
However, the findings regarding the acceleration of skin aging by infrared radiation are sufficient to describe the use of infrared radiation against wrinkles as counterproductive. In 2011, the Institute for Occupational Safety and Health of the German Social Accident Insurance established exposure limit values to protect the skin from burns caused by thermal radiation. The IFA recommends that, in addition to the limit specified in EU Directive 2006/25/EC to protect the skin from burns for exposure times up to 10 seconds, a limit for exposure times between 10 and 1000 seconds should be applied. In addition, all radiation components in the wavelength range from 380 to 20000 nm should be considered for comparison with the limit values. Radiation protection regulations. First radiation protection regulations. A leaflet published by the German Radiological Society (DRG) in 1913 was the first systematic approach to radiation protection. The physicist and co-founder of the society, Bernhard Walter (1861-1950), was one of the pioneers of radiation protection. The International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) were established at the Second International Congress of Radiology in Stockholm in 1928. In the same year, the first international radiation protection recommendations were adopted and each country represented was asked to develop a coordinated radiation control program. The United States representative, Lauriston Taylor of the US Bureau of Standards (NSB), formed the Advisory Committee on X-Ray and Radium Protection, later renamed the National Committee on Radiation Protection and Measurements (NCRP). The NCRP received a Congressional charter in 1964 and continues to develop guidelines to protect individuals and the public from excessive radiation. In the years that followed, numerous other organizations were established by almost every president. Radiation protection monitoring. Individuals in professions such as pilots, nuclear physicians, and nuclear power plant workers are regularly exposed to ionizing radiation. In Germany, over 400,000 workers undergo occupational radiation monitoring to safeguard against the harmful effects of radiation. Approximately 70,000 individuals employed across various industries possess a radiation pass (distinct from an X-ray pass - see below). Individuals who may receive an annual effective dose of more than 1 millisievert during their work are required to undergo radiation protection monitoring. In Germany, the effective dose from natural radiation is 2.1 millisieverts per year. Radiation dose is measured using dosimeters, and the occupational dose limit is 20 millisieverts per year. Monitoring also applies to buildings, plant components or (radioactive) substances. These are exempted from the scope of the Radiation Protection Ordinance by a special administrative act, the exemption in radiation protection. To this end, it must be ensured that the resulting radiation exposure for an individual member of the public does not exceed 10 μSv per calendar year and that the resulting collective dose does not exceed 1 person sievert per year. Radiation protection register. According to § 170 "[Radiation Protection Act]" (in German) all occupationally exposed persons and holders of radiation passports require a radiation protection register number (SSR number or SSRN), a unique personal identification number, as of December 31, 2018. 
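The thresholds cited above (monitoring from a possible effective dose of more than 1 millisievert per year, an occupational limit of 20 millisieverts per calendar year) can be illustrated with a minimal bookkeeping sketch in Python. The function and variable names are purely illustrative and are not taken from any official dosimetry system; in practice the monitoring obligation is based on the dose a person may receive, assessed in advance, rather than only on dosimeter readings collected afterwards.

# Illustrative only; thresholds are the figures cited in the text above.
MONITORING_THRESHOLD_MSV = 1.0     # possible annual effective dose above which monitoring is required
ANNUAL_LIMIT_MSV = 20.0            # occupational dose limit per calendar year

def assess_annual_record(dosimeter_readings_msv):
    total = sum(dosimeter_readings_msv)
    return {
        "total_msv": total,
        "above_monitoring_threshold": total > MONITORING_THRESHOLD_MSV,
        "above_annual_limit": total > ANNUAL_LIMIT_MSV,
    }

print(assess_annual_record([0.2, 0.4, 1.1, 0.3]))   # about 2.0 mSv: monitored, limit not exceeded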
The SSR number facilitates and improves the allocation and balancing of individual dose values from occupational radiation exposure in the radiation protection register. It replaces the former radiation passport number. It is used to monitor dose limits. Companies are obliged to deploy their employees in such a way that the radiation dose to which they are exposed does not exceed the limit of 20 millisieverts per calendar year. In Germany, about 440,000 people were classified as occupationally exposed to radiation in 2016. According to § 145 "[German Radiation Protection Act]" (in German) paragraph 1, Sentence 1, "in the case of remediation and other measures to prevent and reduce exposure at radioactively contaminated sites, the person who carries out the measures himself or has them carried out by workers under his supervision must carry out an assessment of the body dose of the workers before starting the measures". Applications for SSR numbers must be submitted to the Federal Office for Radiation Protection ("BfS") by March 31, 2019 for all employees currently under surveillance. The application for the SSR number at the Federal Office and the transmission of the necessary data must be ensured following § 170 "[German Radiation Protection Act]" (in German) paragraph 4 sentence 4 by § 131 (in German) paragraph 1 or § 145 (in German) paragraph 1 sentence 1 or by the § 115 (in German) paragraph 2 or § 153 (in German) paragraph 1. The SSR numbers must then be available for further use as part of normal communication with monitoring stations or radiation pass authorities. The SSR number is derived from the social security number and personal data using non-traceable encryption. The transmission takes place online. Approximately 420,00 persons are monitored for radiation protection in Germany (as of 2019). Emergency responders (including volunteers) who are not occupationally exposed persons within the meaning of the Radiation Protection Act also require an SSR number retrospectively, i.e. after an operation in which they were exposed to radiation above the limits specified in the Radiation Protection Ordinance, as all relevant exposures must be recorded in the Radiation Protection Register. Radiation protection areas. Radiation protection areas are spatial areas in which either people can receive certain body doses during their stay or in which a certain local dose rate is exceeded. They are defined in § 36 of the Radiation Protection Ordinance and in §§ 19 and 20 of the X-Ray Ordinance. According to the Radiation Protection Ordinance, radiation protection areas are divided into restricted areas (local dose rate ≥ 3 mSv/hour), control areas (effective dose &gt; 6 mSv/year) and monitoring areas (effective dose &gt; 1 mSv/year), depending on the hazard. Radiological emergency response projects. Early warning systems. Germany, Austria and Switzerland, among many other countries, have early warning systems in place to protect the population. The local dose rate measurement network (ODL measurement network) is a measurement system for radioactivity operated by the German Federal Office for Radiation Protection, which determines the local dose rate at the measurement site. In Austria, the Radiation Early Warning System is a measurement and reporting system established in the late 1970s to provide early detection of elevated levels of ionizing radiation in the country and to enable the necessary measures to be taken. 
The readings are automatically sent to the central office at the Ministry, where they can be accessed by the relevant departments, such as the Federal Warning Center or the warning centers of the federal states. NADAM (Network for Automatic Dose Alerting and Measurement) is the gamma radiation monitoring network of the Swiss National Emergency Operations Center. The monitoring network is complemented by the MADUK stations (Monitoring Network for Automatic Dose Rate Monitoring in the Environment of Nuclear Power Plants) of the Swiss Federal Nuclear Safety Inspectorate (ENSI). Project NERIS-TP. In 2011-2014, the NERIS-TP project aimed to discuss the lessons learned from the European EURANOS project on nuclear emergency response with all relevant stakeholders. Project PREPARE. The European PREPARE project aims to fill gaps in nuclear and radiological emergency preparedness identified after the Fukushima accident. The project aims to review emergency response concepts for long-lived releases, to address issues of measurement methods and food safety in the case of transboundary contamination, and to fill gaps in decision support systems (source term reconstruction, improved dispersion modeling, consideration of aquatic dispersion pathways in European river systems). Project IMIS. Environmental radioactivity has been monitored in Germany since the 1950s. Until 1986, this was carried out by various authorities that did not coordinate with each other. Following the confusion during the Chernobyl reactor disaster in April 1986, measurement activities were pooled in the IMIS (Integrated Measurement and Information System) project, an environmental information system for monitoring radioactivity in Germany. Previously, the measuring equipment was affiliated to the warning offices under the name "WADIS" ("Warning service information system"). Project CONCERT. The aim of the CONCERT (European Joint Programme for the Integration of Radiation Protection Research) project is to establish a joint European program for radiation protection research in Europe in 2018, based on the current strategic research programs of the European research platforms MELODI (radiation effects and radiation risks), ALLIANCE (radioecology), NERIS (nuclear and radiological emergency response), EURADOS (radiation dosimetry) and EURAMED (medical radiation protection). Project REWARD. The REWARD (Real time wide area radiation surveillance system) project was established to address the threats of nuclear terrorism, missing radioactive sources, radioactive contamination and nuclear accidents. The consortium developed a mobile system for real time wide area radiation monitoring based on the integration of new miniaturized solid state sensors. Two sensors are used: a cadmium zinc telluride (CdZnTe) detector for gamma radiation and a high efficiency neutron detector based on novel silicon technologies. The gamma and neutron detectors are integrated into a single monitoring device called a tag. The sensor unit includes a wireless communication interface to remotely transmit data to a monitoring base station, which also uses a GPS system to calculate the tag's position. Task force for all types of nuclear emergencies. 
The Nuclear Emergency Support Team (NEST) is a US program for all types of nuclear emergencies of the National Nuclear Security Administration (NNSA) of the United States Department of Energy and is also a counter-terrorism unit that responds to incidents involving radioactive materials or nuclear weapons in US possession abroad. It was founded in 1974/75 under US President Gerald Ford and renamed the Nuclear Emergency Support Team in 2002. In 1988, a secret agreement from 1976 between the USA and the Federal Republic of Germany became known, which stipulates the deployment of NEST in the Federal Republic. In Germany, a similar unit has existed since 2003 with the name Central Federal Support Group for Serious Cases of Nuclear-Specific Emergency Response ("ZUB"). Legal basis. As early as 1905, the Frenchman Viktor Hennecart called for special legislation to regulate the use of X-rays. In England, Sidney Russ (1879-1963) suggested to the British Roentgen Society in 1915 that it should develop its own set of safety standards, which it did in July 1921 with the formation of the British X-Ray and Radium Protection Committee. In the United States, the American X-Ray Society developed its own guidelines in 1922. In the German Reich, a special committee of the German X-Ray Society under Franz Maximilian Groedel (1881-1951), Hans Liniger (1863-1933) and Heinz Lossen (1893-1967) formulated the first guidelines after the First World War. In 1953, the employers' liability insurance associations issued the accident prevention regulation "Use of X-rays in medical facilities" based on the legal basis in § 848a of the Reich Insurance Code (RVG). In the GDR, the Occupational Safety and Health Regulation (ASAO) 950 was in effect from 1954 to 1971. It was replaced by ASAO 980 on April 1, 1971. EURATOM. The European Atomic Energy Community (EURATOM) was founded on March 25, 1957, by the Treaty of Rome between France, Italy, the Benelux countries and the Federal Republic of Germany, and remains almost unchanged to this day. Chapter 3 of the Euratom Treaty regulates measures to protect the health of the population. Article 35 requires facilities for the continuous monitoring of soil, air and water for radioactivity. As a result, monitoring networks have been set up in all Member States and the data collected is sent to the EU's central database (EURDEP, European Radiological Data Exchange Platform). The platform is part of the EU's ECURIE system for the exchange of information in the event of radiological emergencies and became operational in 1995. Switzerland also participates in this information system. Legal basis in Germany. In Germany, the first X-ray regulation ("RGBl". I p. 88) was issued in 1941 and originally applied to non-medical companies. The first medical regulations were issued in October 1953 by the Main Association of Industrial Employer's Liability Insurance Associations as accident prevention regulations for the Reich Insurance Code. Basic standards for radiation protection were introduced by directives of the European Atomic Energy Community (EURATOM) on February 2, 1959. The Atomic Energy Act of December 23, 1959 is the national legal basis for all radiation protection legislation in the Federal Republic of Germany (West) with the Radiation Protection Ordinance of June 24, 1960 (only for radioactive substances), the Radiation Protection Ordinance of July 18, 1964 (for the medical sector) and the X-ray Ordinance of March 1, 1973. 
Radiation protection was formulated in § 1, according to which "life, health and property are to be protected from the dangers of nuclear energy and the harmful effects of ionizing radiation and damage caused by nuclear energy or ionizing radiation is to be compensated." The Radiation Protection Ordinance sets dose limits for the general population and for occupationally exposed persons. In general, any use of ionizing radiation must be justified and radiation exposure must be kept as low as possible even below the limit values. To this end, physicians, dentists and veterinarians, for example, must provide proof every five years - by Section 18a (2) X-ray Ordinance"." in the version dated April 30, 2003 - that their specialist knowledge in radiation protection has been updated and must complete a full-day course with a final examination. Specialist knowledge in radiation protection is required by the Technical Knowledge Guideline according to X-ray Ordinance"." - R3 for persons who work with baggage screening equipment, industrial measuring equipment and interfering emitters. Since 2019, the regulatory areas of the previous X-ray and radiation protection ordinances have been merged in the amended Radiation Protection Ordinance. The Radiation Protection Commission ("SSK") was founded in 1974 as an advisory body to the Federal Ministry of the Interior. It emerged from Commission IV "Radiation Protection" of the German Atomic Energy Commission, which was founded on January 26, 1956. After the Chernobyl nuclear disaster in 1986, the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection was established in the Federal Republic of Germany. The creation of this ministry was primarily a response to the perceived lack of coordination in the political response to the Chernobyl disaster and its aftermath. On December 11, 1986, the German Bundestag passed the Precautionary Radiation Protection Act ("StrVG") to protect the population, to monitor radioactivity in the environment, and to minimize human exposure to radiation and radioactive contamination of the environment in the event of radioactive accidents or incidents. The last revision of the X-Ray Ordinance was issued on January 8, 1987. As part of a comprehensive modernization of German radiation protection law, which is largely based on Directive 2013/59/Euratom, the provisions of the X-Ray Ordinance have been incorporated into the revised Radiation Protection Ordinance. Among many other measures, contaminated food was withdrawn from the market on a large scale. Parents were strongly advised not to let their children play in sandboxes. Some of the contaminated sand was replaced. In 1989, the Federal Office for Radiation Protection was incorporated into the Ministry of the Environment. On April 30, 2003, a new precautionary radiation protection law was promulgated to implement two EU directives on the health protection of persons against the dangers of ionizing radiation during medical exposure. The protection of workers from optical radiation (infrared radiation (IR), visible light (VIS) and ultraviolet radiation (UV)), which falls under the category of "non-ionizing radiation, is regulated by the Ordinance on the Protection of Workers from Artificial Optical Radiation of 19 July 2010". It is based on the EU Directive 2006/25/EC of April 27, 2006. On March 1, 2010, the "Act on the Protection of Humans from Non-Ionizing Radiation" ("NiSG"), BGBl. I p. 
2433, came into force, according to which the use of sunbeds by minors has been prohibited since August 4, 2009, in accordance with § 4 NiSG. A new Radiation Protection Act came into force in Germany on October 1, 2017. In Germany, a radiation protection officer directs and supervises activities to ensure radiation protection when handling radioactive materials or ionizing radiation. Their duties are described in § 31-33 (in German) of the Radiation Protection Ordinance and § 13-15 (in German) of the X-Ray Ordinance. They are appointed by the radiation protection supervisor (the person or undertaking responsible under the ordinances), who must ensure that all radiation protection regulations are observed. X-ray passport. Since 2002, an X-ray pass has been a document in which the examining physician or dentist must enter information about the X-ray examinations performed on the patient. The main aim was to avoid unnecessary repeat examinations. According to the new Radiation Protection Ordinance ("StrlSchV"), practices and clinics are no longer obliged to offer their patients X-ray passports and to enter examinations in them. The Radiation Protection Ordinance came into force on December 31, 2018, together with the Radiation Protection Act (StrlSchG) passed in 2017, replacing the previous Radiation Protection Ordinance and the X-ray Ordinance. The Federal Office for Radiation Protection ("BfS") continues to advise patients to keep records of their own radiation diagnostic examinations. On its website, the BfS provides a downloadable document that can be used for personal documentation. Legal basis in Switzerland. In Switzerland, institutionalized radiation protection began in 1955 with the issuance of "guidelines for protection against ionizing radiation in medicine, laboratories, industry and manufacturing plants", although these were only recommendations. The legal basis was created by a new constitutional article (Art. 24), according to which the federal government issues regulations on protection against the dangers of ionizing radiation. On this basis, a corresponding federal law entered into force on July 1, 1960. The first Swiss ordinance on radiation protection entered into force on May 1, 1963. On October 7, 1963, the Federal Department of Home Affairs (EDI) issued several decrees to supplement the ordinance; another 40 regulations followed. The monitoring of the facilities concerned took many years due to a lack of personnel. From 1963, dosimeters were to be used for personal protection, but this met with great resistance. It was not until 1989 that an updated radiation protection law was passed, accompanied by radiation protection training for the people concerned. Legal basis in Austria. The legal basis for radiation protection in Austria is the Radiation Protection Act (BGBl. 277/69 as amended) of June 11, 1969. The tasks of radiation protection extend to the fields of medicine, commerce and industry, research, schools, worker protection and food. The General Radiation Protection Ordinance, Federal Law Gazette II No. 191/2006, has been in force since June 1, 2006. Based on the Radiation Protection Act, it regulates the handling of radiation sources and measures for protection against ionizing radiation. The Optical Radiation Ordinance ("VOPST") is a detailed ordinance to the Occupational Safety and Health Act ("ASchG"). 
On August 1, 2020, a new radiation protection law came into force, which largely harmonized the radiation protection regulations for artificial radioactive substances and terrestrial natural radioactive substances. They are now enshrined in the General Radiation Protection Ordinance 2020. Companies that carry out activities with naturally occurring radioactive substances are now subject to the licensing or notification requirements pursuant to Sections 15 to 17 of the Radiation Protection Act 2020, unless an exemption provision pursuant to Sections 7 or 8 of the General Radiation Protection Ordinance 2020 applies. Cement production including maintenance of clinker kilns, production of primary iron and tin, lead and copper smelting are included in the scope. If a company falls within the scope of the General Radiation Protection Ordinance 2020, its owner must commission an officially authorized monitoring body. The mandate includes dose assessment for workers who may be exposed to increased radiation exposure and, if necessary, determination of the activity concentration of residues and radioactive substances discharged with the air or waste water. Bibliography.
[ { "math_id": 0, "text": "w_R" } ]
https://en.wikipedia.org/wiki?curid=76492308
7650282
Picture language
In formal language theory, a picture language is a set of "pictures", where a picture is a 2D array of characters over some alphabet. For example, formula_0 defines a language of rectangles composed of the character formula_1. This language formula_2 contains pictures such as: formula_3 The study of picture languages was initially motivated by the problems of pattern recognition and image processing, but two-dimensional patterns also appear in the study of cellular automata and other parallel computing models. Some formal systems have been created to define picture languages, such as array grammars and tiling systems.
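As a concrete illustration (a small sketch, not a standard formalism from the literature), the first members of the example language above can be generated as character arrays in Python; the row and column convention simply follows the pictures displayed above.

# Generate pictures of the example language: rectangles of 'a' with n columns and n+1 rows.
def picture(n):
    return [["a"] * n for _ in range(n + 1)]

for n in range(1, 4):
    for row in picture(n):
        print(" ".join(row))
    print()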
[ { "math_id": 0, "text": "L = \\left \\{ a^{n,n+1} \\mid n > 0 \\right \\} " }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "\\begin{pmatrix}a\\\\a\\end{pmatrix},\n\\begin{pmatrix} a&a\\\\a&a\\\\a&a\\end{pmatrix},\n\\begin{pmatrix} a&a&a\\\\a&a&a\\\\a&a&a\\\\a&a&a\\end{pmatrix}\n\\in L" } ]
https://en.wikipedia.org/wiki?curid=7650282
76515881
Factorial (disambiguation)
The factorial of a number formula_0 is formula_1, the product of positive integers up to formula_0. Factorial may also refer to: Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This disambiguation page lists articles associated with the title Factorial.
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n!=n\\cdot(n-1)\\cdot(n-2)\\cdots" } ]
https://en.wikipedia.org/wiki?curid=76515881
7651625
Diagram (category theory)
Indexed collection of objects and morphisms in a category In category theory, a branch of mathematics, a diagram is the categorical analogue of an indexed family in set theory. The primary difference is that in the categorical setting one has morphisms that also need indexing. An indexed family of sets is a collection of sets, indexed by a fixed set; equivalently, a "function" from a fixed index "set" to the class of "sets". A diagram is a collection of objects and morphisms, indexed by a fixed category; equivalently, a "functor" from a fixed index "category" to some "category". Definition. Formally, a diagram of type "J" in a category "C" is a (covariant) functor &lt;templatestyles src="Block indent/styles.css"/&gt;"D" : "J" → "C." The category "J" is called the index category or the scheme of the diagram "D"; the functor is sometimes called a "J"-shaped diagram. The actual objects and morphisms in "J" are largely irrelevant; only the way in which they are interrelated matters. The diagram "D" is thought of as indexing a collection of objects and morphisms in "C" patterned on "J". Although, technically, there is no difference between an individual "diagram" and a "functor" or between a "scheme" and a "category", the change in terminology reflects a change in perspective, just as in the set theoretic case: one fixes the index category, and allows the functor (and, secondarily, the target category) to vary. One is most often interested in the case where the scheme "J" is a small or even finite category. A diagram is said to be small or finite whenever "J" is. A morphism of diagrams of type "J" in a category "C" is a natural transformation between functors. One can then interpret the category of diagrams of type "J" in "C" as the functor category "C""J", and a diagram is then an object in this category. Cones and limits. A cone with vertex "N" of a diagram "D" : "J" → "C" is a morphism from the constant diagram Δ("N") to "D". The constant diagram is the diagram which sends every object of "J" to an object "N" of "C" and every morphism to the identity morphism on "N". The limit of a diagram "D" is a universal cone to "D". That is, a cone through which all other cones uniquely factor. If the limit exists in a category "C" for all diagrams of type "J" one obtains a functor &lt;templatestyles src="Block indent/styles.css"/&gt;lim : "C""J" → "C" which sends each diagram to its limit. Dually, the colimit of diagram "D" is a universal cone from "D". If the colimit exists for all diagrams of type "J" one has a functor &lt;templatestyles src="Block indent/styles.css"/&gt;colim : "C""J" → "C" which sends each diagram to its colimit. The universal functor of a diagram is the diagonal functor; its right adjoint is the limit and its left adjoint is the colimit. A cone can be thought of as a natural transformation from the diagonal functor to some arbitrary diagram. Commutative diagrams. Diagrams and functor categories are often visualized by commutative diagrams, particularly if the index category is a finite poset category with few elements: one draws a commutative diagram with a node for every object in the index category, and an arrow for a generating set of morphisms, omitting identity maps and morphisms that can be expressed as compositions. The commutativity corresponds to the uniqueness of a map between two objects in a poset category. Conversely, every commutative diagram represents a diagram (a functor from a poset index category) in this way. 
Not every diagram commutes, as not every index category is a poset category: most simply, the diagram of a single object with an endomorphism (formula_5), or with two parallel arrows (formula_6; formula_7) need not commute. Further, diagrams may be impossible to draw (because they are infinite) or simply messy (because there are too many objects or morphisms); however, schematic commutative diagrams (for subcategories of the index category, or with ellipses, such as for a directed system) are used to clarify such complex diagrams. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\underline A" }, { "math_id": 2, "text": "J = 0 \\rightrightarrows 1" }, { "math_id": 3, "text": "J" }, { "math_id": 4, "text": "(f,g\\colon X \\to Y)" }, { "math_id": 5, "text": "f\\colon X \\to X" }, { "math_id": 6, "text": "\\bullet \\rightrightarrows \\bullet" }, { "math_id": 7, "text": "f,g\\colon X \\to Y" } ]
https://en.wikipedia.org/wiki?curid=7651625
7651674
Ceramic foam
Ceramic foam is a tough foam made from ceramics. Manufacturing techniques include impregnating open-cell polymer foams internally with ceramic slurry and then firing in a kiln, leaving only ceramic material. The foams may consist of several ceramic materials, such as aluminium oxide, a common high-temperature ceramic, and get their insulating properties from the many tiny air-filled voids within the material. The foam can be used not only for thermal insulation, but for a variety of other applications such as acoustic insulation, absorption of environmental pollutants, filtration of molten metal alloys, and as substrate for catalysts requiring large internal surface area. It has been used as a stiff, lightweight structural material, specifically for the support of reflecting telescope mirrors. Properties. Ceramic foams are hardened ceramics with pockets of air or another gas trapped in pores throughout the body of the material. Because of their large internal surface area, these materials can be fabricated with as much as 94 to 96% air by volume and with temperature resistance as high as 1700 °C. Because many ceramics are already oxides or other inert compounds, there is little danger of oxidation or reduction of the material. Previously, pores had been avoided in ceramic components due to their brittle properties. However, in practice ceramic foams have somewhat advantageous mechanical properties, showing high strength and plastic toughness compared to bulk ceramics. One example is crack propagation, given by: formula_0 where σt is the stress at the tip of the crack, σ is the applied stress, a is the crack size and r is the radius of curvature of the crack tip. For certain stress applications, this means ceramic foams actually outperform bulk ceramics, because the porous pockets of air blunt the crack tip, increasing its radius of curvature, disrupting crack propagation and decreasing the likelihood of failure (a numerical illustration is given below). Preparation Methods. Organic Foam Impregnating Method. The organic foam impregnating method is one of the more widely used in industry: a ceramic slurry is coated onto a polyurethane organic foam mesh body, creating a ceramic foam with a 3D mesh skeleton structure. The ceramic foam is obtained by allowing the coated body to dry at room temperature and then burning out the mesh body. This method is best used to prepare silicon carbide foam ceramics. Foaming Method. The foaming method uses a chemical reaction of a foaming agent. The foaming agent generates volatile gas that foams the slurry. The slurry is dried and sintered to obtain the ceramic foam. The product's shape and density can be controlled and manipulated with the foaming method. This method can be used in the preparation of closed-cell ceramics with small pore sizes. Manufacturing. Much like metal foams, there are a number of accepted methods for creating ceramic foams. One of the earliest and still most common is the polymeric sponge method. A polymeric sponge is covered with a ceramic in suspension, and after rolling to ensure all pores have been filled, the ceramic-coated sponge is dried and pyrolysed to decompose the polymer, leaving only the porous ceramic structure. The foam must then be sintered for final densification. This method is widely used because it is effective with any ceramic able to be suspended; however, large amounts of gaseous byproducts are released, and cracking due to differences in thermal expansion coefficients is common. 
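The crack-propagation relation given under Properties can be illustrated numerically; all figures below are assumed example values, chosen only to show how a larger tip radius, such as that provided by a pore, lowers the stress concentration at the crack tip.

import math

# Assumed example values; not measured data for any particular ceramic foam.
applied_stress_mpa = 100.0
crack_length_m = 10e-6                      # a = 10 micrometres
for tip_radius_m in (1e-9, 1e-6):           # sharp crack tip vs. tip blunted by a pore
    tip_stress_mpa = 2 * applied_stress_mpa * math.sqrt(crack_length_m / tip_radius_m)
    print(tip_radius_m, round(tip_stress_mpa))   # about 20000 MPa sharp, about 632 MPa blunted

In this sketch, blunting the tip from the nanometre to the micrometre scale lowers the stress at the tip by a factor of about thirty, which is the effect the pore structure exploits.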
While methods such as the polymeric sponge technique are based on the use of a sacrificial template, there are also direct foaming methods that can be used. These methods involve pumping air into a suspended ceramic before setting and sintering. This is difficult because wet foams are thermodynamically unstable and can end up with very large pores after setting. A recent method of creating aluminum oxide foams has also been developed. This technique involves heating crystals with the metal and forming compounds until a solution is created. At this point, polymer chains form and grow, causing the entire mixture to separate into a solvent and polymer. As the mixture begins to boil, air bubbles are trapped in solution and locked into place as the material is heated and the polymer is burned off. Use. Insulation. Due to ceramics' extremely low thermal conductivity, the most obvious use of a ceramic foam is as an insulation material. Ceramic foams are notable in this regard because their composition from very common compounds, such as aluminum oxide, makes them completely harmless, unlike asbestos and other ceramic fibers. Their high strength and hardness also allow them to be used as structural materials for low stress applications. Electronics. With easily controlled porosities and microstructures, ceramic foams have seen growing use in evolving electronics applications. These applications include electrodes and scaffolds for solid oxide fuel cells and batteries. Foams can also be used as cooling components for electronics by separating a pumped coolant from the circuits themselves. For this application, silica, aluminum oxide, and aluminum borosilicate fibers can be used. Pollution Control. Ceramic foams have been proposed as a means of pollutant control, particularly for particulate matter from engines. They are effective because the voids can capture particulates as well as support a catalyst that can induce oxidation of the captured particulates. Due to the easy means of deposition of other materials within ceramic foams, these oxidation-inducing catalysts can easily be distributed through the entire foam, increasing effectiveness. Filtering. Ceramic foam filters (CFF) are used for the filtration of liquid metal. Passing liquid metal through the ceramic foam filter reduces impurities, including nonmetallic inclusions, in the liquid metal and the corresponding finished product (casting, sheet, billet, etc.). They are used successfully in continuous casting (sheet), semi-continuous casting (billet and slab), and casting gating systems in metal foundries. Wastewater Treatment. Due to the foam's unique pore structure and large specific surface area, it is also used as a filter for wastewater. The filtration process is a combination of adsorption, surface filtration, and deep filtration, with deep filtration accounting for most of the effect. Construction. Closed-cell ceramic foam serves as a good insulation material for walls and roofs. The large number of closed cells allows the material to resist corrosion and to absorb sound internally and externally. Buildings in China have utilized ceramic foam as a thermal insulation material. Noise Reduction. Ceramic foams are also used for sound absorption in wet and oily environments. The sound waves vibrate in the pores of the foam and the energy is transformed into heat through friction and air resistance, thus reducing echoes in the environment. Automobile. 
Due to the three-dimensional connected mesh structure, high temperature resistance, and thermal stability of ceramic foam, its use in catalytic converters in exhaust systems helps remove oxides and other particulate matter from exhaust gases. Biomaterial. Current research often formulates ceramic foams with Bioglass to create tissue scaffolds for bone repair. Their porous character shows promise in load-bearing bone tissue engineering applications. The bioglass makes the material bioactive, forming a hydroxyapatite layer on the surface of the material as biological fluid comes into contact with the glass-ceramic foam. Glass-ceramic shows promise because it combines adequate porosity to allow cells to migrate through the scaffold, high mechanical strength to bear load, and good bioactivity that allows cells to flourish. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sigma_t = 2\\sigma\\left ( \\frac{a}{r} \\right )^\\frac{1}{2}" } ]
https://en.wikipedia.org/wiki?curid=7651674
76517062
Solution-friction model
Reverse osmosis transport model The solution-friction model (SF model) is a mechanistic transport model developed to describe the transport processes across porous membranes, such as reverse osmosis (RO) and nanofiltration (NF). Unlike traditional models, such as those based on Darcy's law, which primarily describe pressure-driven solvent (water) transport in homogeneous porous media, the SF model also accounts for the coupled transport of both solvent (water) and solutes (salts). Overview. The solution-friction model is based on a pore-flow (viscous flow) mechanism, but extends its applicability by incorporating the force balances on the species transported through the membrane. This inclusion allows for a detailed understanding of the interdependent fluxes of water and salt, influenced by interactions between salt ions and water molecules. The SF model has been able to successfully describe the transport of water and salt in RO membranes, showing good agreement with experiments. The development of the SF model also corrects the misconception that RO water transport is a diffusion-based process. Ion transport. Ion transport through the RO membrane is driven by the gradient of chemical potential within the membrane. The solution-friction model describes this transport by considering the frictions between ions, between ions and water, and between ions and the membrane. The force balance for an ion is given by the equation: formula_0 where formula_1 is the chemical potential of ion formula_2, formula_3 and formula_4 are the friction coefficients between the ion and water and between the ion and the membrane, formula_5 and formula_6 are the ion and water velocities, formula_7 is the gas constant and formula_8 is the absolute temperature. Note that the membrane is stationary and its velocity formula_9 is therefore set to zero. By considering only the coordinate perpendicular to the membrane surface, the ion velocity (formula_5) governed by diffusion, electromigration, and advection can be expressed as: formula_10 where formula_11 is the diffusion coefficient of the ion inside the membrane, given by the inverse of the ion–water friction coefficient formula_12, formula_13 is the friction ratio formula_14, formula_15 is the ion concentration, formula_16 is the ion charge number, and formula_17 is the dimensionless electrical potential. Water transport. Water transport is governed by the gradient of total pressure, counterbalanced by water-membrane and ion-water frictions. The balance is expressed as: formula_18 where formula_19 is the total pressure formula_20, i.e. the hydraulic pressure minus the osmotic pressure, and formula_21 is the ion velocity. Substituting the expression for the ion velocity into the water balance, we arrive at the following expression for the force balance on water: formula_22 When ion-membrane friction is negligible (i.e., formula_23), this equation can be written as formula_24 The equation indicates that the water permeance is influenced by the electrical potential gradient inside the membrane, which has been verified by salt permeation through highly charged Nafion membranes. Due to the interactions between ions and water, increasing salt concentration decreases the water permeance. Nevertheless, a simplification can be made when a membrane has a low volumetric charge density within the membrane, as is the case for typical RO membranes. The electrical potential gradient can then be neglected, as it is relatively small compared to the concentration gradient. The equation for water flux can eventually be simplified to: formula_25 where formula_26 is the membrane thickness and formula_27 is a dimensionless coefficient obtained when integrating the ion-concentration term across the membrane. Defining formula_28 and formula_29, the water flux (velocity) is obtained as: formula_30 This equation is identical in form to the Spiegler-Kedem-Katchalsky equation, a classic model in irreversible thermodynamics for water transport through semipermeable membranes. This ensures that the SF model aligns with basic thermodynamic principles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
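As an illustration of the simplified water-transport relation formula_30 above, the following Python sketch evaluates the permeate water flux from an applied pressure difference and an osmotic pressure difference estimated with the van 't Hoff relation. The permeance, reflection coefficient and feed conditions are illustrative assumptions for a generic RO membrane, not values from the model's original publications.

```python
# Minimal sketch of the simplified solution-friction water-flux relation
# v_w = A * (dP - sigma * dPi); all parameter values below are assumed for illustration.

R = 8.314       # gas constant, J/(mol K)
T = 298.15      # absolute temperature, K

def osmotic_pressure(c_molar, ions_per_formula=2):
    """Van 't Hoff estimate of the osmotic pressure (Pa) of a dilute salt solution."""
    c = c_molar * 1000.0          # convert mol/L to mol/m^3
    return ions_per_formula * c * R * T

def water_flux(dP, dPi, A=3e-12, sigma=1.0):
    """Water flux (m/s) from the simplified solution-friction expression."""
    return A * (dP - sigma * dPi)

if __name__ == "__main__":
    dP = 50e5                                              # 50 bar applied pressure, Pa
    dPi = osmotic_pressure(0.6) - osmotic_pressure(0.01)   # feed vs. permeate, Pa
    print(f"osmotic pressure difference: {dPi / 1e5:.1f} bar")
    print(f"water flux: {water_flux(dP, dPi):.2e} m/s")
```

With these assumed values the flux comes out around 6e-6 m/s (roughly 20 L/(m2 h)), which is the order of magnitude typical of RO operation; changing the feed salinity or applied pressure shows directly how the osmotic term reduces the driving force.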
[ { "math_id": 0, "text": "-\\nabla \\mu_i = RTf_{i-w} (v_i - v_w) + RTf_{i-m} v_i" }, { "math_id": 1, "text": "\\mu_i" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "f_{i-w}" }, { "math_id": 4, "text": " f_{i-m} " }, { "math_id": 5, "text": "v_i" }, { "math_id": 6, "text": "v_w" }, { "math_id": 7, "text": "R" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "v_m" }, { "math_id": 10, "text": "v_i = K_{w,i} v_w - K_{w,i} D_{i,m} \\left( \\frac{d \\ln c_i}{dx} + z_i \\frac{d\\varphi}{dx} \\right)\n" }, { "math_id": 11, "text": " D_{i,m} " }, { "math_id": 12, "text": " f_{i-w} " }, { "math_id": 13, "text": " K_{w,i} " }, { "math_id": 14, "text": " \\frac{f_{i-w}}{f_{i-w}+f_{i-m}}" }, { "math_id": 15, "text": " c_i " }, { "math_id": 16, "text": " z_i " }, { "math_id": 17, "text": " \\varphi " }, { "math_id": 18, "text": " -\\nabla P^{\\text{tot}} = RTf_{w-m} v_w + RT \\sum_i f_{i-w} c_i (v_w - v_i) " }, { "math_id": 19, "text": " P^{\\text{tot}} " }, { "math_id": 20, "text": "(P-\\Pi) " }, { "math_id": 21, "text": " v_i " }, { "math_id": 22, "text": " -\\frac{1}{RT} \\frac{dP^{\\text{tot}}}{dx} = f_{w-m} v_w + \\sum f_{i-w} c_i (1 - K_{w,i}) + \\sum K_{w,i} \\frac{dc_i}{dx} + \\sum K_{w,i} c_i z_i \\frac{d\\varphi}{dx} " }, { "math_id": 23, "text": " K_{w,i} = 1 " }, { "math_id": 24, "text": " -\\frac{1}{RT} \\frac{dP^{\\text{tot}}}{dx} = f_{w-m} v_w + + \\sum \\frac{dc_i}{dx} + \\sum c_i z_i \\frac{d\\varphi}{dx} " }, { "math_id": 25, "text": " v_w = \\frac{1}{RT f_{w-m} L_m} \\Delta P - \\frac{1 - \\Phi}{RT f_{w-m} L_m} \\Delta \\Pi " }, { "math_id": 26, "text": " L_m " }, { "math_id": 27, "text": "\\Phi" }, { "math_id": 28, "text": " \\frac{1}{RT f_{w-m} L_m} = A " }, { "math_id": 29, "text": " 1 - \\Phi = \\sigma " }, { "math_id": 30, "text": " v_w = A (\\Delta P - \\sigma \\Delta \\Pi) " } ]
https://en.wikipedia.org/wiki?curid=76517062
7652097
Perceptual paradox
A perceptual paradox illustrates the failure of a theoretical prediction. Theories of perception are supposed to help a researcher predict what will be perceived when senses are stimulated. A theory usually comprises a mathematical model (formula), rules for collecting physical measurements for input into the model, and rules for collecting physical measurements to which model outputs should map. When arbitrarily choosing valid input data, the model should reliably generate output data that is indistinguishable from that which is measured in the system being modeled. Although each theory may be useful for some limited predictions, theories of vision, hearing, touch, smell, and taste are not typically reliable for comprehensive modeling of perception based on sensory inputs. A paradox illustrates where a theoretical prediction fails. Sometimes, even in the absence of a predictive theory, the characteristics of perception seem nonsensical. This page lists some paradoxes and seemingly impossible properties of perception. When an animal is not named in connection with the discussion, human perception should be assumed since the majority of perceptual research data applies to humans. Definition. A perceptual paradox, in its purest form is a statement illustrating the failure of a formula to predict what we perceive from what our senses transduce. A seemingly nonsensical characteristic is a statement of factual observation that is sufficiently intractable that no theory has been proposed to account for it. Mathematical modeling. One branch of research into perception attempts to explain what we perceive by applying formulae to sensory inputs and expecting outputs similar to that which we perceive. For example: what we measure with our eyes should be predicted by applying formulae to what we measure with instruments that imitate our eye. Past researchers have made formulae that predict some, but not all, perceptual phenomena from their sensory origins. Modern researchers continue to make formulae to overcome the shortcomings of earlier formulae. Some formulae are carefully constructed to mimic actual structures and functions of sensory mechanisms. Other formulae are constructed by great leaps of faith about similarity in mathematical curves. No perceptual formulae have been raised to the status of "natural law" in the way that the laws of gravitation and electrical attraction have. So, perceptual formulae continue to be an active area of development as scientists strive towards the great insight required of a law. History. Some Nobel laureates have paved the way with clear statements of good practice: In the preface to his Histology Santiago Ramón y Cajal wrote that "Practitioners will only be able to claim that a valid explanation of a histological observation has been provided if three questions can be answered satisfactorily: what is the functional role of the arrangement in the animal; what mechanisms underlie this function; and what sequence of chemical and mechanical events during evolution and development gave rise to these mechanisms?" Allvar Gullstrand described the problems that arise when approaching the optics of the eye as if they were as predictable as camera optics. Charles Scott Sherrington, considered the brain to be the "crowning achievement of the reflex system", (which can be interpreted as opening all aspects of perception to simple formulae expressed over complex distributions). Statements of Paradox. See:Visual. 
Contrast Invariance Boundaries between brighter and darker areas appear to remain of constant relative contrast when the ratio of logarithms of the two intensities remains constant: formula_2 But the use of logarithms is forbidden for values that can become zero such as formula_3, and division is forbidden by values that can become zero such as formula_4. No published neuroanatomical model predicts the perception of contrast invariance. 10 Decade Transduction Local Contrast Color Constancy When observing objects in a scene, colors appear constant. An apple looks red regardless of where it is viewed. In bright direct sunshine, under a blue sky with the sun obscured, during a colorful sunset, under a canopy of green leaves, and even under most man-made light sources, the color of the apple remains unchanging. Color perception appears to be independent of light wavelength. Edwin Land demonstrated this by illuminating a room with two wavelengths of light of approximately 500 nm and 520 nm (both improperly called "green"). The room was perceived in full color, with all colors appearing unattenuated, like red, orange, yellow, blue, and purple, despite the absence of photons other than the two wavelengths close to 510 nm. Note that formula_1 light misuses the terminology RGB since color is a perception and there are no such things as "Red", "Green", or "Blue" photons. Jerome Lettvin wrote an article in the Scientific American illustrating the importance of boundaries and vertices in the perception of color. Yet, no published formula predicts the perceived color of objects in a single image of arbitrary scene illumination. Transverse Chromatic Deaberration Light that goes through a simple lens such as that found in an eye undergoes refraction, splitting colors. An formula_1 point-source that is off-center to the eye projects to a pattern with color separation along a line radial to the central axis of the eye. The color separation can be many photoreceptors wide. Yet, an formula_1 pixel on a television or computer screen appears white even when seen sidelong. No published neuroanatomical model predicts the perception of the eccentric white pixel. Longitudinal Chromatic Deaberration As in Transverse Chromatic Deaberration, color splitting also projects the R, G, and B components of the formula_1 pixel to different focal lengths, resulting in a bull's-eye-like color distribution of light even at the center of vision. No published neuroanatomical model predicts the perception of the centered white pixel. Spherical Deaberration Eyes have corneas and lenses that are imperfectly spherical. This inhomogeneous shape results in a non-circular distribution of photons on the retina. No published neuroanatomical model predicts the perception of the non-circularly distributed white pixel. Hyperacuity People report discrimination much finer than can be predicted by interpolating sense data between photosensors. High-performing hyperacute vision in some people has been measured to less than a tenth the radius of a single photoreceptor. Among measures of hyperacuity are the vernier discrimination of two adjacent lines and the discrimination of two stars in a night sky. No published neuroanatomical model predicts the discrimination of the two white pixels closer together than a single photoreceptor. Pupil Size Inversion When pupils are narrowed to around 1 mm for reading fine print, the size of the central "Airy" disk increases to a diameter of 10 photoreceptors. The so-called "blur" is increased for reading. 
When pupils are widened for fight/flight response, the size of the central "Airy" disk decreases to a diameter of about 1.5 photoreceptors. The so-called "blur" is decreased in anticipation of large movements. No published neuroanatomical model predicts that discrimination improves when pupils are narrowed. Pupil Shape Inversion Eyes have pupils (apertures) that cause diffraction. A point-source of light is distributed on the retina. The distribution for a perfectly circular aperture is known by the name "Airy rings". Human pupils are rarely perfectly circular. Cat pupils range from almost circular to a vertical slit. Goat pupils tend to be horizontal rectangular with rounded corners. Gecko pupils range from circular, to a slit, to a series of pinholes. Cuttlefish pupils have complex shapes. No published neuroanatomical model predicts the perception of the various pupil shape distributed white pixel. Smell:Olfactory. One paradoxical perception concerning the sense of smell is the theory of one's own ability to smell. Smell is intrinsic to being alive, and is even shown to be a matter of genetics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
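To make the contrast-invariance paradox listed above concrete, the following Python sketch evaluates the quoted ratio of logarithms for three intensity pairs that all share the same physical intensity ratio; the specific intensity values are arbitrary choices used only to show where the expression changes value or becomes undefined.

```python
import math

def contrast(i_a, i_b):
    """Contrast term proportional to log(I_a)/log(I_b), as in the formula quoted above."""
    return math.log(i_a) / math.log(i_b)

# Three pairs with the same physical ratio I_a / I_b = 2, at different absolute levels.
for i_a, i_b in [(200.0, 100.0), (2.0, 1.0), (0.02, 0.01)]:
    try:
        print(f"I_a={i_a:>7}, I_b={i_b:>6}: contrast term = {contrast(i_a, i_b):.3f}")
    except (ValueError, ZeroDivisionError) as err:
        print(f"I_a={i_a:>7}, I_b={i_b:>6}: undefined ({err})")
```

The output gives three different results (about 1.15, an undefined value from division by log 1 = 0, and about 0.85) even though the physical contrast ratio is identical, which is exactly the failure mode the paradox statement describes.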
[ { "math_id": 0, "text": "SUN_{white}" }, { "math_id": 1, "text": "RGB_{white}" }, { "math_id": 2, "text": "Contrast \\propto \\frac{log I_a}{log I_b}" }, { "math_id": 3, "text": "I_a\\," }, { "math_id": 4, "text": "log I_b\\," } ]
https://en.wikipedia.org/wiki?curid=7652097
7652409
Szilassi polyhedron
Toroidal polyhedron with 7 faces In geometry, the Szilassi polyhedron is a nonconvex polyhedron, topologically a torus, with seven hexagonal faces. Coloring and symmetry. The 14 vertices and 21 edges of the Szilassi polyhedron form an embedding of the Heawood graph onto the surface of a torus. Each face of this polyhedron shares an edge with each other face. As a result, it requires seven colours to colour all adjacent faces. This example shows that, on surfaces topologically equivalent to a torus, some subdivisions require seven colors, providing the lower bound for the seven colour theorem. The other half of the theorem states that all toroidal subdivisions can be colored with seven or fewer colors. The Szilassi polyhedron has an axis of 180-degree symmetry. This symmetry swaps three pairs of congruent faces, leaving one unpaired hexagon that has the same rotational symmetry as the polyhedron. Complete face adjacency. The tetrahedron and the Szilassi polyhedron are the only two known polyhedra in which each face shares an edge with each other face. If a polyhedron with "f"  faces is embedded onto a surface with "h"  holes, in such a way that each face shares an edge with each other face, it follows by some manipulation of the Euler characteristic that formula_0 This equation is satisfied for the tetrahedron with "h" = 0 and "f" = 4, and for the Szilassi polyhedron with "h" = 1 and "f" = 7. The next possible solution, "h" = 6 and "f" = 12, would correspond to a polyhedron with 44 vertices and 66 edges. However, it is not known whether such a polyhedron can be realized geometrically without self-crossings (rather than as an abstract polytope). More generally this equation can be satisfied precisely when "f"  is congruent to 0, 3, 4, or 7 modulo 12. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Is there a non-convex polyhedron without self-intersections with more than seven faces, all of which share an edge with each other? History. The Szilassi polyhedron is named after Hungarian mathematician Lajos Szilassi, who discovered it in 1977. The dual to the Szilassi polyhedron, the Császár polyhedron, was discovered earlier by Ákos Császár (1949); it has seven vertices, 21 edges connecting every pair of vertices, and 14 triangular faces. Like the Szilassi polyhedron, the Császár polyhedron has the topology of a torus. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
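The face-hole relationship above is easy to check computationally. The short Python sketch below enumerates the face counts "f" for which the number of holes "h" is an integer, reproducing the tetrahedron ("f" = 4), the Szilassi polyhedron ("f" = 7), the open case "f" = 12 with "h" = 6, and the congruence classes 0, 3, 4, and 7 modulo 12.

```python
# Which face counts f give an integer number of holes h = (f - 4)(f - 3) / 12?
solutions = []
for f in range(4, 100):
    num = (f - 4) * (f - 3)
    if num % 12 == 0:
        solutions.append((f, num // 12, f % 12))

for f, h, r in solutions[:8]:
    print(f"f = {f:2d}  ->  h = {h:3d}   (f mod 12 = {r})")
```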
[ { "math_id": 0, "text": "h = \\frac{(f - 4)(f - 3)}{12}." } ]
https://en.wikipedia.org/wiki?curid=7652409
76526800
T5 (language model)
Series of large language models developed by Google AI T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI. Introduced in 2019, T5 models are trained on a massive dataset of text and code using a text-to-text framework. The T5 models are capable of performing the text-based tasks that they were pretrained for. They can also be finetuned to perform other tasks. They have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics. Like the original Transformer model, T5 models are encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text. Training. The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications. The T5 models were pretrained on many tasks, all in the format of codice_0 -&gt; codice_1. Some examples are: Architecture. The T5 series encompasses several models with varying sizes and capabilities. These models are often distinguished by their parameter count, which indicates the complexity and potential capacity of the model. The original paper reported the following 5 models: In the above table, Variants. Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them. This section attempts to collect the main ones. An exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X. Some models are trained from scratch while others are trained by starting with a previous trained model. By default, each model is trained from scratch, except otherwise noted. Applications. The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following. The encoder encodes the instruction, and the decoder autoregressively generates the reply. The T5 encoder can be used as a text encoder, much like BERT. It encodes a text into a sequence of real-number vectors, which can be used for downstream applications. For example, Google Imagen uses T5-XXL as text encoder, and the encoded text vectors are used as conditioning on a diffusion model. As another example, the AuraFlow diffusion model uses "Pile-T5-XL".
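A minimal usage sketch of the text-to-text format described above, using the Hugging Face "transformers" library and the public "t5-small" checkpoint; the library calls and checkpoint name reflect that library's documented usage and are not part of the original T5 paper.

```python
# Requires: pip install transformers sentencepiece torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Tasks are expressed purely as text prefixes, e.g. translation:
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Das Haus ist wunderbar."
```

Switching tasks only requires changing the prefix in the input string (for example "summarize: ..."), which is the central idea of the text-to-text framework.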
[ { "math_id": 0, "text": "d_{model}" }, { "math_id": 1, "text": "d_{ff}" }, { "math_id": 2, "text": "d_{kv}" } ]
https://en.wikipedia.org/wiki?curid=76526800
76547408
KPZ fixed point
In probability theory, the KPZ fixed point is a Markov field that is conjectured to be a universal limit of a wide range of stochastic models forming the universality class of a non-linear stochastic partial differential equation called the KPZ equation. Even though the universality class was already introduced in 1986 with the KPZ equation itself, the KPZ fixed point was not concretely specified until 2021 when mathematicians Konstantin Matetski, Jeremy Quastel and Daniel Remenik gave an explicit description of the transition probabilities in terms of Fredholm determinants. Introduction. All models in the KPZ class have in common that they have a fluctuating "height function", or some analogous quantity, that can be thought of as a function modelling the growth of the model over time. The KPZ equation itself is also a member of this class and the canonical model of random interface growth. The "strong KPZ universality conjecture" conjectures that all models in the KPZ universality class converge under a specific scaling of the height function to the KPZ fixed point and only depend on the initial condition. Matetski-Quastel-Remenik constructed the KPZ fixed point for the formula_0-dimensional KPZ universality class (i.e. one space and one time dimension) on the Polish space of upper semicontinuous functions (UC) with the topology of local UC convergence. They did this by studying a particular model of the KPZ universality class, the TASEP ("Totally Asymmetric Simple Exclusion Process"), with general initial conditions and the random walk of its associated height function. They achieved this by rewriting the biorthogonal function of the correlation kernel that appears in the Fredholm determinant formula for the multi-point distribution of the particles in the Weyl chamber. Then they showed convergence to the fixed point. KPZ fixed point. Let formula_1 denote a height function of some probabilistic model with formula_2 denoting space-time. So far only the case formula_3, also denoted formula_0, has been studied in depth, so this dimension is fixed for the rest of the article. In the KPZ universality class there exist two equilibrium points, or fixed points: the trivial "Edwards-Wilkinson (EW) fixed point" and the non-trivial "KPZ fixed point". The KPZ equation connects them together. The KPZ fixed point is defined as a height function formula_4 rather than as a particular model with a height function. KPZ fixed point. The KPZ fixed point formula_5 is a Markov process, such that the n-point distribution for formula_6 and formula_7 can be represented as formula_8 where formula_9, formula_10 is a trace class operator called the "extended Brownian scattering operator", and the subscript means that the process starts in formula_11. KPZ universality conjectures. The KPZ conjecture states that the height function formula_1 of all models in the KPZ universality class at time formula_12 fluctuates around the mean with an order of formula_13, and that the spatial correlation of the fluctuation is of order formula_14. This motivates the so-called "1:2:3 scaling" which is the characteristic scaling for the KPZ fixed point. The EW fixed point also has an associated scaling, the "1:2:4 scaling". The fixed points are invariant under their associated scaling. 1:2:3 scaling. The "1:2:3 scaling" of a height function is, for formula_15, formula_16 where "1:3" and "2:3" stand for the proportions of the exponents and formula_17 is just a constant. Strong conjecture. 
The "strong conjecture" says, that all models in the KPZ universality class converge under "1:2:3 scaling" of the height function if their initial conditions also converge, i.e. formula_18 with initial condition formula_19 where formula_20 are constants depending on the model. Weak conjecture. If we remove the growth term in the KPZ equation, we get formula_21 which converges under the "1:2:4 scaling" formula_22 to the EW fixed point. The weak conjecture says now, that the KPZ equation is the only Heteroclinic orbit between the KPZ and EW fixed point. Airy process. If one fixes the time dimension and looks at the limit formula_23 then one gets the Airy process formula_24 which also occurs in the theory of random matrices. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(1+1)" }, { "math_id": 1, "text": "h(t,\\vec{x})" }, { "math_id": 2, "text": "(t,\\vec{x})\\in \\mathbb{R}\\times \\mathbb{R}^d" }, { "math_id": 3, "text": "d=1" }, { "math_id": 4, "text": "\\mathfrak{h}(t,\\vec{x})" }, { "math_id": 5, "text": "(\\mathfrak{h}(t,x))_{t\\geq 0,x\\in \\R}" }, { "math_id": 6, "text": "x_1<x_2<\\cdots <x_n\\in\\R" }, { "math_id": 7, "text": "t>0" }, { "math_id": 8, "text": "\\mathbb{P}_{\\mathfrak{h}(0,\\cdot)}(\\mathfrak{h}(t,x_1)\\leq a_1,\\mathfrak{h}(t,x_2)\\leq a_2,\\dots,\\mathfrak{h}(t,x_n)\\leq a_n)=\\det(I-K)_{L^2(\\{x_1,x_2,\\dots,x_n\\}\\times \\R)}" }, { "math_id": 9, "text": "a_1,\\dots,a_n\\in\\R" }, { "math_id": 10, "text": "K" }, { "math_id": 11, "text": "\\mathfrak{h}(0,\\cdot)" }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "t^{1/3}" }, { "math_id": 14, "text": "t^{2/3}" }, { "math_id": 15, "text": "\\varepsilon>0" }, { "math_id": 16, "text": "\\varepsilon^{1/2}h(\\varepsilon^{-3/2}t,\\varepsilon^{-1}x)-C_{\\varepsilon}t," }, { "math_id": 17, "text": "C_{\\varepsilon}" }, { "math_id": 18, "text": "\\lim\\limits_{\\varepsilon\\to 0}\\varepsilon^{1/2}(h(c_1\\varepsilon^{-3/2}t,c_2\\varepsilon^{-1}x)-c_3\\varepsilon^{-3/2}t)\\;\\stackrel{(d)}{=}\\;\\mathfrak{h}(t,x)" }, { "math_id": 19, "text": "\\mathfrak{h}(0,x):=\\lim\\limits_{\\varepsilon\\to 0}\\varepsilon^{1/2}h(0,c_2\\varepsilon^{-1}x)," }, { "math_id": 20, "text": "c_1,c_2,c_3" }, { "math_id": 21, "text": "\\partial_t h(t,x)= \\nu \\partial^2_x h +\\sigma\\xi," }, { "math_id": 22, "text": "\\lim\\limits_{\\varepsilon\\to 0}\\varepsilon^{1/2}(h(c_1\\varepsilon^{-2}t,c_2\\varepsilon^{-1}x)-c_3\\varepsilon^{-3/2}t)\\;\\stackrel{(d)}{=}\\;\\mathfrak{h}(t,x)" }, { "math_id": 23, "text": "\\lim\\limits_{t\\to\\infty}t^{-1/3}(h(c_1t,c_2t^{2/3}x)-c_3t)\\stackrel{(d)}{=}\\;\\mathcal{A}(x)," }, { "math_id": 24, "text": "(\\mathcal{A}(x))_{x\\in\\R}" } ]
https://en.wikipedia.org/wiki?curid=76547408
76554949
Young function
Mathematical functions In mathematics, certain functions useful in functional analysis are called Young functions. A function formula_0 is a Young function iff it is convex, even, lower semicontinuous, and non-trivial, in the sense that it is not the zero function formula_1, and it is not the convex dual of the zero function formula_2 A Young function is finite iff it does not take the value formula_3. The convex dual of a Young function is denoted formula_4. A Young function formula_5 is strict iff both formula_5 and formula_4 are finite. That is, formula_6 The inverse of a Young function is formula_7 The definition of Young functions is not fully standardized, but the above definition is usually used. Different authors disagree about certain corner cases. For example, the zero function formula_1 might be counted as a "trivial Young function". Some authors (such as Krasnosel'skii and Rutickii) also require formula_8 References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
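A numerical illustration of the convex dual: for a finite Young function, formula_4 can be approximated by a discrete Legendre transform over a grid, since the dual is the supremum of xy minus the function. The grid bounds and the two example functions below are arbitrary choices for demonstration.

```python
import numpy as np

def conjugate(theta, y, x_grid):
    """Discrete approximation of the convex dual theta*(y) = sup_x (x*y - theta(x))."""
    return np.max(x_grid * y - theta(x_grid))

x = np.linspace(-50.0, 50.0, 200001)

# theta(x) = x^2/2 is a Young function and is its own convex dual.
theta = lambda t: 0.5 * t**2
for y in (0.0, 1.0, 2.5):
    print(y, conjugate(theta, y, x), 0.5 * y**2)

# theta(x) = |x|^3/3 has dual (2/3)*|y|^(3/2), the classical conjugate pair p = 3, q = 3/2.
theta3 = lambda t: np.abs(t)**3 / 3.0
for y in (1.0, 2.0):
    print(y, conjugate(theta3, y, x), (2.0 / 3.0) * np.abs(y)**1.5)
```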
[ { "math_id": 0, "text": "\\theta : \\R \\to [0, \\infty]" }, { "math_id": 1, "text": "x \\mapsto 0" }, { "math_id": 2, "text": "x \\mapsto \\begin{cases} 0 \\text{ if } x = 0, \\\\ +\\infty \\text{ else.}\\end{cases}" }, { "math_id": 3, "text": "\\infty" }, { "math_id": 4, "text": "\\theta^*" }, { "math_id": 5, "text": "\\theta" }, { "math_id": 6, "text": "\\frac{\\theta(x)} x \\to \\infty,\\quad\\text{as }x\\to \\infty," }, { "math_id": 7, "text": "\\theta^{-1}(y)=\\inf \\{x: \\theta(x)>y\\}" }, { "math_id": 8, "text": "\\lim_{x \\downarrow 0} \\frac{\\theta(x)}{x} = 0" } ]
https://en.wikipedia.org/wiki?curid=76554949
7655721
MASH-1
Cryptographic hash function MASH-1 (Modular Arithmetic Secure Hash) is a cryptographic hash function (a mathematical algorithm) based on modular arithmetic. History. Despite many proposals, few hash functions based on modular arithmetic have withstood attack, and most that have tend to be relatively inefficient. MASH-1 evolved from a long line of related proposals successively broken and repaired. Standard. Committee Draft ISO/IEC 10118-4 (Nov 95) Description. MASH-1 involves the use of an RSA-like modulus formula_0, whose bitlength affects the security. formula_0 is a product of two prime numbers and should be difficult to factor, and for formula_0 of unknown factorization, the security is based in part on the difficulty of extracting modular roots. Let formula_1 be the length of a message block in bits. formula_0 is chosen to have a binary representation a few bits longer than formula_1, typically formula_2. The message is padded by appending the message length and is separated into blocks formula_3 of length formula_4. From each of these blocks formula_5, an enlarged block formula_6 of length formula_1 is created by placing four bits from formula_5 in the lower half of each byte and four bits of value 1 in the higher half. These blocks are processed iteratively by a compression function: formula_7 formula_8 where formula_9 and formula_10. formula_11 denotes the bitwise OR and formula_12 the bitwise XOR. From formula_13, further data blocks formula_14 are now calculated by linear operations (where formula_15 denotes concatenation): formula_16 formula_17 formula_18 These data blocks are now enlarged to formula_19 as above, and with these the compression process continues with eight more steps: formula_20 Finally the hash value is formula_21, where formula_22 is a prime number with formula_23. MASH-2. There is a newer version of the algorithm called MASH-2 with a different exponent. The original formula_10 is replaced by formula_24. This is the only difference between these versions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
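The compression step above translates directly into code. The following Python sketch implements only that step with Python's built-in modular exponentiation; the modulus, block length, initial value and sample blocks are toy values chosen for illustration (far too small for real use), and the padding and block-expansion rules described above are not reproduced.

```python
# Sketch of the MASH-1 compression step
#   H_i = ((((B_i XOR H_{i-1}) OR E) ** e mod N) mod 2**L) XOR H_{i-1}
# Toy parameters only: a real instantiation uses an RSA-sized modulus N.

def mash1_compress(blocks, N, L, IV=0, e=2):
    E = 15 << (L - 4)            # constant that forces the four highest bits to 1
    H = IV
    for B in blocks:
        H = (pow((B ^ H) | E, e, N) % (1 << L)) ^ H
    return H

if __name__ == "__main__":
    L = 28                        # toy block length in bits
    N = 65537 * 65521             # 32-bit stand-in modulus, a few bits longer than L
    print(hex(mash1_compress([0x1234567, 0xABCDEF0], N, L)))
```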
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "L < |N| \\leq L+16" }, { "math_id": 3, "text": "D_1, \\cdots, D_q" }, { "math_id": 4, "text": "L/2" }, { "math_id": 5, "text": "D_i" }, { "math_id": 6, "text": "B_i" }, { "math_id": 7, "text": "H_0 = IV" }, { "math_id": 8, "text": "H_i = f(B_i, H_{i-1}) = ((((B_i \\oplus H_{i-1}) \\vee E)^e \\bmod N) \\bmod 2^L) \\oplus H_{i-1}; \\quad i=1,\\cdots,q" }, { "math_id": 9, "text": "E=15 \\cdot 2^{L-4}" }, { "math_id": 10, "text": "e=2" }, { "math_id": 11, "text": "\\vee" }, { "math_id": 12, "text": "\\oplus" }, { "math_id": 13, "text": "H_q" }, { "math_id": 14, "text": "D_{q+1},\\cdots,D_{q+8}" }, { "math_id": 15, "text": "\\|" }, { "math_id": 16, "text": "H_q = Y_1 \\,\\|\\, Y_3 \\,\\|\\, Y_0 \\,\\|\\, Y_2; \\quad |Y_i| = L/4" }, { "math_id": 17, "text": "Y_i = Y_{i-1} \\oplus Y_{i-4}; \\quad i=4,\\cdots,15" }, { "math_id": 18, "text": "D_{q+i} = Y_{2i-2} \\,\\|\\, Y_{2i-1}; \\quad i=1,\\cdots,8" }, { "math_id": 19, "text": "B_{q+1},\\cdots,B_{q+8}" }, { "math_id": 20, "text": "H_i = f(B_i, H_{i-1}); \\quad i=q+1,\\cdots,q+8" }, { "math_id": 21, "text": "H_{q+8} \\bmod p" }, { "math_id": 22, "text": "p" }, { "math_id": 23, "text": "7\\cdot 2^{L/2-3} < p < 2^{L/2}" }, { "math_id": 24, "text": "e=2^8+1" } ]
https://en.wikipedia.org/wiki?curid=7655721
7655739
Passive ventilation
Ventilation without use of mechanical systems Passive ventilation is the process of supplying air to and removing air from an indoor space without using mechanical systems. It refers to the flow of external air to an indoor space as a result of pressure differences arising from natural forces. There are two types of natural ventilation occurring in buildings: "wind driven ventilation" and "buoyancy-driven ventilation". Wind driven ventilation arises from the different pressures created by wind around a building or structure, with openings on the perimeter which then permit flow through the building. Buoyancy-driven ventilation occurs as a result of the directional buoyancy force that results from temperature differences between the interior and exterior. Since the internal heat gains which create temperature differences between the interior and exterior are created by natural processes, including the heat from people, and wind effects are variable, naturally ventilated buildings are sometimes called "breathing buildings". Process. The static pressure of air is the pressure in a free-flowing air stream and is depicted by isobars in weather maps. Differences in static pressure arise from global and microclimate thermal phenomena and create the air flow we call wind. Dynamic pressure is the pressure exerted when the wind comes into contact with an object such as a hill or a building, and it is described by the following equation: formula_0 where (using SI units) "q" is the dynamic pressure in pascals, "ρ" is the air density in kg/m3, and "v" is the wind speed in m/s. The impact of wind on a building affects the ventilation and infiltration rates through it and the associated heat losses or heat gains. Wind speed increases with height and is lower towards the ground due to frictional drag. In practical terms wind pressure will vary considerably, creating complex air flows and turbulence by its interaction with elements of the natural environment (trees, hills) and urban context (buildings, structures). Vernacular and traditional buildings in different climatic regions rely heavily upon natural ventilation for maintaining thermal comfort conditions in the enclosed spaces. Design. Design guidelines are offered in building regulations and other related literature and include a variety of recommendations on many specific areas. The following design guidelines are selected from the Whole Building Design Guide, a program of the National Institute of Building Sciences. Wind driven ventilation. Wind driven ventilation can be classified as cross ventilation and single-sided ventilation. Wind driven ventilation depends on wind behavior, on the interactions with the building envelope and on openings or other air exchange devices such as inlets or windcatchers. Knowledge of the urban climatology, i.e. the wind around the buildings, is crucial when evaluating the air quality and thermal comfort inside buildings, as air and heat exchange depends on the wind pressure on facades. As observed in the equation above, the air exchange depends linearly on the wind speed in the urban place where the architectural project will be built. CFD (Computational Fluid Dynamics) tools and zonal models are usually used to design naturally ventilated buildings. Windcatchers are able to aid wind driven ventilation by directing air in and out of buildings. Buoyancy-driven ventilation. Buoyancy-driven ventilation arises due to differences in density between interior and exterior air, which in large part arise from differences in temperature. 
When there is a temperature difference between two adjoining volumes of air, the warmer air has lower density and is more buoyant; it will thus rise above the cold air, creating an upward air stream. Forced upflow buoyancy-driven ventilation in a building takes place in a traditional fireplace. Passive stack ventilators are common in most bathrooms and other types of spaces without direct access to the outdoors. Limitations of buoyancy-driven ventilation: Natural ventilation in buildings can rely mostly on wind pressure differences in windy conditions, but buoyancy effects can a) augment this type of ventilation and b) ensure air flow rates during still days. Buoyancy-driven ventilation can be implemented in ways such that air inflow into the building does not rely solely on wind direction. In this respect, it may provide improved air quality in some types of polluted environments such as cities. For example, air can be drawn through the backside or courtyards of buildings, avoiding the direct pollution and noise of the street facade. Wind can augment the buoyancy effect, but can also reduce its effect depending on its speed, direction and the design of air inlets and outlets. Therefore, prevailing winds must be taken into account when designing for stack effect ventilation. Estimating buoyancy-driven ventilation. The natural ventilation flow rate for buoyancy-driven natural ventilation with vents at two different heights can be estimated with this equation: formula_1 where "Q"S is the buoyancy-driven ventilation flow rate (m3/s), "C"d is the discharge coefficient of the opening, "A" is the cross-sectional area of the opening (m2), "g" is the gravitational acceleration (m/s2), "H"d is the vertical height between the lower and upper openings (m), and "T"I and "T"O are the average indoor and outdoor temperatures (K). Assessing performance. One way to measure the performance of a naturally ventilated space is to measure the air changes per hour in an interior space. In order for ventilation to be effective, there must be exchange between outdoor air and room air. A common method for measuring ventilation effectiveness is to use a tracer gas. The first step is to close all windows, doors, and openings in the space. Then a tracer gas is added to the air. The reference, American Society for Testing and Materials (ASTM) Standard E741: Standard Test Method for Determining Air Change in a Single Zone by Means of a Tracer Gas Dilution, describes which tracer gases can be used for this kind of testing and provides information about the chemical properties, health impacts, and ease of detection. Once the tracer gas has been added, mixing fans can be used to distribute the tracer gas as uniformly as possible throughout the space. To do a decay test, the concentration of the tracer gas is first measured when the concentration of the tracer gas is constant. Windows and doors are then opened and the concentration of the tracer gas in the space is measured at regular time intervals to determine the decay rate of the tracer gas. The airflow can be deduced by looking at the change in concentration of the tracer gas over time. For further details on this test method, refer to ASTM Standard E741. While natural ventilation eliminates electrical energy consumed by fans, overall energy consumption of natural ventilation systems is often higher than that of modern mechanical ventilation systems featuring heat recovery. Typical modern mechanical ventilation systems use as little as 2000 J/m3 for fan operation, and in cold weather they can recover much more energy than this in the form of heat transferred from waste exhaust air to fresh supply air using recuperators. 
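The two driving-force relations quoted above, the dynamic wind pressure formula_0 and the buoyancy-driven flow rate formula_1, can be evaluated with a few lines of code. The discharge coefficient, opening area, opening separation and temperatures in the example below are illustrative assumptions rather than values taken from any standard.

```python
import math

def dynamic_pressure(rho, v):
    """Wind dynamic pressure q = 1/2 * rho * v**2, in Pa (SI units)."""
    return 0.5 * rho * v**2

def stack_flow_rate(Cd, A, Hd, T_in, T_out, g=9.81):
    """Buoyancy-driven flow rate Q_S = Cd*A*sqrt(2*g*Hd*(T_in - T_out)/T_in) in m^3/s.
    Temperatures must be absolute (kelvin) and T_in > T_out for upward flow."""
    return Cd * A * math.sqrt(2.0 * g * Hd * (T_in - T_out) / T_in)

# Illustrative numbers: 5 m/s wind, 0.25 m^2 openings 3 m apart, 22 degC inside, 10 degC outside.
print(f"dynamic pressure: {dynamic_pressure(1.2, 5.0):.1f} Pa")
print(f"stack flow rate:  {stack_flow_rate(0.6, 0.25, 3.0, 295.15, 283.15):.3f} m^3/s")
```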
Ventilation heat loss can be calculated as: formula_2 where formula_3 is the ventilation heat loss per unit volume of supply air (J/m3), formula_4 is the specific heat capacity of air, formula_5 is the density of air, formula_6 is the temperature difference between indoor and outdoor air, and formula_7 is the heat recovery efficiency of the system. The temperature differential needed between indoor and outdoor air for mechanical ventilation with heat recovery to outperform natural ventilation in terms of overall energy efficiency can therefore be calculated as: formula_8 where SFP is the specific fan power in Pa, J/m3, or W/(m3/s). Under typical comfort ventilation conditions with a heat recovery efficiency of 80% and an SFP of 2000 J/m3 we get: formula_9 In climates where the mean absolute difference between inside and outside temperatures exceeds ~10 K, the energy conservation argument for choosing natural over mechanical ventilation might therefore be questioned. It should however be noted that heating energy might be cheaper and more environmentally friendly than electricity. This is especially the case in areas where district heating is available. To develop natural ventilation systems with heat recovery, two inherent challenges must first be solved: Research aiming at the development of natural ventilation systems featuring heat recovery has been carried out as early as 1993, when Shultz et al. proposed and tested a chimney-type design relying on stack effect while recovering heat using a large counterflow recuperator constructed from corrugated galvanized iron. Both supply and exhaust happened through an unconditioned attic space, with exhaust air being extracted at ceiling height and air being supplied at floor level through a vertical duct. The device was found to provide sufficient ventilation air flow for a single-family home and heat recovery with an efficiency around 40%. The device was however found to be too large and heavy to be practical, and the heat recovery efficiency too low to be competitive with mechanical systems of the time. Later attempts have primarily focused on wind as the main driving force due to its higher pressure potential. This, however, introduces the issue of large fluctuations in driving pressure. With the use of wind towers placed on the roof of ventilated spaces, supply and exhaust can be placed close to each other on opposing sides of the small towers. These systems often feature finned heat pipes, although this limits the theoretical maximum heat recovery efficiency. Liquid coupled run-around loops have also been tested to achieve indirect thermal connection between exhaust and supply air. While these tests have been somewhat successful, liquid coupling introduces mechanical pumps that consume energy to circulate the working fluid. While some commercially available solutions have been available for years, the claimed performance by manufacturers has yet to be verified by independent scientific studies. This might explain the apparent lack of market impact of these commercially available products claiming to deliver natural ventilation and high heat recovery efficiencies. A radically new approach to natural ventilation with heat recovery is currently being developed at Aarhus University, where heat exchange tubes are integrated into structural concrete slabs between building floors. Standards. For standards relating to ventilation rates, in the United States refer to ASHRAE Standard 62.1-2010: Ventilation for Acceptable Indoor Air Quality. These requirements are for "all spaces intended for human occupancy except those within single-family houses, multifamily structures of three stories or fewer above grade, vehicles, and aircraft." 
In the revision to the standard in 2010, Section 6.4 was modified to specify that most buildings designed to have systems to naturally condition spaces must also "include a mechanical ventilation system designed to meet the Ventilation Rate or IAQ procedures [in ASHRAE 62.1-2010]. The mechanical system is to be used when windows are closed due to extreme outdoor temperatures, noise, and security concerns". The standard states that two exceptions in which naturally conditioned buildings do not require mechanical systems are when: Also, an authority having jurisdiction may allow for the design of a conditioning system that does not have a mechanical system but relies only on natural systems. In reference to how controls of conditioning systems should be designed, the standard states that they must take into consideration measures to "properly coordinate operation of the natural and mechanical ventilation systems." Another reference is ASHRAE Standard 62.2-2010: Ventilation and Acceptable Indoor Air Quality in low-rise Residential Buildings. These requirements are for "single-family houses and multifamily structures of three stories or fewer above grade, including manufactured and modular houses," but are not applicable "to transient housing such as hotels, motels, nursing homes, dormitories, or jails." For standards relating to indoor thermal comfort conditions, in the United States refer to ASHRAE Standard 55-2010: Thermal Environmental Conditions for Human Occupancy. Throughout its revisions, its scope has been consistent with its currently articulated purpose, "to specify the combinations of indoor thermal environmental factors and personal factors that will produce thermal environmental conditions acceptable to a majority of the occupants within the space." The standard was revised in 2004 after field study results from the ASHRAE research project, RP-884: developing an adaptive model of thermal comfort and preference, indicated that there are differences between naturally and mechanically conditioned spaces with regard to occupant thermal response, change in clothing, availability of control, and shifts in occupant expectations. The addition to the standard, 5.3: Optional Method For Determining Acceptable Thermal Conditions in Naturally Ventilated Spaces, uses an adaptive thermal comfort approach for naturally conditioned buildings by specifying acceptable operative temperature ranges for naturally conditioned spaces. As a result, the design of natural ventilation systems became more feasible, which was acknowledged by ASHRAE as a way to further sustainable, energy efficient, and occupant-friendly design. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links.
[ { "math_id": 0, "text": "q = \\tfrac12\\, \\rho\\, v^{2}," }, { "math_id": 1, "text": "Q_{S} = C_{d}\\; A\\; \\sqrt {2\\;g\\;H_{d}\\;\\frac{T_I-T_O}{T_I}}" }, { "math_id": 2, "text": "\\theta=C_p \\cdot \\rho \\cdot \\Delta T \\cdot (1-\\eta)." }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "C_p" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\Delta T" }, { "math_id": 7, "text": "\\eta" }, { "math_id": 8, "text": "\\Delta T = \\frac{SFP}{C_p \\cdot \\rho \\cdot (1-\\eta)}" }, { "math_id": 9, "text": "\\Delta T = 2000/(1000*1.2*(1-0.8))=8.33 [K]" } ]
https://en.wikipedia.org/wiki?curid=7655739
7656250
Calculated Carbon Aromaticity Index
Index of the ignition quality of residual fuel oil The calculated carbon aromaticity index (CCAI) is an index of the ignition quality of residual fuel oil. The running of all internal combustion engines is dependent on the ignition quality of the fuel. For spark-ignition engines the fuel has an octane rating. For diesel engines it depends on the type of fuel; for distillate fuels, cetane numbers are used. Cetane numbers are tested using a special test engine, and the existing test engine was not made for residual fuels. For residual fuel oil two other empirical indexes are used: CCAI and the Calculated Ignition Index (CII). Both CCAI and CII are calculated from the density and kinematic viscosity of the fuel. Definition. Formula for CCAI: formula_0 which is equivalent to: formula_1 where: D = density at 15°C (kg/m3), V = kinematic viscosity (cSt), t = viscosity temperature (°C). Use. This will normally give a value somewhere between 800 and 880. The lower the value, the better the ignition quality. Fuels with a CCAI higher than 880 are often problematic or even unusable in a diesel engine. The CCAI is often calculated during testing of marine fuel. In case of a high CCAI, the manufacturer's recommendations and guidance limits should be consulted to ensure that the fuel falls within the permissible range for the engine type. Attention should be given to the combustion profile, peak pressures and exhaust temperatures of the engine. As the name suggests, CCAI is a calculation based on the density and viscosity of a given fuel. The formula is rather complex, but in general, the higher the CCAI, the poorer the ignition quality of the fuel is considered to be. Once the CCAI goes above 860, it is an indication that some combustion problems may occur. Studies carried out by engine manufacturers indicate that combustion-related problems caused by fuels with high CCAI can be reduced by avoiding running the engine at part load. It is therefore suggested that wherever possible, the engine load should be maintained above 50% and the chief engineer should listen for indications of poor combustion (i.e. knocking). Should any such problems be noted, it is recommended that an alternative fuel is used whilst further investigations are carried out on samples of the fuel.
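The CCAI formula above is straightforward to evaluate in code. The following Python sketch implements it as written, with base-10 logarithms for the viscosity term and a natural logarithm for the temperature term; the sample fuel values (density 991 kg/m3, 380 cSt at 50 °C) are illustrative only.

```python
import math

def ccai(density_15C, viscosity_cst, viscosity_temp_C):
    """Calculated Carbon Aromaticity Index from density at 15 degC (kg/m^3),
    kinematic viscosity (cSt) and the temperature (degC) of that viscosity measurement."""
    return (density_15C
            - 140.7 * math.log10(math.log10(viscosity_cst + 0.85))
            - 80.6
            - 210.0 * math.log((viscosity_temp_C + 273.0) / 323.0))

# Illustrative residual fuel: 991 kg/m^3, 380 cSt at 50 degC -> CCAI around 852.
print(f"CCAI = {ccai(991.0, 380.0, 50.0):.0f}")
```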
[ { "math_id": 0, "text": "CCAI=D-140.7 \\log (\\log (V+0.85))-80.6-210 \\ln \\left (\\frac{t+273}{323} \\right )" }, { "math_id": 1, "text": "CCAI=D-140.7 \\log (\\log (V+0.85))-80.6-483.5 \\log \\left (\\frac{t+273}{323} \\right )" } ]
https://en.wikipedia.org/wiki?curid=7656250
7656652
Calculated Ignition Index
Index of the ignition quality of residual fuel oil The Calculated Ignition Index (CII) is an index of the ignition quality of residual fuel oil. It is used to determine the suitability of heavy fuel oil for (marine) engines. Background. The effective and efficient running of internal combustion engines is dependent on the ignition quality of the fuel. Incorrect or out-of-specification fuel can cause problems or severe damage to the engines and associated equipment. The effects may include: corrosion, abrasive wear, clogged cylinder valves, fuel equipment damage, fuel pump failure, premature ignition, ignition failure, explosion, or engine failure. Several indices are used to characterise fuels. For spark-ignition engines the fuel has an octane rating. For diesel engines it depends on the type of fuel; for distillate fuels, cetane numbers are used. Cetane numbers are tested using a special test engine; however, the existing test engine was not intended for residual fuels. For residual fuel oil two auxiliary indexes have been developed: the Calculated Ignition Index (CII) and the Calculated Carbon Aromaticity Index (CCAI). CII index. The calculated ignition index (CII), like the calculated carbon aromaticity index (CCAI), is an empirical indicator which describes the characteristics or properties of a fuel. Both CII and CCAI are calculated from the density and kinematic viscosity of the fuel. CII was developed by BP to calculate the autoignition capacity of heavy fuel oils (HFO). It is calculated using the measured kinematic viscosity "V" (cSt or mm2/s) of a given fuel determined at temperature "t" (°C) and the density ρ15 at 15°C (kg/m3). Definition. Formula for CII: formula_0 where: D = density at 15°C (kg/m3), V = kinematic viscosity (cSt), T = kinematic viscosity temperature (°C). A CCAI and CII calculator is available on several websites. Use. CII was designed to produce numbers of the same order as the cetane index number for distillate fuels. The relationship between fuel properties and CII number is shown in the table. CII gives an indication of the quality of heavy fuel oil. Certain practical steps can be taken to ensure continuing quality. These include close supervision of the fuel, including tests during bunkering using the viscosity data to determine the CII and CCAI; avoiding mixing fuels; checking the water content as the water settles out; and monitoring the condition and performance of engines. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
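Like the CCAI, the CII formula can be evaluated directly. The Python sketch below implements the definition given above; the sample fuel values are illustrative assumptions, not reference data.

```python
import math

def cii(density_15C, viscosity_cst, viscosity_temp_C):
    """Calculated Ignition Index from density at 15 degC (kg/m^3), kinematic viscosity (cSt)
    and the temperature (degC) at which the viscosity was measured."""
    return ((270.795 + 0.1038 * viscosity_temp_C)
            - 0.254565 * density_15C
            + 23.708 * math.log10(math.log10(viscosity_cst + 0.7)))

# Illustrative residual fuel: 991 kg/m^3, 380 cSt at 50 degC -> CII in the low 30s.
print(f"CII = {cii(991.0, 380.0, 50.0):.1f}")
```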
[ { "math_id": 0, "text": "CII = (270.795 + 0.1038T)-0.254565D+23.708 \\log \\log(V+0.7)\\," } ]
https://en.wikipedia.org/wiki?curid=7656652
76571690
DotCode
Type of matrix barcode DotCode is a two-dimensional (2D) matrix barcode invented in 2008 by the Hand Held Products company to replace the outdated Code 128. At this time, it is issued by the Association for Automatic Identification and Mobility (AIM) as "ISS DotCode Symbology Specification 4.0". DotCode consists of sparse black round dots and white spaces on a white background. In the case of a black background, the round dots forming the barcode can be white. DotCode was developed for use with high-speed industrial printers where printing accuracy can be low. Because DotCode by the standard does not require complicated elements like continuous lines or special shapes, it can be applied with laser engraving or industrial drills. DotCode can be represented as a rectangular array with a minimal size of 5X dots on each side. The maximal size of DotCode is not limited by the standard (just as Code 128 is not limited), but a practical limit of 100x99 is recommended, which can encode around 730 digits, 366 alphanumeric characters or 304 bytes. As an extension of the Code 128 barcode, DotCode allows more compact encoding of an 8-bit data array and Unicode support with the Extended Channel Interpretation feature. Additionally, DotCode provides much higher data density and Reed–Solomon error correction, which allows a partially damaged barcode to be restored. However, the main DotCode application, the same as for Code 128, is effective encoding of GS1 data, which is used in the worldwide shipping and packaging industry. History and standards. The DotCode barcode was invented in 2008 by Dr. Andrew Longacre of the Hand Held Products company and standardized in 2009 by AIM as "Bar code symbology specification - DotCode". In 2019 DotCode was revised as "ISS DotCode Symbology Specification 4.0". A set of patents related to DotCode encoding and decoding has been registered: Application. The DotCode barcode can be used in the same way as Code 128 or any (2D) matrix barcode. At this time, it is used mostly to encode GS1 data in the tobacco, alcoholic and non-alcoholic beverage, pharmaceutical and grocery industries. The main implementation at this time is in the tobacco industry. Main advantages of DotCode are: Barcode design. DotCode represents data in a rectangular structure which consists of black round dots and white spaces on a white background, or white round dots on a black background. DotCode does not have a finder pattern like other 2D barcodes, and it must be detected with slow blob detection algorithms like the Gabor filter or the Circle Hough Transform. All data, metadata and error correction codewords are encoded in the same dot array and have no visual difference. The DotCode symbol is constructed from the following elements: The DotCode bits array is represented as: &lt;br&gt;"(Two mask bits: M2, M1)(Data bits)(Corner bits, can be data or padding bits: C1 – C6)" The data codewords in the 0 – 112 range are encoded in 5-of-9 binary dot patterns, each built from 9 dots of which 5 are black dots and 4 are white spaces. The rest of the barcode matrix (the remainder from division by 9) is padded with black padding bits. The padding bits can number from 0 to 8. Logically, the DotCode bits array is represented as: &lt;br&gt;"(2 mask bits)(Data codewords 9 bits each)(Padding bits 0 – 8 bits)" The DotCode size has the following requirements: formula_0 and formula_1. Data masking. To minimize problematic DotCode symbols, the data codewords are masked to create other visual sequences. The mask pattern is applied only to the data sequence and does not affect the error correction codewords. 
The DotCode standard has 4 mask patterns, which are coded into 2 bits and placed as the first 2 bits of the symbol bits array. Error correction. DotCode uses Reed–Solomon error correction with prime power of 3 and the finite field formula_2, or GF(113). The data codewords are represented with values from 0 to 112, and the mask value is counted as a leading data codeword with a value from 0 to 3. In this way the protected data array length is (1 + ND), but the number of error correction codewords is calculated only from ND: &lt;br&gt;formula_3, &lt;br&gt;where ND is the number of data codewords and NC the number of error correction codewords. The resulting number of codewords NW, including error correction codewords, is: &lt;br&gt;formula_4, &lt;br&gt;where NW counts all encoded codewords: 1 mask codeword + data codewords (ND) + error correction codewords (NC). Because Reed–Solomon error correction cannot process more codewords than the polynomial allows, if NW happens to exceed 112 the data is split into error correction blocks: &lt;br&gt;formula_5, &lt;br&gt;where B is the block count. The data can be split into blocks in the following way, for each block "n", with n running from 1 to B: formula_6, formula_7, formula_8. The error correction data formula_9 is written after the single data block formula_10 in scrambled mode: &lt;br&gt;"(ND)(NC1_1)(NC2_1)(NC3_1)...(NC1_n)(NC2_m)(NC3_k)" Encoding. The DotCode encoding size is not limited by the standard, but the practical 100x99 version, which includes 4950 dots, can encode 366 raw data codewords, 730 digits, 366 alphanumeric characters, or 304 bytes. The data message in DotCode is represented with data codewords from 0 to 112, which are encoded with 5-of-9 binary dot patterns. DotCode supports the following features: There are three main rules at the start of message encoding: Binary byte encoding. DotCode can encode the full 8-bit charset in two ways: Upper Shift modes can encode extended ASCII characters (128 to 255) in two codewords, returning to the previous mode afterwards: Binary Latch mode can encode the 8-bit charset and ECI sequences from 1 to 5 symbols. It uses the following rules: Binary Latch encodes data more effectively, starting from 3 bytes. ECI encoding. DotCode can encode an ECI indicator in two ways: FNC2 in any position except at the end of data signals the insertion of an ECI sequence – "\nnnnnn", which represents values between 000000 and 811799. The values can be encoded in 1 or 3 codewords, for example as formula_11. GS1 encoding. Any two digits in the position of the first codeword identify a symbol as GS1 encoded (in contrast to Code 128). If a symbol with two digits in the position of the first codeword must be decoded as ordinary data, FNC1 (omitted in the decoded message) must be inserted in the place of the first codeword. FNC1 in a position other than the first works as a GS1 Application Identifier separator and is decoded as the GS character (ASCII value 29). Codeword 100 in Code Set C encodes the GS1 AI (17), treats the next 3 codewords as an expiration date and inserts GS1 AI (10) before decoding the other codewords: &lt;br&gt;"(100)(24)(12)(30)(56)(64) -&gt; 17241230105664" Macros mode. Data codewords 97 – 100 in the lead data position in Code Set B can encode "Macros". In any other position they encode ASCII symbols: &lt;br&gt;"(Latch B)(HT) -&gt; [)&gt;RS05GS … RSEoT" &lt;br&gt;"(Shift B)(HT) -&gt; [)&gt;RS05GS … RSEoT" Structured append. DotCode can create a composite symbol, where data from multiple DotCode symbols is logically united. This is done with an FNC2 symbol in the last data position. 
When FNC2 is in the final data position, the preceding two message characters (digits and uppercase letters in the order 1 to 9 then A to Z, for values 10 to 35) designate, as "m" and "n", where this message belongs in an "m out of n" sequence. As an example, a symbol whose message ends "4 B FNC2" shall be the 4th symbol out of 11 that comprise the entire message. Special modes encoding. FNC3 in the first codeword position indicates that the message contains instructions for initialization or reprogramming of the bar code reader. FNC3 in any position other than the first indicates that the encoded message must be logically separated into two distinct messages (before and after it). Data padding. The DotCode symbol codeword capacity is: &lt;br&gt;formula_12 The DotCode symbol data codeword capacity is: &lt;br&gt;formula_13 In this way the data codewords need to be padded when there is free space. There are two rules: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
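The codeword-count relations from the error-correction section above can be collected into a small calculator. The Python sketch below assumes integer (floor) division wherever the text writes a plain division, which is an interpretation of the quoted formulas rather than something stated explicitly here.

```python
def dotcode_counts(nd):
    """Given ND data codewords, return (NC, NW, B) following the relations quoted above.
    Floor division is assumed wherever the text writes a plain '/'."""
    nc = 3 + nd // 2                 # error-correction codewords
    nw = (1 + nd) + nc               # mask codeword + data + error correction
    b = (nw + 111) // 112            # number of error-correction blocks
    return nc, nw, b

def block_sizes(nd, nw, b):
    """Per-block (data, total, EC) codeword counts for blocks n = 1..B."""
    sizes = []
    for n in range(1, b + 1):
        nd_blk = ((1 + nd) - (n - 1) + (b - 1)) // b
        nw_blk = (nw - (n - 1) + (b - 1)) // b
        sizes.append((nd_blk, nw_blk, nw_blk - nd_blk))
    return sizes

for nd in (10, 100, 366):
    nc, nw, b = dotcode_counts(nd)
    print(f"ND={nd:3d}: NC={nc}, NW={nw}, blocks={b}, per-block={block_sizes(nd, nw, b)}")
```

For ND = 366 (the practical 100x99 maximum mentioned above) this yields 5 blocks of at most 111 codewords each, consistent with the GF(113) limit of 112 codewords per block.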
[ { "math_id": 0, "text": "(W + H)\\pmod{2} = 1" }, { "math_id": 1, "text": "6 <= (((W * H) / 2) - 2)\\pmod{9}" }, { "math_id": 2, "text": "\\mathbb{F}_{113}" }, { "math_id": 3, "text": "NC = 3 + (ND / 2)" }, { "math_id": 4, "text": "NW = (1 + ND) + NC" }, { "math_id": 5, "text": "B = (NW + 111) / 112" }, { "math_id": 6, "text": "ND(block) = ((1+ND) - (n-1) + (B-1)) / B" }, { "math_id": 7, "text": "NW(block) = (NW - (n-1) + (B-1)) / B" }, { "math_id": 8, "text": "NC(block) = NW(block) - ND(block)" }, { "math_id": 9, "text": "NC" }, { "math_id": 10, "text": "ND" }, { "math_id": 11, "text": "(A - 40) * 12769 + B * 113 + C + 40" }, { "math_id": 12, "text": "NW = ((H * W) / 2 - 2) \\pmod{9})" }, { "math_id": 13, "text": "ND = (NW - 3) - (NW - 3) / 3" } ]
https://en.wikipedia.org/wiki?curid=76571690
765970
Dehn twist
In geometric topology, a branch of mathematics, a Dehn twist is a certain type of self-homeomorphism of a surface (two-dimensional manifold). Definition. Suppose that "c" is a simple closed curve in a closed, orientable surface "S". Let "A" be a tubular neighborhood of "c". Then "A" is an annulus, homeomorphic to the Cartesian product of a circle and a unit interval "I": formula_0 Give "A" coordinates ("s", "t") where "s" is a complex number of the form formula_1 with formula_2 and "t" ∈ [0, 1]. Let "f" be the map from "S" to itself which is the identity outside of "A" and inside "A" we have formula_3 Then "f" is a Dehn twist about the curve "c". Dehn twists can also be defined on a non-orientable surface "S", provided one starts with a 2-sided simple closed curve "c" on "S". Example. Consider the torus represented by a fundamental polygon with edges "a" and "b" formula_4 Let a closed curve be the line along the edge "a" called formula_5. Given the choice of gluing homeomorphism in the figure, a tubular neighborhood of the curve formula_5 will look like a band linked around a doughnut. This neighborhood is homeomorphic to an annulus, say formula_6 in the complex plane. By extending to the torus the twisting map formula_7 of the annulus, through the homeomorphisms of the annulus to an open cylinder to the neighborhood of formula_5, yields a Dehn twist of the torus by "a". formula_8 This self homeomorphism acts on the closed curve along "b". In the tubular neighborhood it takes the curve of "b" once along the curve of "a". A homeomorphism between topological spaces induces a natural isomorphism between their fundamental groups. Therefore one has an automorphism formula_9 where ["x"] are the homotopy classes of the closed curve "x" in the torus. Notice formula_10 and formula_11, where formula_12 is the path travelled around "b" then "a". Mapping class group. It is a theorem of Max Dehn that maps of this form generate the mapping class group of isotopy classes of orientation-preserving homeomorphisms of any closed, oriented genus-formula_13 surface. W. B. R. Lickorish later rediscovered this result with a simpler proof and in addition showed that Dehn twists along formula_14 explicit curves generate the mapping class group (this is called by the punning name "Lickorish twist theorem"); this number was later improved by Stephen P. Humphries to formula_15, for formula_16, which he showed was the minimal number. Lickorish also obtained an analogous result for non-orientable surfaces, which require not only Dehn twists, but also "Y-homeomorphisms."
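Writing a homotopy class on the torus as "p"["a"] + "q"["b"] (the fundamental group of the torus is abelian and isomorphic to Z2), the automorphism formula_9 above sends ("p", "q") to ("p" + "q", "q"), a unimodular shear. The following short Python sketch, written for this article as an illustration, checks that action:

# The Dehn twist T_a about the curve a, acting on classes p[a] + q[b]
# of the torus: [a] is fixed and [b] is sent to [b*a] = [a] + [b],
# i.e. (p, q) -> (p + q, q), the integer shear matrix [[1, 1], [0, 1]].

TWIST_A = ((1, 1),
           (0, 1))

def act(matrix, cls):
    """Apply a 2x2 integer matrix to a class (p, q) = p[a] + q[b]."""
    (a00, a01), (a10, a11) = matrix
    p, q = cls
    return (a00 * p + a01 * q, a10 * p + a11 * q)

assert act(TWIST_A, (1, 0)) == (1, 0)   # [a] is fixed
assert act(TWIST_A, (0, 1)) == (1, 1)   # [b] goes to [b*a]

# Iterating the twist winds the b-curve around a again and again:
cls = (0, 1)
for _ in range(3):
    cls = act(TWIST_A, cls)
print(cls)   # (3, 1)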
[ { "math_id": 0, "text": "c \\subset A \\cong S^1 \\times I." }, { "math_id": 1, "text": "e^{i\\theta}" }, { "math_id": 2, "text": "\\theta \\in [0, 2\\pi]," }, { "math_id": 3, "text": "f(s, t) = \\left(se^{i2\\pi t}, t\\right)." }, { "math_id": 4, "text": "\\mathbb{T}^2 \\cong \\mathbb{R}^2/\\mathbb{Z}^2." }, { "math_id": 5, "text": "\\gamma_a" }, { "math_id": 6, "text": "a(0; 0, 1) = \\{z \\in \\mathbb{C}: 0 < |z| < 1\\}" }, { "math_id": 7, "text": "\\left(e^{i\\theta}, t\\right) \\mapsto \\left(e^{i\\left(\\theta + 2\\pi t\\right)}, t\\right)" }, { "math_id": 8, "text": "T_a: \\mathbb{T}^2 \\to \\mathbb{T}^2" }, { "math_id": 9, "text": "{T_a}_\\ast: \\pi_1\\left(\\mathbb{T}^2\\right) \\to \\pi_1\\left(\\mathbb{T}^2\\right): [x] \\mapsto \\left[T_a(x)\\right]" }, { "math_id": 10, "text": "{T_a}_\\ast([a]) = [a]" }, { "math_id": 11, "text": "{T_a}_\\ast([b]) = [b*a]" }, { "math_id": 12, "text": "b*a" }, { "math_id": 13, "text": "g" }, { "math_id": 14, "text": "3g - 1" }, { "math_id": 15, "text": "2g + 1" }, { "math_id": 16, "text": "g > 1" } ]
https://en.wikipedia.org/wiki?curid=765970
765972
SRGB
Standard RGB color space sRGB is a standard RGB (red, green, blue) color space that HP and Microsoft created cooperatively in 1996 to use on monitors, printers, and the World Wide Web. It was subsequently standardized by the International Electrotechnical Commission (IEC) as IEC 61966-2-1:1999. sRGB is the current defined standard colorspace for the web, and it is usually the assumed colorspace for images that are neither tagged for a colorspace nor have an embedded color profile. sRGB essentially codifies the display specifications for the computer monitors in use at that time, which greatly aided its acceptance. sRGB uses the same color primaries and white point as ITU-R BT.709 standard for HDTV, a transfer function (or gamma) compatible with the era's CRT displays, and a viewing environment designed to match typical home and office viewing conditions. sRGB definition. Gamut. sRGB defines the chromaticities of the red, green, and blue primaries, the colors where one of the three channels is nonzero and the other two are zero. The gamut of chromaticities that can be represented in sRGB is the color triangle defined by these primaries, which are set such that the range of colors inside the triangle is well within the range of colors visible to a human with normal trichromatic vision. As with any RGB color space, for non-negative values of R, G, and B it is not possible to represent colors outside this triangle. The primaries come from HDTV (ITU-R BT.709), which are somewhat different from those for older color TV systems (ITU-R BT.601). These values were chosen to reflect the approximate color of consumer CRT phosphors at the time of its design. Since flat-panel displays at the time were generally designed to emulate CRT characteristics, the values also reflected prevailing practice for other display devices as well. Transfer function ("gamma"). The reference display characterisation is based on the characterisation in CIE 122. The reference display is characterised by a nominal power-law gamma of 2.2, which the sRGB working group determined was representative of the CRTs used with Windows operating systems at the time. The ability to directly display sRGB images on a CRT without any lookup greatly helped sRGB's adoption. Gamma also usefully encodes more data near the black, which reduces visible noise and quantization artifacts. The standard also defines a opto-electronic transfer function (OETF), which defines the conversion of linear light or signal intensity to a gamma-compressed image data. It is a piecewise compound function and has an approximate formula_0 of 2.2, with a linear portion near zero to avoid an infinite slope. Near zero, a formula_1 power law curve intercepts a straight-line section that leads to zero. In practice, there is still debate and confusion around whether sRGB data should be displayed with pure 2.2 gamma as defined in the standard, or with the inverse of the OETF. Some display manufacturers and calibrators use the former, while some use the latter. When a power law formula_2 is used to display data that was intended to be displayed on displays that use the piecewise function, the result is that the shadow details will be lost. Computing the transfer function. A straight line that passes through (0,0) is formula_3, and a gamma curve that passes through (1,1) is formula_4 If these are joined at the point ("X","X"/Φ) then: formula_5 To avoid a kink where the two segments meet, the derivatives must be equal at this point: formula_6 We now have two equations. 
If we take the two unknowns to be X and Φ then we can solve to give formula_7 The values "A" = 0.055 and Γ = 2.4 were chosen so the curve closely resembled the gamma-2.2 curve. This gives "X" ≈ 0.0392857, Φ ≈ 12.9232102. These values, rounded to "X" = 0.03928, Φ = 12.92321 sometimes describe sRGB conversion. Draft publications by sRGB's creators further rounded Φ = 12.92, resulting in a small discontinuity in the curve. Some authors adopted these incorrect values, in part because the draft paper was freely available and the official IEC standard is behind a paywall. For the standard, the rounded value of Φ was kept and X was recomputed as 0.04045 to make the curve continuous, resulting in a slope discontinuity from 1/12.92 below the intersection to 1/12.70 above. Viewing environment. The sRGB specification assumes a dimly lit encoding (creation) environment with an ambient correlated color temperature (CCT) of 5003 K. This differs from the CCT of the illuminant (D65). Using D50 for both would have made the white point of most photographic paper appear excessively blue. The other parameters, such as the luminance level, are representative of a typical CRT monitor. For optimal results, the ICC recommends using the encoding viewing environment (i.e., dim, diffuse lighting) rather than the less-stringent typical viewing environment. Transformation. From sRGB to CIE XYZ. The sRGB component values formula_8, formula_9, formula_10 are in the range 0 to 1. When represented digitally as 8-bit numbers, these color component values are in the range of 0 to 255, and should be divided (in a floating point representation) by 255 to convert to the range of 0 to 1. formula_11 where formula_12 is formula_13, formula_14, or formula_15. These gamma-expanded values (sometimes called "linear values" or "linear-light values") are multiplied by a matrix to obtain CIE XYZ (the matrix has infinite precision, any change in its values or adding non-zeroes is not allowed): formula_16 This is actually the matrix for BT.709 primaries, not just for sRGB, the second row corresponds to the BT.709-2 luma coefficients (BT.709-1 had a typo in these coefficients). From CIE XYZ to sRGB. The CIE XYZ values must be scaled so that the "Y" of D65 ("white") is 1.0 ("X" = 0.9505, "Y" = 1.0000, "Z" = 1.0890). This is usually true but some color spaces use 100 or other values (such as in CIELAB, when using specified white points). The first step in the calculation of sRGB from CIE XYZ is a linear transformation, which may be carried out by a matrix multiplication. (The numerical values below match those in the official sRGB specification, which corrected small rounding errors in the original publication by sRGB's creators, and assume the 2° standard colorimetric observer for CIE XYZ.) This matrix depends on the bitdepth. formula_17 These linear RGB values are not the final result; gamma correction must still be applied. The following formula transforms the linear values into sRGB: formula_18 where formula_12 is formula_13, formula_14, or formula_15. These gamma-compressed values (sometimes called "non-linear values") are usually clipped to the 0 to 1 range. This clipping can be done before or after the gamma calculation, or done as part of converting to 8 bits. If values in the range 0 to 255 are required, e.g. for video display or 8-bit graphics, the usual technique is to multiply by 255 and round to an integer. Usage. 
Due to the standardization of sRGB on the Internet, on computers, and on printers, many low- to medium-end consumer digital cameras and scanners use sRGB as the default (or only available) working color space. However, consumer-level CCDs are typically uncalibrated, meaning that even though the image is being labeled as sRGB, one can not conclude that the image is color-accurate sRGB. If the color space of an image is unknown and it is an 8 bit image format, sRGB is usually the assumed default, in part because color spaces with a larger gamut need a higher bit depth to maintain a low color error rate (∆E). An ICC profile or a lookup table may be used to convert sRGB to other color spaces. ICC profiles for sRGB are widely distributed, and the ICC distributes several variants of sRGB profiles, including variants for ICCmax, version 4, and version 2. Version 4 is generally recommended, but version 2 is still commonly used and is the most compatible with other software including browsers. Version 2 of the ICC profile specification does not officially support piecewise parametric curve encoding ("para"), though version 2 does support simple power-law functions. Nevertheless, lookup tables are more commonly used as they are computationally more efficient. Even when parametric curves are used, software will often reduce to a run-time lookup table for efficient processing. As the sRGB gamut meets or exceeds the gamut of a low-end inkjet printer, an sRGB image is often regarded as satisfactory for home printing. sRGB is sometimes avoided by high-end print publishing professionals because its color gamut is not big enough, especially in the blue-green colors, to include all the colors that can be reproduced in CMYK printing. Images intended for professional printing via a fully color-managed workflow (e.g. prepress output) sometimes use another color space such as Adobe RGB (1998), which accommodates a wider gamut. Such images used on the Internet may be converted to sRGB using color management tools that are usually included with software that works in these other color spaces. The two dominant programming interfaces for 3D graphics, OpenGL and Direct3D, have both incorporated support for the sRGB gamma curve. OpenGL supports textures with sRGB gamma encoded color components (first introduced with EXT_texture_sRGB extension, added to the core in OpenGL 2.1) and rendering into sRGB gamma encoded framebuffers (first introduced with EXT_framebuffer_sRGB extension, added to the core in OpenGL 3.0). Correct mipmapping and interpolation of sRGB gamma textures has direct hardware support in texturing units of most modern GPUs (for example nVidia GeForce 8 performs conversion from 8-bit texture to linear values before interpolating those values), and does not have any performance penalty. sYCC. Amendment 1 to IEC 61966-2-1:1999, approved in 2003, includes the definition of a Y′Cb′Cr′ color representation called sYCC. Although the RGB color primaries are based on BT.709, the equations for transformation from sRGB to sYCC and vice versa are based on BT.601. sYCC uses 8 bits for the components resulting in a range of approximately 0–1 for Y; -0.5–0.5 for C. The amendment also contains a 10-bit-or-more encoding called bg-sRGB where 0–1 is mapped to &lt;templatestyles src="Fraction/styles.css" /&gt;-384⁄510...&lt;templatestyles src="Fraction/styles.css" /&gt;639⁄510, and bg-sYCC using the same number of bits for a range of approximately -0.75–1.25 for Y; -1–1 for C. 
As this conversion can result in sRGB values outside the range 0–1, the amendment describes how to apply the gamma correction to negative values: −"f"(−"x") is applied when x is negative (where "f" is the sRGB↔linear function described above). This is also used by scRGB. The amendment also recommends a higher-precision XYZ to sRGB matrix using seven decimal points, to more accurately invert the sRGB to XYZ matrix (which remains at the precision shown above): formula_19. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
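The transformations quoted above translate directly into code. The following Python sketch was written for this article; it assumes component values are floats already normalized to the range 0 to 1, and it omits clipping and 8-bit quantization:

# Sketch of the sRGB <-> linear transfer functions and the RGB-to-XYZ
# matrix quoted in this article.

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Linear RGB (BT.709 primaries, D65 white) to CIE XYZ:
M_RGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def srgb_to_xyz(rgb):
    lin = [srgb_to_linear(c) for c in rgb]
    return [sum(row[i] * lin[i] for i in range(3)) for row in M_RGB_TO_XYZ]

# White, [1, 1, 1], maps to approximately the D65 white point (0.9505, 1.0, 1.089):
print(srgb_to_xyz([1.0, 1.0, 1.0]))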
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\gamma^{1/2.4}" }, { "math_id": 2, "text": "\\gamma^{2.2}" }, { "math_id": 3, "text": "y = \\frac{x}{\\Phi}" }, { "math_id": 4, "text": "y = \\left(\\frac{x+A}{1+A}\\right)^\\Gamma" }, { "math_id": 5, "text": "\\frac{X}{\\Phi} = \\left(\\frac{X+A}{1+A}\\right)^\\Gamma" }, { "math_id": 6, "text": "\\frac{1}{\\Phi} = \\Gamma\\left(\\frac{X+A}{1+A}\\right)^{\\Gamma-1}\\left(\\frac{1}{1+A}\\right)" }, { "math_id": 7, "text": "X = \\frac{A}{\\Gamma-1}, \\Phi=\\frac{(1+A)^\\Gamma(\\Gamma-1)^{\\Gamma-1}}{(A^{\\Gamma-1})(\\Gamma^\\Gamma)}" }, { "math_id": 8, "text": "R_\\mathrm{srgb}" }, { "math_id": 9, "text": "G_\\mathrm{srgb}" }, { "math_id": 10, "text": "B_\\mathrm{srgb}" }, { "math_id": 11, "text": "C_\\mathrm{linear}=\n\\begin{cases}\\dfrac{C_\\mathrm{srgb}}{12.92}, & C_\\mathrm{srgb}\\le0.04045 \\\\[5mu]\n\\left(\\dfrac{C_\\mathrm{srgb}+0.055}{1.055}\\right)^{\\!2.4}, & C_\\mathrm{srgb}>0.04045\n\\end{cases}\n" }, { "math_id": 12, "text": "C" }, { "math_id": 13, "text": "R" }, { "math_id": 14, "text": "G" }, { "math_id": 15, "text": "B" }, { "math_id": 16, "text": "\n\\begin{bmatrix} X_{D65} \\\\ Y_{D65} \\\\ Z_{D65} \\end{bmatrix}\n=\n\\begin{bmatrix}\n 0.4124 & 0.3576 & 0.1805 \\\\\n 0.2126 & 0.7152 & 0.0722 \\\\\n 0.0193 & 0.1192 & 0.9505\n\\end{bmatrix}\n\\begin{bmatrix} R_\\text{linear} \\\\ G_\\text{linear} \\\\ B_\\text{linear} \\end{bmatrix}\n" }, { "math_id": 17, "text": "\n\\begin{bmatrix} R_\\text{linear} \\\\ G_\\text{linear} \\\\ B_\\text{linear} \\end{bmatrix}\n= \\begin{bmatrix}\n +3.2406 & -1.5372 & -0.4986 \\\\\n -0.9689 & +1.8758 & +0.0415 \\\\\n +0.0557 & -0.2040 & +1.0570\n\\end{bmatrix}\n\\begin{bmatrix} X_{D65} \\\\ Y_{D65} \\\\ Z_{D65} \\end{bmatrix}\n" }, { "math_id": 18, "text": "C_\\text{sRGB} = \\begin{cases}\n12.92 C_\\text{linear}, & C_\\text{linear} \\le 0.0031308 \\\\[5mu]\n1.055 (C_\\text{linear}^{1/2.4})-0.055, & C_\\text{linear} > 0.0031308\n\\end{cases}" }, { "math_id": 19, "text": "\n\\begin{bmatrix} R_\\text{linear} \\\\ G_\\text{linear} \\\\ B_\\text{linear} \\end{bmatrix}\n= \\begin{bmatrix}\n +3.2406255 & -1.5372080 & -0.4986286 \\\\\n -0.9689307 & +1.8757561 & +0.0415175 \\\\\n +0.0557101 & -0.2040211 & +1.0569959\n\\end{bmatrix}\n\\begin{bmatrix} X_{D65} \\\\ Y_{D65} \\\\ Z_{D65} \\end{bmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=765972
76598831
Pharmacology of ethanol
Pharmacodynamics and pharmacokinetics of ethanol The pharmacology of ethanol involves both pharmacodynamics (how it affects the body) and pharmacokinetics (how the body processes it). In the body, ethanol primarily affects the central nervous system, acting as a depressant and causing sedation, relaxation, and decreased anxiety. The complete list of mechanisms remains an area of research, but ethanol has been shown to affect ligand-gated ion channels, particularly the GABAA receptor. After oral ingestion, ethanol is absorbed via the stomach and intestines into the bloodstream. Ethanol is highly water-soluble and diffuses passively throughout the entire body, including the brain. Soon after ingestion, it begins to be metabolized, 90% or more by the liver. One standard drink is sufficient to almost completely saturate the liver's capacity to metabolize alcohol. The main metabolite is acetaldehyde, a toxic carcinogen. Acetaldehyde is then further metabolized into ionic acetate by the enzyme aldehyde dehydrogenase (ALDH). Acetate is not carcinogenic and has low toxicity, but has been implicated in causing hangovers. Acetate is further broken down into carbon dioxide and water and eventually eliminated from the body through urine and breath. 5 to 10% of ethanol is excreted unchanged in the breath, urine, and sweat. History. Beginning with the Gin Craze, excessive drinking and drunkenness developed into a major problem for public health. In 1874, Francis E. Anstie's experiments showed that the amounts of alcohol eliminated unchanged in breath, urine, sweat, and feces were negligible compared to the amount ingested, suggesting it was oxidized within the body. In 1902, Atwater and Benedict estimated that alcohol yielded 7.1 kcal of energy per gram consumed and 98% was metabolized. In 1922, Widmark published his method for analyzing the alcohol content of fingertip samples of blood. Through the 1930s, Widmark conducted numerous studies and formulated the basic principles of ethanol pharmacokinetics for forensic purposes, including the eponymous Widmark equation. In 1980, Watson et al. proposed updated equations based on total body water instead of body weight. The TBW equations have been found to be significantly more accurate due to rising levels of obesity worldwide. Pharmacodynamics. The principal mechanism of action for ethanol has proven elusive and remains not fully understood. Identifying molecular targets for ethanol is unusually difficult, in large part due to its unique biochemical properties. Specifically, ethanol is a very low molecular weight compound and is of exceptionally low potency in its actions, causing effects only at very high (millimolar "mM") concentrations. For these reasons, it is not possible to employ traditional biochemical techniques to directly assess the binding of ethanol to receptors or ion channels. Instead, researchers have had to rely on functional studies to elucidate the actions of ethanol. Even at present, no binding sites have been unambiguously identified and established for ethanol. Studies have published strong evidence for certain functions of ethanol in specific systems, but other laboratories have found that these findings do not replicate with different neuronal types and heterologously expressed receptors. Thus, there remains lingering doubt about the mechanisms of ethanol listed here, even for the GABAA receptor, the most-studied mechanism. 
In the past, alcohol was believed to be a non-specific pharmacological agent affecting many neurotransmitter systems in the brain, but progress has been made over the last few decades. It appears that it affects ion channels, in particular ligand-gated ion channels, to mediate its effects in the CNS. In some systems, these effects are facilitatory, and in others inhibitory. Moreover, although it has been established that ethanol modulates ion channels to mediate its effects, ion channels are complex proteins, and their interactions and functions are complicated by diverse subunit compositions and regulation by conserved cellular signals (e.g. signaling lipids). Alcohol is also converted into phosphatidylethanol (PEth, an unnatural lipid metabolite) by phospholipase D2. This metabolite competes with PIP2 agonist sites on lipid-gated ion channels. The result of these direct effects is a wave of further indirect effects involving a variety of other neurotransmitter and neuropeptide systems. This presents a novel indirect mechanism and suggests that a metabolite, not the ethanol itself, could cause the behavioural or symptomatic effects of alcohol intoxication. Many of the primary targets of ethanol are known to bind PIP2 including GABAA receptors, but the role of PEth needs to be investigated further. List of known actions in the central nervous system. Ethanol has been reported to possess the following actions in functional assays at varying concentrations: Many of these actions have been found to occur only at very high concentrations that may not be pharmacologically significant at recreational doses of ethanol, and it is unclear how or to what extent each of the individual actions is involved in the effects of ethanol. Some of the actions of ethanol on ligand-gated ion channels, specifically the nicotinic acetylcholine receptors and the glycine receptor, are dose-dependent, with potentiation "or" inhibition occurring dependent on ethanol concentration. This seems to be because the effects of ethanol on these channels are a summation of positive and negative allosteric modulatory actions. GABAA receptors. Ethanol has been found to enhance GABAA receptor-mediated currents in functional assays. Ethanol has long shown a similarity in its effects to positive allosteric modulators of the GABAA receptor like benzodiazepines, barbiturates, and various general anesthetics. Some of these effects include anxiolytic, anticonvulsant, sedative, and hypnotic effects, cognitive impairment, and motor incoordination. In accordance, it was theorized and widely believed that the primary mechanism of action of ethanol is GABAA receptor positive allosteric modulation. However, other ion channels are involved in its effects as well. Although ethanol exhibits positive allosteric binding properties to GABAA receptors, its effects are limited to pentamers containing the δ-subunit rather than the γ-subunit. Ethanol potentiates extrasynaptic δ subunit-containing GABAA receptors at behaviorally relevant (as low as 3 mM) concentrations, but γ subunit receptors are enhanced only at far higher concentrations (&gt; 100 mM) that are in excess of recreational concentrations (up to 50 mM). GABAA receptors containing the δ-subunit have been shown to be located exterior to the synapse and are involved with tonic inhibition rather than its γ-subunit counterpart, which is involved in phasic inhibition. 
The δ-subunit has been shown to be able to form the allosteric binding site which makes GABAA receptors containing the δ-subunit more sensitive to ethanol concentrations, even to moderate social ethanol consumption levels (30mM). While it has been shown by Santhakumar et al. that GABAA receptors containing the δ-subunit are sensitive to ethanol modulation, depending on subunit combinations receptors could be more or less sensitive to ethanol. It has been shown that GABAA receptors that contain both δ and β3-subunits display increased sensitivity to ethanol. One such receptor that exhibits ethanol insensitivity is α3-β6-δ GABAA. It has also been shown that subunit combination is not the only thing that contributes to ethanol sensitivity. Location of GABAA receptors within the synapse may also contribute to ethanol sensitivity. Ro15-4513, a close analogue of the benzodiazepine antagonist flumazenil (Ro15-1788), has been found to bind to the same site as ethanol and to competitively displace it in a saturable manner. In addition, Ro15-4513 blocked the enhancement of δ subunit-containing GABAA receptor currents by ethanol "in vitro". In accordance, the drug has been found to reverse many of the behavioral effects of low-to-moderate doses of ethanol in rodents, including its effects on anxiety, memory, motor behavior, and self-administration. Taken together, these findings suggest a binding site for ethanol on subpopulations of the GABAA receptor with specific subunit compositions via which it interacts with and potentiates the receptor. Calcium channel blocking. Research indicates ethanol is involved in the inhibition of L-type calcium channels. One study showed the nature of ethanol binding to L-type calcium channels is according to first-order kinetics with a Hill coefficient around 1. This indicates ethanol binds independently to the channel, expressing noncooperative binding. Early studies showed a link between calcium and the release of vasopressin by the secondary messenger system. Vasopressin levels are reduced after the ingestion of alcohol. The lower levels of vasopressin from the consumption of alcohol have been linked to ethanol acting as an antagonist to voltage-gated calcium channels (VGCCs). Studies conducted by Treistman et al. in the aplysia confirm inhibition of VGCC by ethanol. Voltage clamp recordings have been done on the aplysia neuron. VGCCs were isolated and calcium current was recorded using patch clamp technique having ethanol as a treatment. Recordings were replicated at varying concentrations (0, 10, 25, 50, and 100 mM) at a voltage clamp of +30 mV. Results showed calcium current decreased as concentration of ethanol increased. Similar results have shown to be true in single-channel recordings from isolated nerve terminal of rats that ethanol does in fact block VGCCs. Studies done by Katsura et al. in 2006 on mouse cerebral cortical neurons, show the effects of prolonged ethanol exposure. Neurons were exposed to sustained ethanol concentrations of 50 mM for 3 days "in vitro". Western blot and protein analysis were conducted to determine the relative amounts of VGCC subunit expression. α1C, α1D, and α2/δ1 subunits showed an increase of expression after sustained ethanol exposure. However, the β4 subunit showed a decrease. Furthermore, α1A, α1B, and α1F subunits did not alter in their relative expression. Thus, sustained ethanol exposure may participate in the development of ethanol dependence in neurons. Other experiments done by Malysz et al. 
have looked into ethanol effects on voltage-gated calcium channels on detrusor smooth muscle cells in guinea pigs. Perforated patch clamp technique was used having intracellular fluid inside the pipette and extracellular fluid in the bath with added 0.3% vol/vol (about 50-mM) ethanol. Ethanol decreased the Ca2+ current in DSM cells and induced muscle relaxation. Ethanol inhibits VGCCs and is involved in alcohol-induced relaxation of the urinary bladder. Rewarding and reinforcing actions. The reinforcing effects of alcohol consumption are mediated by acetaldehyde generated by catalase and other oxidizing enzymes such as cytochrome P-4502E1 in the brain. Although acetaldehyde has been associated with some of the adverse and toxic effects of ethanol, it appears to play a central role in the activation of the mesolimbic dopamine system. Ethanol's rewarding and reinforcing (i.e., addictive) properties are mediated through its effects on dopamine neurons in the mesolimbic reward pathway, which connects the ventral tegmental area to the nucleus accumbens (NAcc). One of ethanol's primary effects is the allosteric inhibition of NMDA receptors and facilitation of GABAA receptors (e.g., enhanced GABAA receptor-mediated chloride flux through allosteric regulation of the receptor). At high doses, ethanol inhibits most ligand-gated ion channels and voltage-gated ion channels in neurons as well. With acute alcohol consumption, dopamine is released in the synapses of the mesolimbic pathway, in turn heightening activation of postsynaptic D1 receptors. The activation of these receptors triggers postsynaptic internal signaling events through protein kinase A, which ultimately phosphorylate cAMP response element binding protein (CREB), inducing CREB-mediated changes in gene expression. With chronic alcohol intake, consumption of ethanol similarly induces CREB phosphorylation through the D1 receptor pathway, but it also alters NMDA receptor function through phosphorylation mechanisms; an adaptive downregulation of the D1 receptor pathway and CREB function occurs as well. Chronic consumption is also associated with an effect on CREB phosphorylation and function via postsynaptic NMDA receptor signaling cascades through a MAPK/ERK pathway and CAMK-mediated pathway. These modifications to CREB function in the mesolimbic pathway induce expression (i.e., increase gene expression) of ΔFosB in the NAcc, where ΔFosB is the "master control protein" that, when overexpressed in the NAcc, is necessary and sufficient for the development and maintenance of an addictive state (i.e., its overexpression in the nucleus accumbens produces and then directly modulates compulsive alcohol consumption). Relationship between concentrations and effects. Recreational concentrations of ethanol are typically in the range of 1 to 50 mM. Very low concentrations of 1 to 2 mM ethanol produce zero or undetectable effects except in alcohol-naive individuals. Slightly higher levels of 5 to 10 mM, which are associated with light social drinking, produce measurable effects including changes in visual acuity, decreased anxiety, and modest behavioral disinhibition. Further higher levels of 15 to 20 mM result in a degree of sedation and motor incoordination that is contraindicated with the operation of motor vehicles. In jurisdictions in the U.S., maximum blood alcohol levels for legal driving are about 17 to 22 mM. 
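For comparison with the g/L and mg/dL units used in the pharmacokinetics sections below, these millimolar figures can be converted using ethanol's molar mass of about 46.07 g/mol. A quick illustrative conversion in Python:

# Converting blood-ethanol concentrations between mM and g/L
# (molar mass of ethanol: about 46.07 g/mol).

MOLAR_MASS = 46.07  # g/mol

def mM_to_g_per_L(c_mM):
    return c_mM * MOLAR_MASS / 1000.0

def g_per_L_to_mM(c_gL):
    return c_gL * 1000.0 / MOLAR_MASS

# The 17 to 22 mM legal-driving range quoted above corresponds to roughly
# 0.78-1.01 g/L, i.e. about 0.08-0.10 g per 100 mL of blood:
print(mM_to_g_per_L(17), mM_to_g_per_L(22))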
In the upper range of recreational ethanol concentrations of 20 to 50 mM, depression of the central nervous system is more marked, with effects including complete drunkenness, profound sedation, amnesia, emesis, hypnosis, and eventually unconsciousness. Levels of ethanol above 50 mM are not typically experienced by normal individuals and hence are not usually physiologically relevant; however, such levels – ranging from 50 to 100 mM – may be experienced by alcoholics with high tolerance to ethanol. Concentrations above this range, specifically in the range of 100 to 200 mM, would cause death in all people except alcoholics. As drinking increases, people become sleepy or fall into a stupor. After a very high level of consumption, the respiratory system becomes depressed and the person will stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor coordination along with poor judgment increase the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury. In addition to respiratory failure and accidents caused by its effects on the central nervous system, alcohol causes significant metabolic derangements. Hypoglycaemia occurs due to ethanol's inhibition of gluconeogenesis, especially in children, and may cause lactic acidosis, ketoacidosis, and acute kidney injury. Metabolic acidosis is compounded by respiratory failure. Patients may also present with hypothermia. Pharmacokinetics. The pharmacokinetics of ethanol are well characterized by the ADME acronym (absorption, distribution, metabolism, excretion). Besides the dose ingested, factors such as the person's total body water, speed of drinking, the drink's nutritional content, and the contents of the stomach all influence the profile of blood alcohol content (BAC) over time. Breath alcohol content (BrAC) and BAC have similar profile shapes, so most forensic pharmacokinetic calculations can be done with either. Relatively few studies directly compare BrAC and BAC within subjects and characterize the difference in pharmacokinetic parameters. Comparing arterial and venous BAC, arterial BAC is higher during the absorption phase and lower in the postabsorptive declining phase. Endogenous production. All organisms produce alcohol in small amounts by several pathways, primarily through fatty acid synthesis, glycerolipid metabolism, and bile acid biosynthesis pathways. Fermentation is a biochemical process during which yeast and certain bacteria convert sugars to ethanol, carbon dioxide, as well as other metabolic byproducts. The average human digestive system produces approximately 3g of ethanol per day through fermentation of its contents. Such production generally does not have any forensic significance because the ethanol is broken down before significant intoxication ensues. These trace amounts of alcohol range from 0.1 to in the blood of healthy humans, with some measurements as high as . Auto-brewery syndrome is a condition characterized by significant fermentation of ingested carbohydrates within the body. In rare cases, intoxicating quantities of ethanol may be produced, especially after eating meals. Claims of endogenous fermentation have been attempted as a defense against drunk driving charges, some of which have been successful, but the condition is under-researched. Absorption. 
Ethanol is most commonly ingested by mouth, but other routes of administration are possible, such as inhalation, enema, or by intravenous injection. With oral administration, the ethanol is absorbed into the portal venous blood through the mucosa of the gastrointestinal tract, such as in the oral cavity, stomach, duodenum, and jejunum. The oral bioavailability of ethanol is quite high, with estimates ranging from 80% at a minimum to 94%-96%. The ethanol molecule is small and uncharged, and easily crosses biological membranes by passive diffusion. The absorption rate of ethanol is typically modeled as a first-order kinetic process depending on the concentration gradient and specific membrane. The rate of absorption is fastest in the duodenum and jejunum, owing to the larger absorption surface area provided by the villi and microvilli of the small intestines. Gastric emptying is therefore an important consideration when estimating the overall rate of absorption in most scenarios; the presence of a meal in the stomach delays gastric emptying, and absorption of ethanol into the blood is consequently slower. Due to irregular gastric emptying patterns, the rate of absorption of ethanol is unpredictable, varying significantly even between drinking occasions. In experiments, aqueous ethanol solutions have been given intravenously or rectally to avoid this variation. The delay in ethanol absorption caused by food is similar regardless of whether food is consumed just before, at the same time, or just after ingestion of ethanol. The type of food, whether fat, carbohydrates, or protein, also is of little importance. Not only does food slow the absorption of ethanol, but it also reduces the bioavailability of ethanol, resulting in lower circulating concentrations. Regarding inhalation, early experiments with animals showed that it was possible to produce significant BAC levels comparable to those obtained by injection, by forcing the animal to breathe alcohol vapor. In humans, concentrations of ethanol in air above 10 mg/L caused initial coughing and smarting of the eyes and nose, which went away after adaptation. 20 mg/L was just barely tolerable. Concentrations above 30 mg/L caused continuous coughing and tears, and concentrations above 40 mg/L were described as intolerable, suffocating, and impossible to bear for even short periods. Breathing air with concentration of 15 mg/L ethanol for 3 hours resulted in BACs from 0.2 to 4.5 g/L, depending on breathing rate. It is not a particularly efficient or enjoyable method of becoming intoxicated. Ethanol is not absorbed significantly through intact skin. The steady state flux is . Applying a 70% ethanol solution to a skin area of for 1 hr would result in approximately of ethanol being absorbed. The substantially increased levels of ethanol in the blood reported for some experiments are likely due to inadvertent inhalation. A study that did not prevent respiratory uptake found that applying 200 mL of hand disinfectant containing 95% w/w ethanol (150 g ethanol total) over the course of 80 minutes in a 3-minutes-on 5-minutes-off pattern resulted in the median BAC among volunteers peaking 30 minutes after the last application at 17.5 mg/L (0.00175%). This BAC roughly corresponds to drinking one gram of pure ethanol. Ethanol is rapidly absorbed through cut or damaged skin, with reports of ethanol intoxication and fatal poisoning. 
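The first-order absorption kinetics mentioned at the start of this section can be sketched numerically. The rate constant and time points below are illustrative assumptions only, not values taken from the literature:

# Minimal sketch of first-order absorption from the gut: the amount still
# to be absorbed decays exponentially, so the absorption rate is
# proportional to what remains in the gut.
import math

def fraction_absorbed(t_hours, k_abs=3.0):
    """Fraction of an oral dose absorbed after t hours (constant, assumed k)."""
    return 1.0 - math.exp(-k_abs * t_hours)

for t in (0.25, 0.5, 1.0):
    print(f"t = {t:4.2f} h: {fraction_absorbed(t):.0%} absorbed")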
The timing of peak blood concentration varies depending on the type of alcoholic drink: Also, carbonated alcoholic drinks seem to have a shorter onset compared to flat drinks of the same volume. One theory is that carbon dioxide in the bubbles somehow speeds the flow of alcohol into the intestines. Absorption is reduced by a large meal. Stress speeds up absorption. Distribution. After absorption, the alcohol goes through the portal vein to the liver, then through the hepatic veins to the heart, then the pulmonary arteries to the lungs, then the pulmonary veins to the heart again, and then enters systemic circulation. Once in systemic circulation, ethanol distributes throughout the body, diffusing passively and crossing all biological membranes including the blood-brain barrier. At equilibrium, ethanol is present in all body fluids and tissues in proportion to their water content. Ethanol does not bind to plasma proteins or other biomolecules. The rate of distribution depends on blood supply, specifically the cross-sectional area of the local capillary bed and the blood flow per gram of tissue. As such, ethanol rapidly affects the brain, liver, and kidneys, which have high blood flow. Other tissues with lower circulation, such as skeletal muscle and bone, take longer for ethanol to distribute into. In rats, it takes around 10–15 minutes for tissue and venous blood to reach equilibrium. Peak circulating levels of ethanol are usually reached within a range of 30 to 90 minutes of ingestion, with an average of 45 to 60 minutes. People who have fasted overnight have been found to reach peak ethanol concentrations more rapidly, within 30 minutes of ingestion. The volume of distribution Vd contributes about 15% of the uncertainty to Widmark's equation and has been the subject of much research. Widmark originally used units of mass (g/kg) for EBAC, so he calculated the apparent mass of distribution Md, the mass of blood in kilograms. He fitted an equation formula_0 in terms of the body weight W in kg, finding an average rho-factor of 0.68 for men and 0.55 for women. This ρm has units of dose per body weight (g/kg) divided by concentration (g/kg) and is therefore dimensionless. However, modern calculations use weight/volume concentrations (g/L) for EBAC, so Widmark's rho-factors must be adjusted for the density of blood, 1.055 g/mL. This formula_1 has units of dose per body weight (g/kg) divided by concentration (g/L blood); the calculation gives values of 0.64 L/kg for men and 0.52 L/kg for women, lower than the original. Newer studies have updated these values to population-average ρv of 0.71 L/kg for men and 0.58 L/kg for women. But individual Vd values may vary significantly: the 95% range for ρv is 0.58-0.83 L/kg for males and 0.43-0.73 L/kg for females. A more accurate method for calculating Vd is to use total body water (TBW); experiments have confirmed that alcohol distributes almost exactly in proportion to TBW within the Widmark model. TBW may be calculated using body composition analysis or estimated using anthropometric formulas based on age, height, and weight. Vd is then given by formula_2, where formula_3 is the water content of blood, approximately 0.825 w/v for men and 0.838 w/v for women. These calculations assume Widmark's zero-order model for the effects of metabolization, and assume that TBW is almost exactly the volume of distribution of ethanol. Using a more complex model that accounts for non-linear metabolism, Norberg found that Vd was only 84-87% of TBW.
This finding was not reproduced in a newer study which found volumes of distribution similar to those in the literature. Metabolism. Several metabolic pathways exist: Detailed ADH pathway. The reaction from ethanol to carbon dioxide and water proceeds in at least 11 steps in humans. C2H6O (ethanol) is converted to C2H4O (acetaldehyde), then to C2H4O2 (acetic acid), then to acetyl-CoA. Once acetyl-CoA is formed, it is free to enter directly into the citric acid cycle (TCA) and is converted to 2 CO2 molecules in 8 reactions. The equations: C2H6O(ethanol) + NAD+ → C2H4O(acetaldehyde) + NADH + H+ C2H4O(acetaldehyde) + NAD+ + H2O → C2H4O2(acetic acid) + NADH + H+ C2H4O2(acetic acid) + CoA + ATP → Acetyl-CoA + AMP + PPi The Gibbs free energy is simply calculated from the free energy of formation of the product and reactants. If catabolism of alcohol goes all the way to completion, then we have a very exothermic event yielding some of energy. If the reaction stops part way through the metabolic pathways, which happens because acetic acid is excreted in the urine after drinking, then not nearly as much energy can be derived from alcohol, indeed, only . At the very least, the theoretical limits on energy yield are determined to be to . The first with NADH is endothermic, requiring of alcohol, or about 3 molecules of adenosine triphosphate (ATP) per molecule of ethanol. Variation. Variations in genes influence alcohol metabolism and drinking behavior. Certain amino acid sequences in the enzymes used to oxidize ethanol are conserved (unchanged) going back to the last common ancestor over 3.5bya. Evidence suggests that humans evolved the ability to metabolize dietary ethanol between 7 and 21 million years ago, in a common ancestor shared with chimpanzees and gorillas but not orangutans. Gene variation in these enzymes can lead to variation in catalytic efficiency between individuals. Some individuals have less effective metabolizing enzymes of ethanol, and can experience more marked symptoms from ethanol consumption than others. However, those having acquired alcohol tolerance have a greater quantity of these enzymes, and metabolize ethanol more rapidly. Specifically, ethanol has been observed to be cleared more quickly by regular drinkers than non-drinkers. Falsely high BAC readings may be seen in patients with kidney or liver disease or failure. Such persons also have impaired acetaldehyde dehydrogenase, which causes acetaldehyde levels to peak higher, producing more severe hangovers and other effects such as flushing and tachycardia. Conversely, members of certain ethnicities that traditionally did not use alcoholic beverages have lower levels of alcohol dehydrogenases and thus "sober up" very slowly but reach lower aldehyde concentrations and have milder hangovers. The rate of detoxification of alcohol can also be slowed by certain drugs which interfere with the action of alcohol dehydrogenases, notably aspirin, furfural (which may be found in fusel alcohol), fumes of certain solvents, many heavy metals, and some pyrazole compounds. Also suspected of having this effect are cimetidine, ranitidine, and acetaminophen (paracetamol). An "abnormal" liver with conditions such as hepatitis, cirrhosis, gall bladder disease, and cancer is likely to result in a slower rate of metabolism. People under 25 and women may process alcohol more slowly. Food such as fructose can increase the rate of alcohol metabolism. 
The effect can vary significantly from person to person, but a 100 g dose of fructose has been shown to increase alcohol metabolism by an average of 80%. In people with proteinuria and hematuria, fructose can cause falsely high BAC readings, due to kidney-liver metabolism. First-pass metabolism. During a typical drinking session, approximately 90% of the metabolism of ethanol occurs in the liver. Alcohol dehydrogenase and aldehyde dehydrogenase are present at their highest concentrations (in liver mitochondria). But these enzymes are widely expressed throughout the body, such as in the stomach and small intestine. Some alcohol undergoes a first pass of metabolism in these areas, before it ever enters the bloodstream. In alcoholics. Under alcoholic conditions, the citric acid cycle is stalled by the oversupply of NADH derived from ethanol oxidation. The resulting backup of acetate shifts the reaction equilibrium for acetaldehyde dehydrogenase back towards acetaldehyde. Acetaldehyde subsequently accumulates and begins to form covalent bonds with cellular macromolecules, forming toxic adducts that, eventually, lead to death of the cell. This same excess of NADH from ethanol oxidation causes the liver to move away from fatty acid oxidation, which produces NADH, towards fatty acid synthesis, which consumes NADH. This consequent lipogenesis is believed to account largely for the pathogenesis of alcoholic fatty liver disease. In human fetuses. In human embryos and fetuses, ethanol is not metabolized via ADH as ADH enzymes are not yet expressed to any significant quantity in human fetal liver (the induction of ADH only starts after birth, and requires years to reach adult levels). Accordingly, the fetal liver cannot metabolize ethanol or other low molecular weight xenobiotics. In fetuses, ethanol is instead metabolized at much slower rates by different enzymes from the cytochrome P-450 superfamily (CYP), in particular by CYP2E1. The low fetal rate of ethanol clearance is responsible for the important observation that the fetal compartment retains high levels of ethanol long after ethanol has been cleared from the maternal circulation by the adult ADH activity in the maternal liver. CYP2E1 expression and activity have been detected in various human fetal tissues after the onset of organogenesis (ca 50 days of gestation). Exposure to ethanol is known to promote further induction of this enzyme in fetal and adult tissues. CYP2E1 is a major contributor to the so-called Microsomal Ethanol Oxidizing System (MEOS) and its activity in fetal tissues is thought to contribute significantly to the toxicity of maternal ethanol consumption. In presence of ethanol and oxygen, CYP2E1 is known to release superoxide radicals and induce the oxidation of polyunsaturated fatty acids to toxic aldehyde products like 4-hydroxynonenal (HNE). The concentration of alcohol in breast milk produced during lactation is closely correlated to the individual's blood alcohol content. Elimination. Alcohol is removed from the bloodstream by a combination of metabolism, excretion, and evaporation. 90-98% of ingested ethanol is metabolized into carbon dioxide and water. Around 5 to 10% of ethanol that is ingested is excreted unchanged in urine, breath, and sweat. Transdermal alcohol that diffuses through the skin as insensible perspiration or is exuded as sweat (sensible perspiration) can be detected using wearable sensor technology such as SCRAM ankle bracelet or the more discreet ION Wearable. 
Ethanol or its metabolites may be detectable in urine for up to 96 hours (3–5 days) after ingestion. Unlike most physiologically active materials, in typical recreational use, ethanol is removed from the bloodstream at an approximately constant rate (linear decay or zero-order kinetics), rather than at a rate proportional to the current concentration (exponential decay with a characteristic elimination half-life). This is because typical doses of alcohol saturate the enzymes' capacity. In Widmark's model, the elimination rate from the blood, β, contributes 60% of the uncertainty. Similarly to ρ, its value depends on the units used for blood. β varies 58% by occasion and 42% between subjects; it is thus difficult to determine β precisely, and more practical to use a mean and a range of values. Typical elimination rates range from 10 to 34 mg/dL per hour, with Jones recommending the range 0.10 - 0.25 g/L/h for forensic purposes, for all subjects. Earlier studies found mean elimination rates of 15 mg/dL per hour for men and 18 mg/dL per hour for women, but Jones found 0.148 g/L/h and 0.156 g/L/h respectively. Although the difference between sexes is statistically significant, it is small compared to the overall uncertainty, so Jones recommends using the value 0.15 for the mean for all subjects. This mean rate is very roughly 8 grams of pure ethanol per hour (one British unit). Explanations for the gender difference are quite varied and include liver size, secondary effects of the volume of distribution, and sex-specific hormones. A 2023 study using a more complex two-compartment model with M-M elimination kinetics, with data from 60 men and 12 women, found statistically small effects of gender on maximal elimination rate and excluded them from the final model. At concentrations below 0.15-0.20 g/L, alcohol is eliminated more slowly and the elimination rate more closely follows first-order kinetics. The overall behavior of the elimination rate is described well by Michaelis–Menten kinetics. This change in behavior was not noticed by Widmark because he could not analyze low BAC levels. The rate of elimination of ethanol is also increased at very high concentrations, such as in overdose, again more closely following first-order kinetics, with an elimination half-life of about 4 or 4.5 hours (a clearance rate of approximately 6 L/hour/70 kg). This is thought to be due to increased activity of CYP2E1. Eating food in proximity to drinking increases elimination rate significantly, mainly due to increased metabolism. Modeling. In fasting volunteers, blood levels of ethanol increase proportionally with the dose of ethanol administered. Peak blood alcohol concentrations may be estimated by dividing the amount of ethanol ingested by the body weight of the individual and correcting for water dilution. For time-dependent calculations, Swedish professor Erik Widmark developed a model of alcohol pharmacokinetics in the 1920s. The model corresponds to a single-compartment model with instantaneous absorption and zero-order kinetics for elimination. The model is most accurate when used to estimate BAC a few hours after drinking a single dose of alcohol in a fasted state, and can be within 20% CV of the true value. It is less accurate for BAC levels below 0.2 g/L (alcohol is not eliminated as quickly as predicted) and consumption with food (overestimating the peak BAC and time to return to zero). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
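The Widmark calculation outlined in the Modeling section above can be written out as a minimal sketch, using the population-average parameters quoted earlier in this article (ρ of roughly 0.71 L/kg for men and 0.58 L/kg for women, and a mean elimination rate β of 0.15 g/L per hour); it assumes instantaneous absorption and ignores individual variation:

# Sketch of the zero-order Widmark model: instantaneous absorption into a
# single compartment, then elimination at a constant rate. The rho-factors
# and elimination rate are the population averages quoted in this article;
# individual values vary widely, so this gives only a rough estimate.

def widmark_bac(dose_g, weight_kg, hours, sex="m"):
    """Estimated blood alcohol concentration in g/L at a given time after drinking."""
    rho = 0.71 if sex == "m" else 0.58   # L/kg, population averages
    beta = 0.15                          # g/L per hour, mean elimination rate
    c0 = dose_g / (rho * weight_kg)      # peak concentration, g/L
    return max(0.0, c0 - beta * hours)

# Example: 40 g of ethanol (roughly five British units at 8 g per unit)
# in an 80 kg man, three hours after drinking:
print(widmark_bac(40, 80, 3))   # about 0.25 g/L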
[ { "math_id": 0, "text": "M_d=\\rho_m W" }, { "math_id": 1, "text": "\\rho_v = V_d / W" }, { "math_id": 2, "text": "TBW_\\text{kg} / F_{\\text{water}}" }, { "math_id": 3, "text": "F_{\\text{water}}" } ]
https://en.wikipedia.org/wiki?curid=76598831
76599128
Agglomerate (steel industry)
Steel industry agglomerate Agglomerate is a material composed of iron oxides and gangue, roasted and sintered in an agglomeration plant. This product is obtained by burning coal previously mixed with iron ore and oxides. This conditioning of iron ore optimizes its use in the blast furnace. History. The advantages of agglomeration were identified very early on, but the processes used at the time were not continuous. The primitive method, which consisted of a grindstone grate, was abandoned towards the end of the 19th century because it was too fuel-intensive. Shaft furnaces then replaced it, their much higher efficiency being due both to the confinement of the reaction and to counter-current operation (the solids sink and the gases rise). In these furnaces, iron ores were roasted to obtain the opposite result to the one sought today: in 1895, roasting was carried out at low temperatures to avoid aggregation and to obtain friable ore. At the time, ore roasting furnaces were tanks inspired by blast furnaces and lime kilns, and were not very productive tools. Around 1910, the Greenawald process, which automated the principle, saw some development, enabling the production of 300,000 tonnes a year. In June 1906, A.S. Dwight and R. L. Lloyd built the first agglomerating machine on a chain (also known as a grate), which began agglomerating copper and lead ores. The first agglomeration line for iron ores was built in 1910 in Birdsboro, Pennsylvania. It took some thirty years for the sintering of ores on chains to become widespread in the steel industry. Whereas before the Second World War it was mainly used for reconditioning ore fines, after 1945 it became widespread for processing raw ores. Today, it plays an essential role in the blending of different ores and, above all, in the incorporation of mineral wastes of varying iron content. This recycling role improves profitability and limits the amount of waste generated by steel complexes, which produce numerous iron-rich residues (slag, sludge, dust, etc.). Interests and limitations. Interests. Agglomerate is a product optimized for use in blast furnaces. To do so, it must meet several conditions: Another advantage is the elimination of undesirable elements: the chain agglomeration process eliminates 80-95% of the sulfur present in the ore and its additives. It is also a way of getting rid of zinc, the element that "poisons" blast furnaces, as its vaporization temperature of 907°C corresponds to that of a well-conducted roast. Limitations. On the other hand, agglomerate is an abrasive product that damages blast furnace vessels, especially if these are not designed for it, and it is above all fragile. Repeated handling degrades its grain size and generates fines, making it unsuitable for preparation at sites far from the blast furnaces: pellets are therefore preferable. Cold resistance, particularly to crushing, can be improved by increasing the energy input during sintering. Improving mechanical strength also improves the performance of agglomerates in the processes that use them. The reduction of hematite (Fe2O3) to magnetite (Fe3O4) creates internal stresses. However, seeking greater mechanical strength not only increases the cost of agglomerate production but also degrades reducibility. Composition. Agglomerates are generally classified as acidic or basic.
The complete basicity index ic is calculated as the following ratio of mass concentrations: formula_0 It is often replaced by a simplified basicity index, noted i (or sometimes ia), equal to the ratio CaO / SiO2. An agglomerate with an index ic of less than 1 is said to be acidic; above 1, it is generally said to be basic; equal to 1, it is said to be self-melting (ic=1 being equivalent to ia=1.40). Before the 1950s, agglomerates with an ic value of less than 0.5 were in the majority. Then, when it was realized that agglomerate could incorporate limestone, which until then had been charged into the blast furnace separately, basic indices became widespread: in 1965, indices below 0.5 represented less than 15% of the tonnage of agglomerate produced, while basic agglomerates accounted for 45%. The following relationship is also used: formula_1 with k an empirically determined constant (sometimes taken, for simplicity, equal to 1). Iron reduction is, in itself, favored by a basic environment, and peaks at an ib of between 2 and 2.5. It is also in this range that mechanical strength is best (and also where the slag is least fusible, which complicates its removal from the blast furnace). Above an ib value of 2.6, the proportion of molten agglomerate increases, clogging the pores and slowing down the chemical reactions between gases and oxides. As for acid agglomerates with an ib index of less than 1, softening begins when only around 15% of the ore has been reduced. The optimum basicity index is therefore determined according to the ore used, the technical characteristics of the blast furnace, the intended use of the cast iron and the desired qualities. For example:
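The indices defined above follow directly from the chemical analysis of the agglomerate. A short Python sketch with a purely hypothetical composition (the source's own worked examples are not reproduced here):

# Basicity indices computed from mass concentrations (in %, or any
# consistent unit). The example composition is hypothetical, purely to
# show the arithmetic.

def basicity_indices(CaO, MgO, SiO2, Al2O3, k=1.0):
    ic = (CaO + MgO) / (SiO2 + Al2O3)    # complete basicity index
    ia = CaO / SiO2                      # simplified index i (or ia)
    ib = (CaO + k * MgO) / SiO2          # index with empirical constant k
    return ic, ia, ib

ic, ia, ib = basicity_indices(CaO=10.0, MgO=1.5, SiO2=6.0, Al2O3=1.5, k=1.0)
print(f"ic = {ic:.2f}, ia = {ia:.2f}, ib = {ib:.2f}")
# ic > 1, so this (hypothetical) agglomerate would be classed as basic.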
[ { "math_id": 0, "text": "I_c = \\frac{[CaO]+[MgO]} {[SiO_2]+[Al_2O_3]} \n" }, { "math_id": 1, "text": "\\scriptstyle\\ i_b = \\frac{[CaO] + k \\cdot [MgO]} {[SiO_2]}" }, { "math_id": 2, "text": "\\scriptstyle\\ i = \\frac{[CaO] + [MgO] + [BaO]} {[SiO_2]}\n" } ]
https://en.wikipedia.org/wiki?curid=76599128
765992
Adobe RGB color space
Color space developed by Adobe The Adobe RGB (1998) color space or opRGB is a color space developed by Adobe Inc. in 1998. It was designed to encompass most of the colors achievable on CMYK color printers, but by using RGB primary colors on a device such as a computer display. The Adobe RGB (1998) color space encompasses roughly 30% of the visible colors specified by the CIELAB color space – improving upon the gamut of the sRGB color space, primarily in cyan-green hues. It was subsequently standardized by the IEC as IEC 61966-2-5:1999 under the name opRGB (optional RGB color space) and is used in HDMI. Historical background. Beginning in 1997, Adobe Systems was looking into creating ICC profiles that its consumers could use in conjunction with Photoshop's new color management features. Since not many applications at the time had any ICC color management, most operating systems did not ship with useful profiles. Thomas Knoll, the lead developer of Photoshop, decided to build an ICC profile around specifications he found in the documentation for the SMPTE 240M standard, the precursor to Rec. 709 (though not in its primaries: 240M also defined an EOTF and was thus display-referred, whereas sRGB was created by combining BT.470 PAL and SMPTE C). SMPTE 240M's gamut is wider than that of BT.709 and the same as BT.470 NTSC (System B, G). However, with the release of Photoshop 5.0 nearing, Adobe made the decision to include the profile within the software. Although users loved the wider range of reproducible colors, those familiar with the SMPTE 240M specifications contacted Adobe, informing the company that it had copied the values that described idealized primaries, not actual standard ones (given in a special annex to the standard). The real values were much closer to sRGB's, which avid Photoshop consumers did not enjoy as a working environment. To make matters worse, an engineer had made an error when copying the red primary chromaticity coordinates, resulting in an even more inaccurate representation of the SMPTE standard. On the other hand, the red and blue primaries are the same as in PAL and the green primary is the same as in NTSC 1953 (the blue primary is also the same as in BT.709 and sRGB). Adobe tried numerous tactics to correct the profile, such as correcting the red primary and changing the white point to match that of the CIE Standard Illuminant D50 (though that would also change the primaries and is thus pointless), yet all of the adjustments made CMYK conversion worse than before. In the end, Adobe decided to keep the "incorrect" profile, but changed the name to "Adobe RGB (1998)" in order to avoid a trademark search or infringement. Specifications. Reference viewing conditions. In Adobe RGB (1998), colors are specified as ["R","G","B"] triplets, where each of the "R", "G", and "B" components has values ranging between 0 and 1. When displayed on a monitor, the exact chromaticities of the reference white point [1,1,1], the reference black point [0,0,0], and the primaries ([1,0,0], [0,1,0], and [0,0,1]) are specified. To meet the color appearance requirements of the color space, the luminance of the monitor must be 160.00 cd/m2 at the white point, and 0.5557 cd/m2 at the black point, which implies a contrast ratio of 287.9. Moreover, the black point shall have the same chromaticity as the white point, yet with a luminance equal to 0.34731% of the white point luminance. The ambient illumination level at the monitor faceplate when the monitor is turned off must be 32 lx.
As with sRGB, the "RGB" component values in Adobe RGB (1998) are not proportional to the luminances. Rather, a gamma of approximately 2.2 is assumed, without the linear segment near zero that is present in sRGB. The precise gamma value is 563/256, or 2.19921875. In coverage of the CIE 1931 color space the Adobe RGB (1998) color space covers 52.1%. The chromaticities of the primary colors are red (0.6400, 0.3300), green (0.2100, 0.7100) and blue (0.1500, 0.0600), and the white point corresponds to the CIE Standard Illuminant D65 at (0.3127, 0.3290). Absolute "XYZ" tristimulus values are likewise specified for the reference display white and black points. Normalized "XYZ" tristimulus values can be obtained from absolute luminance "XaYaZa" tristimulus values as follows: formula_0 formula_1 formula_2 where "XKYKZK" and "XWYWZW" are the absolute tristimulus values of the reference display black and white points. The conversion between normalized XYZ to and from Adobe RGB tristimulus values can be done as follows: formula_3 formula_4 As was later defined in the IEC standard, opYCC uses the BT.601 matrix for conversion to YCbCr, in either a full-range or a limited-range form. The display (sink) can signal its YCC quantization range support, and the source can send either one. ICC PCS color image encoding. An image in the ICC Profile Connection Space (PCS) is encoded in 24-bit Adobe RGB (1998) color image encoding. Through the application of the 3x3 matrix below (derived from the inversion of the color space chromaticity coordinates and a chromatic adaptation to CIE Standard Illuminant D50 using the Bradford transformation matrix), the input image's normalized "XYZ" tristimulus values are transformed into "RGB" tristimulus values. The component values would be clipped to the range [0, 1]. formula_5 The "RGB" tristimulus values are then converted to Adobe RGB "R'G'B'" component values through the use of the following component transfer functions: formula_6 formula_7 formula_8 The resulting component values would then be represented in floating point or integer encodings. If it is necessary to encode values from the PCS back to the input device space, the following matrix can be implemented: formula_9 Comparison to sRGB. Gamut. sRGB is an RGB color space proposed by HP and Microsoft in 1996 to approximate the color gamut of the (then) most common computer display devices (CRTs). Since sRGB serves as a "best guess" metric for how another person's monitor produces color, it has become the standard color space for displaying images on the Internet. sRGB's color gamut encompasses just 35% of the visible colors specified by CIE, whereas Adobe RGB (1998) encompasses slightly more than 50% of all visible colors. Adobe RGB (1998) extends into richer cyans and greens than does sRGB – for all levels of luminance. The two gamuts are often compared in mid-tone values (~50% luminance), but clear differences are evident in shadows (~25% luminance) and highlights (~75% luminance) as well. In fact, Adobe RGB (1998) expands its advantages to areas of intense orange, yellow, and magenta regions. Although there is a significant difference between gamut ranges in the CIE "xy" chromaticity diagram, if the coordinates were transformed to fit on the CIE "u′v′" chromaticity diagram, which illustrates the eye's perceived variance in hue more closely, the difference in the green region would be far less exaggerated. Also, although Adobe RGB (1998) can "theoretically" represent a wider gamut of colors, the color space requires special software and a complex workflow in order to utilize its full range.
Otherwise, the produced colors would be squeezed into a smaller range (making them appear duller) in order to match sRGB's more widely used gamut. Bit depth distribution. Although the Adobe RGB (1998) working space clearly provides more colors to utilize, another factor to consider when choosing between color spaces is how each space influences the distribution of the image's bit depth. Color spaces with larger gamuts "stretch" the bits over a broader region of colors, whereas smaller gamuts concentrate these bits within a narrow region. A similar, yet not as dramatic concentration of bit depth occurs with Adobe RGB (1998) versus sRGB, except in three dimensions rather than one. The Adobe RGB (1998) color space occupies roughly 40% more volume than the sRGB color space, which means that one would only be exploiting 70% of the available bit depth if the colors in Adobe RGB (1998) are unnecessary. On the contrary, one may have plenty of "spare" bits if using a 16-bit image, thus negating any reduction due to the choice of working space.
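To make the conversion pipeline concrete, the normalized-XYZ matrices and the 563/256 gamma quoted in the Specifications section can be sketched in a few lines of Python; this sketch ignores the D50-adapted ICC PCS matrices and any quantization to 8-bit values.

# Matrices copied from the normalized-XYZ conversion formulas above (D65-relative).
M_XYZ_TO_RGB = [[ 2.04159, -0.56501, -0.34473],
                [-0.96924,  1.87597,  0.04156],
                [ 0.01344, -0.11836,  1.01517]]
M_RGB_TO_XYZ = [[ 0.57667,  0.18556,  0.18823],
                [ 0.29734,  0.62736,  0.07529],
                [ 0.02703,  0.07069,  0.99134]]
GAMMA = 563 / 256  # exactly 2.19921875

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def xyz_to_adobe_rgb(xyz):
    # Normalized D65 XYZ -> gamma-encoded Adobe RGB (1998); linear values clipped to [0, 1].
    linear = [min(max(c, 0.0), 1.0) for c in matvec(M_XYZ_TO_RGB, xyz)]
    return [c ** (1 / GAMMA) for c in linear]

def adobe_rgb_to_xyz(rgb):
    # Gamma-encoded Adobe RGB (1998) -> normalized D65 XYZ.
    linear = [c ** GAMMA for c in rgb]
    return matvec(M_RGB_TO_XYZ, linear)

# Round trip on the white point [1, 1, 1] returns XYZ close to (0.9505, 1.0000, 1.0891).
print(adobe_rgb_to_xyz([1.0, 1.0, 1.0]))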
[ { "math_id": 0, "text": "X=\\frac{X_a-X_K}{X_W-X_K} \\frac{X_W}{Y_W}" }, { "math_id": 1, "text": "Y=\\frac{Y_a-Y_K}{Y_W-Y_K}" }, { "math_id": 2, "text": "Z=\\frac{Z_a-Z_K}{Z_W-Z_K} \\frac{Z_W}{Y_W}" }, { "math_id": 3, "text": "\n\\begin{bmatrix}R\\\\G\\\\B\\end{bmatrix}=\n\\begin{bmatrix}\n2.04159&-0.56501&-0.34473\\\\\n-0.96924&1.87597&0.04156\\\\\n0.01344&-0.11836&1.01517\n\\end{bmatrix}\n\\begin{bmatrix}X\\\\Y\\\\Z\\end{bmatrix}\n" }, { "math_id": 4, "text": "\n\\begin{bmatrix}X\\\\Y\\\\Z\\end{bmatrix}=\n\\begin{bmatrix}\n0.57667&0.18556&0.18823\\\\\n0.29734&0.62736&0.07529\\\\\n0.02703&0.07069&0.99134\n\\end{bmatrix}\n\\begin{bmatrix}R\\\\G\\\\B\\end{bmatrix}\n" }, { "math_id": 5, "text": "\n\\begin{bmatrix}R\\\\G\\\\B\\end{bmatrix}=\n\\begin{bmatrix}\n1.96253&-0.61068&-0.34137\\\\\n-0.97876&1.91615&0.03342\\\\\n0.02869&-0.14067&1.34926\n\\end{bmatrix}\n\\begin{bmatrix}X\\\\Y\\\\Z\\end{bmatrix}\n" }, { "math_id": 6, "text": "R'=R^\\frac{256}{563}," }, { "math_id": 7, "text": "G'=G^\\frac{256}{563}," }, { "math_id": 8, "text": "B'=B^\\frac{256}{563}" }, { "math_id": 9, "text": "\n\\begin{bmatrix}X\\\\Y\\\\Z\\end{bmatrix}=\n\\begin{bmatrix}\n0.60974&0.20528&0.14919\\\\\n0.31111&0.62567&0.06322\\\\\n0.01947&0.06087&0.74457\n\\end{bmatrix}\n\\begin{bmatrix}R\\\\G\\\\B\\end{bmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=765992
76599980
Bierlein's measure extension theorem
Bierlein's measure extension theorem is a result from measure theory and probability theory on extensions of probability measures. The theorem makes a statement about when one can extend a probability measure to a larger σ-algebra. It is of particular interest for infinite-dimensional spaces. The theorem is named after the German mathematician Dietrich Bierlein, who proved the statement for countable families in 1962. The general case was shown by Albert Ascherl and Jürgen Lehn in 1977. A measure extension theorem of Bierlein. Let formula_0 be a probability space and formula_1 a σ-algebra; then, in general, formula_2 cannot be extended to formula_3. For instance, when formula_4 is countably infinite, this is not always possible. Bierlein's extension theorem says that it is always possible for disjoint families. Statement of the theorem. Bierlein's measure extension theorem is "Let formula_0 be a probability space, formula_5 an arbitrary index set and formula_6 a family of disjoint sets from formula_7. Then there exists an extension formula_8 of formula_2 on formula_9." Related results and generalizations. Bierlein gave a result which stated an implication for uniqueness of the extension. Ascherl and Lehn gave a condition for equivalence. Zbigniew Lipecki proved a variant of the statement in 1979 for group-valued measures (i.e. for "topological Hausdorff group"-valued measures).
[ { "math_id": 0, "text": "(X,\\mathcal{A},\\mu)" }, { "math_id": 1, "text": "\\mathcal{S}\\subset \\mathcal{P}(X)" }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "\\sigma(\\mathcal{A}\\cup \\mathcal{S})" }, { "math_id": 4, "text": "\\mathcal{S}" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": "(A_i)_{i\\in I}" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "\\nu" }, { "math_id": 9, "text": "\\sigma(\\mathcal{A}\\cup\\{A_i\\colon i\\in I\\} )" } ]
https://en.wikipedia.org/wiki?curid=76599980
766014
Ruffini's rule
Polynomial division computation method In mathematics, Ruffini's rule is a method for computation of the Euclidean division of a polynomial by a binomial of the form "x – r". It was described by Paolo Ruffini in 1809. The rule is a special case of synthetic division in which the divisor is a linear factor. Algorithm. The rule establishes a method for dividing the polynomial: formula_0 by the binomial: formula_1 to obtain the quotient polynomial: formula_2 The algorithm is in fact the long division of "P"("x") by "Q"("x"). To divide "P"("x") by "Q"("x"): The "b" values are the coefficients of the result ("R"("x")) polynomial, the degree of which is one less than that of "P"("x"). The final value obtained, "s", is the remainder. The polynomial remainder theorem asserts that the remainder is equal to "P"("r"), the value of the polynomial at "r". Example. Here is an example of polynomial division as described above. Let: formula_8 formula_9 "P"("x") will be divided by "Q"("x") using Ruffini's rule. The main problem is that "Q"("x") is not a binomial of the form "x" − "r", but rather "x" + "r". "Q"("x") must be rewritten as formula_10 Now the algorithm is applied: So, if "original number" = "divisor" × "quotient" + "remainder", then formula_11, where formula_12 and formula_13 Application to polynomial factorization. Ruffini's rule can be used when one needs the quotient of a polynomial P by a binomial of the form formula_14 (When one needs only the remainder, the polynomial remainder theorem provides a simpler method.) A typical example, where one needs the quotient, is the factorization of a polynomial formula_15 for which one knows a root r: The remainder of the Euclidean division of formula_15 by r is 0, and, if the quotient is formula_16 the Euclidean division is written as formula_17 This gives a (possibly partial) factorization of formula_18 which can be computed with Ruffini's rule. Then, formula_15 can be further factored by factoring formula_19 The fundamental theorem of algebra states that every polynomial of positive degree has at least one complex root. The above process shows the fundamental theorem of algebra implies that every polynomial "p"("x") = "a""n""x""n" + "a""n"−1"x""n"−1 + ⋯ + "a"1"x" + "a"0 can be factored as formula_20 where formula_21 are complex numbers. History. The method was invented by Paolo Ruffini, who took part in a competition organized by the Italian Scientific Society (of Forty). The challenge was to devise a method to find the roots of any polynomial. Five submissions were received. In 1804 Ruffini's was awarded first place and his method was published. He later published refinements of his work in 1807 and again in 1813.
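The rule translates directly into a few lines of Python; the minimal sketch below reproduces the example above, dividing P(x) = 2x^3 + 3x^2 - 4 by x + 1 (so r = -1).

def ruffini(coeffs, r):
    # Divide a polynomial by (x - r). coeffs lists coefficients from the highest
    # degree down to the constant term; returns (quotient coefficients, remainder).
    row = [coeffs[0]]                    # bring down the leading coefficient
    for a in coeffs[1:]:
        row.append(row[-1] * r + a)      # multiply by r, add the next coefficient
    return row[:-1], row[-1]

# P(x) = 2x^3 + 3x^2 + 0x - 4 divided by x + 1, i.e. r = -1
print(ruffini([2, 3, 0, -4], -1))        # ([2, 1, -1], -3): quotient 2x^2 + x - 1, remainder -3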
[ { "math_id": 0, "text": "P(x)=a_nx^n+a_{n-1}x^{n-1}+\\cdots+a_1x+a_0" }, { "math_id": 1, "text": "Q(x)=x-r" }, { "math_id": 2, "text": "R(x)=b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\\cdots+b_1x+b_0." }, { "math_id": 3, "text": "\n\\begin{array}{c|c c c c|c}\n & a_n & a_{n-1} & \\dots & a_1 & a_0\\\\\nr & & & & & \\\\\n\\hline\n & & & & & \\\\\n\\end{array}\n" }, { "math_id": 4, "text": "\n\\begin{array}{c|c c c c|c}\n & a_n & a_{n-1} & \\dots & a_1 & a_0\\\\\nr & & & & & \\\\\n\\hline\n & a_n & & & & \\\\\n & =b_{n-1} & & & &\n\\end{array}\n" }, { "math_id": 5, "text": "\n\\begin{array}{c|c c c c|c}\n & a_n & a_{n-1} & \\dots & a_1 & a_0\\\\\nr & & b_{n-1} \\cdot r & & & \\\\\n\\hline\n & a_n & & & & \\\\\n & =b_{n-1} & & & &\n\\end{array}\n" }, { "math_id": 6, "text": "\n\\begin{array}{c|c c c c|c}\n & a_n & a_{n-1} & \\dots & a_1 & a_0\\\\\nr & & b_{n-1}\\cdot r & & & \\\\\n\\hline\n & a_n & b_{n-1}\\cdot r+a_{n-1} & & & \\\\\n & =b_{n-1} & =b_{n-2} & & &\n\\end{array}\n" }, { "math_id": 7, "text": "\n\\begin{array}{c|c c c c|c}\n & a_n & a_{n-1} & \\dots & a_1 & a_0 \\\\\nr & & b_{n-1}\\cdot r & \\dots & b_1\\cdot r & b_0 \\cdot r \\\\\n\\hline\n & a_n & b_{n-1} \\cdot r+a_{n-1} & \\dots & b_1 \\cdot r+a_1 & a_0+b_0 \\cdot r \\\\\n & =b_{n-1} & =b_{n-2} & \\dots & =b_0 & =s \\\\\n\\end{array}\n" }, { "math_id": 8, "text": "P(x)=2x^3+3x^2-4\\,\\!" }, { "math_id": 9, "text": "Q(x)=x+1.\\,\\!" }, { "math_id": 10, "text": "Q(x)=x+1=x-(-1).\\,\\!" }, { "math_id": 11, "text": "P(x)=Q(x)R(x)+s\\,\\!" }, { "math_id": 12, "text": "R(x) = 2x^2+x-1\\,\\!" }, { "math_id": 13, "text": "s=-3; \\quad \\Rightarrow 2x^3+3x^2-4 = (2x^2+x-1)(x+1) - 3\\!" }, { "math_id": 14, "text": "x-r." }, { "math_id": 15, "text": "p(x)" }, { "math_id": 16, "text": "q(x)," }, { "math_id": 17, "text": "p(x)=q(x)\\,(x-r)." }, { "math_id": 18, "text": "p(x)," }, { "math_id": 19, "text": "q(x)." }, { "math_id": 20, "text": "p(x)=a_n(x-r_1)\\cdots(x-r_n)," }, { "math_id": 21, "text": "r_1,\\ldots,r_n" } ]
https://en.wikipedia.org/wiki?curid=766014
7661576
Disgregation
1862 formulation of the concept of entropy In the history of thermodynamics, disgregation is an early formulation of the concept of entropy. It was defined in 1862 by Rudolf Clausius as the magnitude of the degree in which the molecules of a body are separated from each other. Disgregation was the stepping stone for Clausius to create the mathematical expression for the Second Law of Thermodynamics. Clausius modeled the concept on certain passages in French physicist Sadi Carnot's 1824 paper "On the Motive Power of Fire" which characterized the "transformations" of "working substances" (particles of a thermodynamic system) of an engine cycle, namely "mode of aggregation". The concept was later extended by Clausius in 1865 in the formulation of entropy, and in Ludwig Boltzmann's 1870s developments including the diversities of the motions of the microscopic constituents of matter, described in terms of order and disorder. In 1949, Edward Armand Guggenheim developed the concept of energy dispersal. The terms "disgregation" and "dispersal" are near in meaning. Historical context. In 1824, French physicist Sadi Carnot assumed that heat, like a substance, cannot be diminished in quantity and that it cannot increase. Specifically, he states of a complete engine cycle ‘that when a body has experienced any changes, and when after a certain number of transformations it returns to precisely its original state, that is, to that state considered in respect to density, to temperature, to mode of aggregation, let us suppose, I say that this body is found to contain the same quantity of heat that it contained at first, or else that the quantities of heat absorbed or set free in these different transformations are exactly compensated.’ Furthermore, he states that ‘this fact has never been called into question’ and ‘to deny this would overthrow the whole theory of heat to which it serves as a basis.’ This famous sentence, which Carnot spent fifteen years thinking about, marks the start of thermodynamics and signals the slow transition from the older caloric theory to the newer kinetic theory, in which heat is a type of energy in transit. In 1862, Clausius defined what is now known as "entropy" or the energetic effects related to irreversibility as the “equivalence-values of transformations” in a thermodynamic cycle. Clausius then signifies the difference between “reversible” (ideal) and “irreversible” (real) processes: If the cyclical process is reversible, the transformations which occur therein must be partly positive and partly negative, and the equivalence-values of the positive transformations must be together equal to those of the negative transformations, so that the algebraic sum of all the equivalence-values become equal to 0. If the cyclical process is not reversible, the equivalence values of the positive and negative transformations are not necessarily equal, but they can only differ in such a way that the positive transformations predominate. Definition. In 1862, Clausius labelled the quantity of disgregation with the letter Z, and defined its change as the sum of changes in heat Q and enthalpy H divided by the temperature T of the system: formula_0 Clausius introduced disgregation in the following passage: In the cases first mentioned, the arrangements of the molecules is altered.
Since, even while a body remains in the same state of aggregation, its molecules do not retain fixed unvarying positions, but are constantly in a state of more or less extended motion, we may, when speaking of the "arrangement of the molecules" at any particular time, understand either the arrangement which would result from the molecules being fixed in the actual position they occupy at the instant in question, or we may suppose such an arrangement that each molecule occupies its mean position. Now the effect of heat always tends to loosen the connexion between the molecules, and so to increase their mean distances from one another. In order to be able to represent this mathematically, we will express the degree in which the molecules of a body are separated from each other, by introducing a new magnitude, which we will call the "disgregation" of the body, and by help of which we can define the effect of heat as simply "tending to increase the disgregation". The way in which a definite measure of this magnitude can be arrived at will appear from the sequel. Equivalence-values of transformations. Clausius states what he calls the “theorem respecting the equivalence-values of the transformations” or what is now known as the second law of thermodynamics, as such: The algebraic sum of all the transformations occurring in a cyclical process can only be positive, or, as an extreme case, equal to nothing. Quantitatively, Clausius states the mathematical expression for this theorem is as follows. Let "dQ" be an element of the heat given up by the body to any reservoir of heat during its own changes, heat which it may absorb from a reservoir being here reckoned as negative, and "T" the absolute temperature of the body at the moment of giving up this heat, then the equation: formula_1 must be true for every reversible cyclical process, and the relation: formula_2 must hold good for every cyclical process which is in any way possible. Verbal justification. Clausius then points out the inherent difficulty in the mental comprehension of this law by stating: "although the necessity of this theorem admits of strict mathematical proof if we start from the fundamental proposition above quoted, it thereby nevertheless retains an abstract form, in which it is with difficulty embraced by the mind, and we feel compelled to seek for the precise physical cause, of which this theorem is a consequence." The justification for this law, according to Clausius, is based on the following argument: To elaborate on this, Clausius states that in all cases in which heat can perform mechanical work, these processes always admit of being reduced to the “alteration in some way or another of the arrangement of the constituent parts of the body.” To exemplify this, Clausius moves into a discussion of change of state of a body, i.e. solid, liquid, gas. For instance, he states, “when bodies are expanded by heat, their molecules being thus separated from each other: in this case the mutual attractions of the molecules on the one hand, and external opposing forces on the other, insofar as any such are in operation, have to be overcome. Again, the state of aggregation of bodies is altered by heat, solid bodies rendered liquid, and both solid and liquid bodies being rendered aeriform: here likewise internal forces, and in general external forces also, have to be overcome.” Ice melting.
Clausius discusses the example of the melting of ice, a classic example which is used in almost all chemistry books to this day, and explains a representation of the mechanical equivalent of work related to this energetic change mathematically: The forces exerted upon one another by the molecules are not of so simple a kind that each molecule can be replaced by a mere point; for many cases occur in which it can be easily seen that we have not merely to consider the distances of the molecules, but also their relative positions. If we take, for example, the melting of ice, there is no doubt that interior forces, exerted by the molecules upon each other, are overcome, and accordingly increase of disgregation takes place; nevertheless the centers of gravity of the molecules are on the average not so far removed from each other in the liquid water as they were in the ice, for the water is the denser of the two. Again, the peculiar behaviour of water in contracting when heated above 0°C., and only beginning to expand when its temperature exceeds 4°, shows that likewise in liquid water, in the neighbourhood of its melting-point, increase of disgregation is not accompanied by increase of the mean distances of its molecules. Measurement. As it is difficult to obtain direct measures of the interior forces that the molecules of the body exert on each other, Clausius states that an indirect way to obtain quantitative measures of what is now called entropy is to calculate the work done in overcoming internal forces: In the case of the interior forces, it would accordingly be difficult—even if we did not want to measure them, but only to represent them mathematically—to find a fitting expression for them which would admit of a simple determination of the magnitude. This difficulty, however, disappears if we take into calculation, not the forces themselves, but the mechanical work which, in any change of arrangement, is required to overcome them. The expressions for the quantities of work are simpler than those for the corresponding forces; for the quantities of work can be all expressed, without further secondary statements, by the numbers which, having reference to the same unit, can be added together, or subtracted from one another, however various the forces may be to which they refer. It is therefore convenient to alter the form of the above law by introducing, instead of the forces themselves, the work done in overcoming them. In this form it reads as follows: The mechanical work which can be done by heat during any change of the "arrangement of a body" is proportional to the absolute temperature at which this change occurs.
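As a small numerical illustration of the equivalence-value sums above, the following Python lines check the reversible case for a hypothetical Carnot cycle, using Clausius's sign convention in which heat given up by the body is counted as positive.

# Hypothetical reversible Carnot cycle: 1200 J absorbed at 600 K (counted negative under
# the convention) and 600 J given up at 300 K.
transfers = [(-1200.0, 600.0), (600.0, 300.0)]   # (heat given up dQ, absolute temperature T)
print(sum(dQ / T for dQ, T in transfers))        # 0.0 for a reversible cycle; positive if irreversible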
[ { "math_id": 0, "text": "dZ = \\frac{dQ+dH}{T}," }, { "math_id": 1, "text": "\\int \\frac{dQ}{T} = 0" }, { "math_id": 2, "text": "\\int \\frac{dQ}{T} \\ge 0" } ]
https://en.wikipedia.org/wiki?curid=7661576
7661871
Propeller (aeronautics)
Aircraft propulsion component In aeronautics, an aircraft propeller, also called an airscrew, converts rotary motion from an engine or other power source into a swirling slipstream which pushes the propeller forwards or backwards. It comprises a rotating power-driven hub, to which are attached several radial airfoil-section blades such that the whole assembly rotates about a longitudinal axis. The blade pitch may be fixed, manually variable to a few set positions, or of the automatically variable "constant-speed" type. The propeller attaches to the power source's driveshaft either directly or through reduction gearing. Propellers can be made from wood, metal or composite materials. Propellers are most suitable for use at subsonic airspeeds generally below about , although supersonic speeds were achieved in the McDonnell XF-88B experimental propeller-equipped aircraft. Supersonic tip-speeds are used in some aircraft like the Tupolev Tu-95, which can reach . History. The earliest references for vertical flight came from China. Since around 400 BC, Chinese children have played with bamboo flying toys. This bamboo-copter is spun by rolling a stick attached to a rotor between one's hands. The spinning creates lift, and the toy flies when released. The 4th-century AD Daoist book "Baopuzi" by Ge Hong (抱朴子 "Master who Embraces Simplicity") reportedly describes some of the ideas inherent to rotary wing aircraft. Designs similar to the Chinese helicopter toy appeared in Renaissance paintings and other works. It was not until the early 1480s, when Leonardo da Vinci created a design for a machine that could be described as an "aerial screw", that any recorded advancement was made towards vertical flight. His notes suggested that he built small flying models, but there were no indications for any provision to stop the rotor from making the craft rotate. As scientific knowledge increased and became more accepted, man continued to pursue the idea of vertical flight. Many of these later models and machines would more closely resemble the ancient bamboo flying top with spinning wings, rather than Leonardo's screw. In July 1754, Russian Mikhail Lomonosov had developed a small coaxial modeled after the Chinese top but powered by a wound-up spring device and demonstrated it to the Russian Academy of Sciences. It was powered by a spring, and was suggested as a method to lift meteorological instruments. In 1783, Christian de Launoy, and his mechanic, Bienvenu, used a coaxial version of the Chinese top in a model consisting of contrarotating turkey flight feathers as rotor blades, and in 1784, demonstrated it to the French Academy of Sciences. A dirigible airship was described by Jean Baptiste Marie Meusnier presented in 1783. The drawings depict a streamlined envelope with internal ballonets that could be used for regulating lift. The airship was designed to be driven by three propellers. In 1784 Jean-Pierre Blanchard fitted a hand-powered propeller to a balloon, the first recorded means of propulsion carried aloft. Sir George Cayley, influenced by a childhood fascination with the Chinese flying top, developed a model of feathers, similar to that of Launoy and Bienvenu, but powered by rubber bands. By the end of the century, he had progressed to using sheets of tin for rotor blades and springs for power. His writings on his experiments and models would become influential on future aviation pioneers. 
William Bland sent designs for his "Atmotic Airship" to the Great Exhibition held in London in 1851, where a model was displayed. This was an elongated balloon with a steam engine driving twin propellers suspended underneath. Alphonse Pénaud developed coaxial rotor model helicopter toys in 1870, also powered by rubber bands. In 1872 Dupuy de Lome launched a large navigable balloon, which was driven by a large propeller turned by eight men. Hiram Maxim built a craft that weighed , with a wingspan that was powered by two steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. One of Pénaud's toys, given as a gift by their father, inspired the Wright brothers to pursue the dream of flight. The twisted airfoil (aerofoil) shape of an aircraft propeller was pioneered by the Wright brothers. While some earlier engineers had attempted to model air propellers on marine propellers, the Wright Brothers realized that a propeller is essentially the same as a wing, and were able to use data from their earlier wind tunnel experiments on wings, introducing a twist along the length of the blades. This was necessary to maintain a more uniform angle of attack of the blade along its length. Their original propeller blades had an efficiency of about 82%, compared to 90% for a modern (2010) small general aviation propeller, the 3-blade McCauley used on a Beechcraft Bonanza aircraft. Roper quotes 90% for a propeller for a human-powered aircraft. Mahogany was the wood preferred for propellers through World War I, but wartime shortages encouraged use of walnut, oak, cherry and ash. Alberto Santos Dumont was another early pioneer, having designed propellers before the Wright Brothers for his airships. He applied the knowledge he gained from experiences with airships to make a propeller with a steel shaft and aluminium blades for his 14 bis biplane in 1906. Some of his designs used a bent aluminium sheet for blades, thus creating an airfoil shape. They were heavily undercambered, and this plus the absence of lengthwise twist made them less efficient than the Wright propellers. Even so, this was perhaps the first use of aluminium in the construction of an airscrew. Originally, a rotating airfoil behind the aircraft, which pushes it, was called a propeller, while one which pulled from the front was a tractor. Later the term 'pusher' became adopted for the rear-mounted device in contrast to the tractor configuration and both became referred to as 'propellers' or 'airscrews'. The understanding of low speed propeller aerodynamics was fairly complete by the 1920s, but later requirements to handle more power in a smaller diameter have made the problem more complex. Propeller research for National Advisory Committee for Aeronautics (NACA) was directed by William F. Durand from 1916. Parameters measured included propeller efficiency, thrust developed, and power absorbed. While a propeller may be tested in a wind tunnel, its performance in free-flight might differ. At the Langley Memorial Aeronautical Laboratory, E. P. Leslie used Vought VE-7s with Wright E-4 engines for data on free-flight, while Durand used reduced size, with similar shape, for wind tunnel data. Their results were published in 1926 as NACA report #220. Theory and design. Lowry quotes a propeller efficiency of about 73.5% at cruise for a Cessna 172. 
This is derived from his "Bootstrap approach" for analyzing the performance of light general aviation aircraft using fixed pitch or constant speed propellers. The efficiency of the propeller is influenced by the angle of attack (α). This is defined as α = Φ - θ, where θ is the helix angle (the angle between the resultant relative velocity and the blade rotation direction) and Φ is the blade pitch angle. Very small pitch and helix angles give a good performance against resistance but provide little thrust, while larger angles have the opposite effect. The best helix angle is when the blade is acting as a wing producing much more lift than drag. However, 'lift-and-drag' is only one way to express the aerodynamic force on the blades. To explain aircraft and engine performance the same force is expressed slightly differently in terms of thrust and torque since the required output of the propeller is thrust. Thrust and torque are the basis of the definition for the efficiency of the propeller as shown below. The advance ratio of a propeller is similar to the angle of attack of a wing. A propeller's efficiency is determined by formula_0 Propellers are similar in aerofoil section to a low-drag wing and as such are poor in operation when at other than their optimum angle of attack. Therefore, most propellers use a variable pitch mechanism to alter the blades' pitch angle as engine speed and aircraft velocity are changed. A further consideration is the number and the shape of the blades used. Increasing the aspect ratio of the blades reduces drag but the amount of thrust produced depends on blade area, so using high-aspect blades can result in an excessive propeller diameter. A further balance is that using a smaller number of blades reduces interference effects between the blades, but to have sufficient blade area to transmit the available power within a set diameter means a compromise is needed. Increasing the number of blades also decreases the amount of work each blade is required to perform, limiting the local Mach number – a significant performance limit on propellers. The performance of a propeller suffers when transonic flow first appears on the tips of the blades. As the relative air speed at any section of a propeller is a vector sum of the aircraft speed and the tangential speed due to rotation, the flow over the blade tip will reach transonic speed well before the aircraft does. When the airflow over the tip of the blade reaches its critical speed, drag and torque resistance increase rapidly and shock waves form creating a sharp increase in noise. Aircraft with conventional propellers, therefore, do not usually fly faster than Mach 0.6. There have been propeller aircraft which attained up to the Mach 0.8 range, but the low propeller efficiency at this speed makes such applications rare. Blade twist. The tip of a propeller blade travels faster than the hub. Therefore, it is necessary for the blade to be twisted so as to decrease the angle of attack of the blade gradually and therefore produce uniform lift from the hub to the tip. The greatest angle of incidence, or the highest pitch, is at the hub while the smallest angle of incidence or smallest pitch is at the tip. A propeller blade designed with the same angle of incidence throughout its entire length would be inefficient because as airspeed increases in flight, the portion near the hub would have a negative AOA while the blade tip would be stalled. High speed. 
There have been efforts to develop propellers and propfans for aircraft at high subsonic speeds. The 'fix' is similar to that of transonic wing design. Thin blade sections are used and the blades are swept back in a scimitar shape (scimitar propeller) in a manner similar to wing sweepback, so as to delay the onset of shockwaves as the blade tips approach the speed of sound. The maximum relative velocity is kept as low as possible by careful control of pitch to allow the blades to have large helix angles. A large number of blades are used to reduce work per blade and so circulation strength. Contra-rotating propellers are used. The propellers designed are more efficient than turbo-fans and their cruising speed (Mach 0.7–0.85) is suitable for airliners, but the noise generated is tremendous (see the Antonov An-70 and Tupolev Tu-95 for examples of such a design). Physics. Forces acting on the blades of an aircraft propeller include the following. Some of these forces can be arranged to counteract each other, reducing the overall mechanical stresses imposed. Thrust loads on the blades, in reaction to the force pushing the air backwards, act to bend the blades forward. Blades are therefore often "raked" forwards, such that the outward centrifugal force of rotation acts to bend them backwards, thus balancing out the bending effects. A centrifugal twisting force is experienced by any asymmetrical spinning object. In the propeller it acts to twist the blades to a fine pitch. The aerodynamic centre of pressure is therefore usually arranged to be slightly forward of its mechanical centreline, creating a twisting moment towards coarse pitch and counteracting the centrifugal moment. However in a high-speed dive the aerodynamic force can change significantly and the moments can become unbalanced. Centrifugal force is felt by the blades, acting to pull them away from the hub when turning. It can be arranged to help counteract the thrust bending force, as described above. Air resistance acting against the blades, combined with inertial effects, causes propeller blades to bend away from the direction of rotation. Many types of disturbance set up vibratory forces in blades. These include aerodynamic excitation as the blades pass close to the wing and fuselage. Piston engines introduce torque impulses which may excite vibratory modes of the blades and cause fatigue failures. Torque impulses are not present when driven by a gas turbine engine. Variable pitch. The purpose of varying pitch angle is to maintain an optimal angle of attack for the propeller blades, giving maximum efficiency throughout the flight regime. This reduces fuel usage. Only by maximising propeller efficiency at high speeds can the highest possible speed be achieved. Effective angle of attack decreases as airspeed increases, so a coarser pitch is required at high airspeeds. The requirement for pitch variation is shown by the propeller performance during the Schneider Trophy competition in 1931. The Fairey Aviation Company fixed-pitch propeller used was partially stalled on take-off and up to on its way up to a top speed of . The very wide speed range was achieved because some of the usual requirements for aircraft performance did not apply. There was no compromise on top-speed efficiency, the take-off distance was not restricted to available runway length and there was no climb requirement.
The variable pitch blades used on the Tupolev Tu-95 propel it at a speed exceeding the maximum once considered possible for a propeller-driven aircraft using an exceptionally coarse pitch. Mechanisms. Early pitch control settings were pilot operated, either with a small number of preset positions or continuously variable. The simplest mechanism is the ground-adjustable propeller, which may be adjusted on the ground, but is effectively a fixed-pitch prop once airborne. The spring-loaded "two-speed" VP prop is set to fine for takeoff, and then triggered to coarse once in cruise, the propeller remaining coarse for the remainder of the flight. After World War I, automatic propellers were developed to maintain an optimum angle of attack. This was done by balancing the centripetal twisting moment on the blades and a set of counterweights against a spring and the aerodynamic forces on the blade. Automatic props had the advantage of being simple, lightweight, and requiring no external control, but a particular propeller's performance was difficult to match with that of the aircraft's power plant. The most common variable pitch propeller is the constant-speed propeller. This is controlled by a hydraulic constant speed unit (CSU). It automatically adjusts the blade pitch in order to maintain a constant engine speed for any given power control setting. Constant-speed propellers allow the pilot to set a rotational speed according to the need for maximum engine power or maximum efficiency, and a propeller governor acts as a closed-loop controller to vary propeller pitch angle as required to maintain the selected engine speed. In most aircraft this system is hydraulic, with engine oil serving as the hydraulic fluid. However, electrically controlled propellers were developed during World War II and saw extensive use on military aircraft, and have recently seen a revival in use on home-built aircraft. Another design is the V-Prop, which is self-powering and self-governing. Feathering. On most variable-pitch propellers, the blades can be rotated parallel to the airflow to stop rotation of the propeller and reduce drag when the engine fails or is deliberately shut down. This is called "feathering", a term borrowed from rowing. On single-engined aircraft, whether a powered glider or turbine-powered aircraft, the effect is to increase the gliding distance. On a multi-engine aircraft, feathering the propeller on an inoperative engine reduces drag, and helps the aircraft maintain speed and altitude with the operative engines. Feathering also prevents "windmilling", the turning of engine components by the propeller rotation forced by the slipstream; windmilling can damage the engine, start a fire, or cause structural damage to the aircraft. Most feathering systems for reciprocating engines sense a drop in oil pressure and move the blades toward the feather position, and require the pilot to pull the propeller control back to disengage the high-pitch stop pins before the engine reaches idle RPM. Turboprop control systems usually use a "negative torque sensor" in the reduction gearbox, which moves the blades toward feather when the engine is no longer providing power to the propeller. Depending on design, the pilot may have to push a button to override the high-pitch stops and complete the feathering process or the feathering process may be automatic. 
Accidental feathering is dangerous and can result in an aerodynamic stall; as seen for example with Yeti Airlines Flight 691 which crashed during approach due to accidental feathering. Reverse pitch. The propellers on some aircraft can operate with a negative blade pitch angle, and thus reverse the thrust from the propeller. This is known as Beta Pitch. Reverse thrust is used to help slow the aircraft after landing and is particularly advantageous when landing on a wet runway as wheel braking suffers reduced effectiveness. In some cases reverse pitch allows the aircraft to taxi in reverse – this is particularly useful for getting floatplanes out of confined docks. Counter-rotation. Counter-rotating propellers are sometimes used on twin-engine and multi-engine aircraft with wing-mounted engines. These propellers turn in opposite directions from their counterpart on the other wing to balance out the torque and p-factor effects. They are sometimes referred to as "handed" propellers since there are left hand and right hand versions of each prop. Generally, the propellers on both engines of most conventional twin-engined aircraft spin clockwise (as viewed from the rear of the aircraft). To eliminate the critical engine problem, counter-rotating propellers usually turn "inwards" towards the fuselage – clockwise on the left engine and counterclockwise on the right – however, there are exceptions (especially during World War II) such as the P-38 Lightning which turned "outwards" (counterclockwise on the left engine and clockwise on the right) away from the fuselage from the WW II years, and the Airbus A400 whose inboard and outboard engines turn in opposite directions even on the same wing. Contra-rotation. A contra-rotating propeller or contra-prop places two counter-rotating propellers on concentric drive shafts so that one sits immediately 'downstream' of the other propeller. This provides the benefits of counter-rotating propellers for a single powerplant. The forward propeller provides the majority of the thrust, while the rear propeller also recovers energy lost in the swirling motion of the air in the propeller slipstream. Contra-rotation also increases the ability of a propeller to absorb power from a given engine, without increasing propeller diameter. However the added cost, complexity, weight and noise of the system rarely make it worthwhile and it is only used on high-performance types where ultimate performance is more important than efficiency. Aircraft fans. A fan is a propeller with a large number of blades. A fan therefore produces a lot of thrust for a given diameter but the closeness of the blades means that each strongly affects the flow around the others. If the flow is supersonic, this interference can be beneficial if the flow can be compressed through a series of shock waves rather than one. By placing the fan within a shaped duct, specific flow patterns can be created depending on flight speed and engine performance. As air enters the duct, its speed is reduced while its pressure and temperature increase. If the aircraft is at a high subsonic speed this creates two advantages: the air enters the fan at a lower Mach speed; and the higher temperature increases the local speed of sound. While there is a loss in efficiency as the fan is drawing on a smaller area of the free stream and so using less air, this is balanced by the ducted fan retaining efficiency at higher speeds where conventional propeller efficiency would be poor. 
A ducted fan or propeller also has certain benefits at lower speeds but the duct needs to be shaped in a different manner than one for higher speed flight. More air is taken in and the fan therefore operates at an efficiency equivalent to a larger un-ducted propeller. Noise is also reduced by the ducting and should a blade become detached the duct would help contain the damage. However the duct adds weight, cost, complexity and (to a certain degree) drag.
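As a small worked illustration of the efficiency definition given under "Theory and design", the following Python lines evaluate it for round, made-up figures rather than data for any real propeller.

import math

def propeller_efficiency(thrust_n, airspeed_ms, torque_nm, rpm):
    # eta = (thrust x axial speed) / (resistance torque x rotational speed)
    omega = rpm * 2.0 * math.pi / 60.0   # rotational speed in rad/s
    return (thrust_n * airspeed_ms) / (torque_nm * omega)

# Made-up example: 1500 N of thrust at 60 m/s with 400 N*m of torque at 2400 rpm
print(round(propeller_efficiency(1500.0, 60.0, 400.0, 2400.0), 3))   # 0.895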
[ { "math_id": 0, "text": "\\eta = \\frac{\\hbox{propulsive power out}}{\\hbox{shaft power in}} = \\frac{\\hbox{thrust}\\cdot\\hbox{axial speed}}{\\hbox{resistance torque}\\cdot\\hbox{rotational speed}}." } ]
https://en.wikipedia.org/wiki?curid=7661871
766202
Anisometropia
Term used when two eyes have unequal refractive power Anisometropia is a condition in which a person's eyes have substantially differing refractive power. Generally, a difference in power of one diopter (1D) is the threshold for diagnosis of the condition. Patients may have up to 3D of anisometropia before the condition becomes clinically significant due to headache, eye strain, double vision or photophobia. In certain types of anisometropia, the visual cortex of the brain cannot process images from both eyes simultaneously (binocular summation), but will instead suppress the central vision of one of the eyes. If this occurs too often during the first 10 years of life, while the visual cortex is developing, it can result in amblyopia, a condition where, even when correcting the refractive error properly, the person's vision in the affected eye may still not be fully correctable to 20/20. The name of the condition comes from its four Greek components: "an-" "not", "iso-" "same", "metr-" "measure", "ops" "eye". Antimetropia is a rare sub-type of anisometropia in which one eye is myopic (nearsighted) and the other eye is hyperopic (farsighted). This condition occurs in about 0.1% of the population. Causes. Anisometropia is caused by common refractive errors, such as astigmatism, far-sightedness, and myopia, in one eye. Anisometropia is likely the result of both genetic and environmental influences. Some studies suggest that, in older adults, developing asymmetric cataracts may worsen anisometropia. However, anisometropia is associated with age regardless of cataract development: a rapid decrease in anisometropia during the first years of life, an increase during the transition to adulthood, relatively unchanging levels during adulthood but significant increases in older age. Diagnosis. Anisometropia causes some people to have mild vision problems, or occasionally more serious symptoms like alternating vision or frequent squinting. However, since most people do not show any clear symptoms, the condition is usually found during a routine eye exam. For early detection in preverbal children, photoscreening can be used. In this brief vision test specialized cameras detect each eye's light reflexes, which the equipment's software or a test administrator then interprets. If photoscreening indicates the presence of risk factors, an ophthalmologist can then diagnose the condition after a complete eye exam, including dilating the pupils and measuring the focusing power of each eye. Treatment. Spectacle correction. For those with large degrees of anisometropia, the wearing of standard spectacles may cause the person to experience a difference in image magnification between the two eyes (aniseikonia) which could also prevent the development of good binocular vision. This can make it very difficult to wear glasses without symptoms such as headaches and eyestrain. However, the earlier the condition is treated, the easier it is to adjust to glasses. It is possible for spectacle lenses to be made which can adjust the image sizes presented to the eye to be approximately equal. These are called iseikonic lenses. In practice, though, this is rarely done.
The formula for iseikonic lenses (without cylinder) is: formula_0 where: "t" = center thickness (in metres); "n" = refractive index; "P" = front base curve (in 1/metres); "h" = vertex distance (in metres); "F" = back vertex power (in 1/metres) (essentially, the prescription for the lens, quoted in diopters). If the difference between the eyes is up to 3 diopters, iseikonic lenses can compensate. At a difference of 3 diopters the lenses would however be very visibly different—one lens would need to be at least 3 mm thicker and have a base curve increased by 7.5 spheres. Example. Consider a pair of spectacles to correct for myopia with a prescription of −1.00 m−1 in one eye and −4.00 m−1 in the other. Suppose that for both eyes the other parameters are identical, namely "t" = 1 mm = 0.001 m, "n" = 1.6, "P" = 5 m−1, and "h" = 15 mm = 0.015 m. Then for the first eye formula_1, while for the second eye formula_2. Thus, in the first eye the size of the image formed on the retina will be 1.17% smaller than without spectacles (although it will be sharp, rather than blurry), whilst in the second eye the image formed on the retina will be 5.36% smaller. As alluded to above, one method of producing more iseikonic lenses would be to adjust the thickness and base curve of the second lens. For instance, theoretically it could be set to "t" = 5 mm = 0.005 m and "P" = 14.5 m−1, with all other parameters unchanged. Then for the second eye the magnification would become formula_3, which is much closer to that of the first eye. In this example the first eye, with a −1.00 diopter prescription, is the stronger eye, needing only slight correction to sharpen the image formed, and hence a thin spectacle lens. The second eye, with a −4.00 diopter prescription, is the weaker eye, needing moderate correction to sharpen the image formed, and hence a moderately thick spectacle lens—if the aniseikonia is ignored. In order to avoid the aniseikonia (so that both magnifications will be practically the same, while retaining image sharpness in both eyes), the spectacle lens used for the second eye will have to be made even thicker. Contact lenses. The usual recommendation for those needing iseikonic correction is to wear contact lenses. The effect of vertex distance is removed and the effect of center thickness is also almost removed, meaning there is minimal and likely unnoticeable image size difference. This is a good solution for those who can tolerate contact lenses. Refractive surgery. Refractive surgery causes only minimal size differences, similar to contact lenses. In a study performed on 53 children who had amblyopia due to anisometropia, surgical correction of the anisometropia followed by strabismus surgery if required led to improved visual acuity and even to stereopsis in many of the children (see Refractive surgery). Epidemiology. A determination of the prevalence of anisometropia has several difficulties. First of all, the measurement of refractive error may vary from one measurement to the next. Secondly, different criteria have been employed to define anisometropia, and the boundary between anisometropia and isometropia depends on their definition. Several studies have found that anisometropia occurs more frequently and tends to be more severe for persons with high ametropia, and that this is particularly true for myopes.
Anisometropia follows a U-shape distribution according to age: it is frequent in infants aged only a few weeks, is more rare in young children, comparatively more frequent in teenagers and young adults, and more prevalent after presbyopia sets in, progressively increasing into old age. One study estimated that 6% of those between the ages of 6 and 18 have anisometropia. Notwithstanding research performed on the biomechanical, structural and optical characteristics of anisometropic eyes, the underlying reasons for anisometropia are still poorly understood. Anisometropic persons who have strabismus are mostly far-sighted, and almost all of these have (or have had) esotropia. However, there are indications that anisometropia influences the long-term outcome of a surgical correction of an inward squint, and vice versa. More specifically, for patients with esotropia who undergo strabismus surgery, anisometropia may be one of the risk factors for developing consecutive exotropia and poor binocular function may be a risk factor for anisometropia to develop or increase.
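The worked example in the Treatment section can be reproduced with a short Python sketch of the magnification formula; the parameter values below are exactly those used in the example.

def spectacle_magnification(t, n, P, h, F):
    # Magnification = 1/(1 - (t/n)P) * 1/(1 - hF), with all quantities in SI units.
    return 1.0 / (1.0 - (t / n) * P) * 1.0 / (1.0 - h * F)

print(round(spectacle_magnification(0.001, 1.6, 5.0, 0.015, -1.0), 4))   # 0.9883 for the -1.00 D eye
print(round(spectacle_magnification(0.001, 1.6, 5.0, 0.015, -4.0), 4))   # 0.9464 for the -4.00 D eye
print(round(spectacle_magnification(0.005, 1.6, 14.5, 0.015, -4.0), 4))  # 0.9882 with t = 5 mm, P = 14.5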
[ { "math_id": 0, "text": " \\textrm{Magnification} = \\frac{1}{(1-(\\frac{t}{n})P)}\\cdot \\frac{1}{(1-hF)} " }, { "math_id": 1, "text": " \\textrm{Magnification} = \\frac{1}{(1-(0.001/1.6) \\times 5)}\\cdot \\frac{1}{(1-0.015 \\times -1)} = 1.0031 \\times 0.9852 = 0.9883 = 98.83\\,\\%" }, { "math_id": 2, "text": " \\textrm{Magnification} = \\frac{1}{(1-(0.001/1.6) \\times 5)}\\cdot \\frac{1}{(1-0.015 \\times -4)} = 1.0031 \\times 0.9434 = 0.9464 = 94.64\\,\\%" }, { "math_id": 3, "text": " \\textrm{Magnification} = \\frac{1}{(1-(0.005/1.6) \\times 14.5)}\\cdot \\frac{1}{(1-0.015 \\times -4)} = 1.0475 \\times 0.9434 = 0.9882 = 98.82\\,\\%" } ]
https://en.wikipedia.org/wiki?curid=766202
76629109
Tjøstheim's coefficient
Measure of spatial correlation Tjøstheim's coefficient is a measure of spatial association that attempts to quantify the degree to which two spatial data sets are related. It was developed by the Norwegian statistician Dag Tjøstheim. It is similar to rank correlation coefficients like Spearman's rank correlation coefficient and the Kendall rank correlation coefficient but also explicitly considers the spatial relationship between variables. Consider two variables, formula_0 and formula_1, observed at the same set of formula_2 spatial locations with co-ordinates formula_3 and formula_4. The rank of formula_5 at formula_6 is formula_7, with a similar definition for formula_8. Here formula_9 is a step function and this formula counts how many values formula_10 are less than or equal to the value at the target point formula_11. Now define formula_12 where formula_13 is the Kronecker delta. This is the formula_14 coordinate of the formula_15 ranked formula_5 value. The quantities formula_16 and formula_17 can be defined similarly. Tjøstheim's coefficient is defined by formula_18 Under the assumptions that formula_5 and formula_8 are independent and identically distributed random variables and are independent of each other, it can be shown that formula_19 and formula_20 The maximum variance, of formula_21, occurs when all points are on a straight line, and the minimum variance, of formula_22, occurs for a symmetric cross pattern where formula_23 and formula_24. Tjøstheim's coefficient is implemented as "cor.spatial" in the R package SpatialPack. Numerical simulations suggest that formula_25 is an effective measure of correlation between variables but is sensitive to the degree of autocorrelation in formula_5 and formula_8.
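A minimal pure-Python sketch of the definition above follows; it assumes there are no ties among the observed values, and the toy coordinates and values are arbitrary.

def tjostheim(x, y, f, g):
    # Tjostheim's coefficient A for values f and g observed at coordinates (x, y).
    n = len(x)
    def rank(v):
        return [sum(1 for vj in v if vj <= vi) for vi in v]
    def coords_by_rank(r):
        xs, ys = [0.0] * n, [0.0] * n
        for j in range(n):
            xs[r[j] - 1], ys[r[j] - 1] = x[j], y[j]   # coordinates of the i-th ranked value
        return xs, ys
    xf, yf = coords_by_rank(rank(f))
    xg, yg = coords_by_rank(rank(g))
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xf[i] - mx) * (xg[i] - mx) + (yf[i] - my) * (yg[i] - my) for i in range(n))
    den = (sum((xf[i] - mx) ** 2 + (yf[i] - my) ** 2 for i in range(n))
           * sum((xg[i] - mx) ** 2 + (yg[i] - my) ** 2 for i in range(n))) ** 0.5
    return num / den

# When g is a monotone function of f the two rank orderings coincide and A = 1.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0]
fs = [0.3, 1.2, 0.7, 2.5]
print(tjostheim(xs, ys, fs, [2.0 * v + 1.0 for v in fs]))   # 1.0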
[ { "math_id": 0, "text": "F(x,y)" }, { "math_id": 1, "text": "G(x,y)" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "y_i" }, { "math_id": 5, "text": "F" }, { "math_id": 6, "text": "(x_i,y_i)" }, { "math_id": 7, "text": "R_F(x_i,y_i) = \\sum_i^N \\theta(F(x_i,y_i) - F(x_j,y_j) )" }, { "math_id": 8, "text": "G" }, { "math_id": 9, "text": "\\theta" }, { "math_id": 10, "text": "F(x_j,y_j)" }, { "math_id": 11, "text": "F(x_i,y_i)" }, { "math_id": 12, "text": "X_F(i) = \\sum_j^N x_j \\delta(i, R_F(x_j,y_j))" }, { "math_id": 13, "text": "\\delta" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "i^{\\text{th}}" }, { "math_id": 16, "text": "Y_F(i), X_G(i)" }, { "math_id": 17, "text": "Y_G(i)" }, { "math_id": 18, "text": " A = \\frac{\\sum_i^N (X_F(i) - \\bar{X}_F)(X_G(i) - \\bar{X}_G) + (Y_F(i) - \\bar{Y}_F)(Y_G(i) - \\bar{Y}_G) }{\\left(\\sum_i^N\\left[(X_F(i) - \\bar{X}_F)^2 + (Y_F(i) - \\bar{Y}_F)^2\\right] \\sum_i^N\\left[(X_G(i) - \\bar{X}_G)^2 + (Y_G(i) - \\bar{Y}_G)^2\\right] \\right)^{1/2}}" }, { "math_id": 19, "text": "E[A] = 0" }, { "math_id": 20, "text": "var(A) = \\frac{\\left(\\sum_i^N x_i^2\\right)^2 + 2 \\left(\\sum_i^N x_i y_i\\right)^2 + \\left(\\sum_i^N y_i^2\\right)^2 }{(N-1)\\left(\\sum_i^N x_i^2 + \\sum_i^N y_i^2\\right)^2 }" }, { "math_id": 21, "text": "1/(N-1)" }, { "math_id": 22, "text": "1/(2(N-1))" }, { "math_id": 23, "text": "x_i y_i = 0" }, { "math_id": 24, "text": "\\sum_i^N x_i^2 = \\sum_i^N y_i^2" }, { "math_id": 25, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=76629109
76632501
Getis–Ord statistics
Spatial autocorrelation statistic Getis–Ord statistics, also known as Gi*, are used in spatial analysis to measure the local and global spatial autocorrelation. Developed by statisticians Arthur Getis and J. Keith Ord, they are commonly used for "Hot Spot Analysis" to identify where features with high or low values are spatially clustered in a statistically significant way. Getis-Ord statistics are available in a number of software libraries such as CrimeStat, GeoDa, ArcGIS, PySAL and R. Local statistics. There are two different versions of the statistic, depending on whether the data point at the target location formula_0 is included or not: formula_1 formula_2 Here formula_3 is the value observed at the formula_4 spatial site and formula_5 is the spatial weight matrix which constrains which sites are connected to one another. For formula_6 the denominator is constant across all observations. A value larger (or smaller) than the mean suggests a hot (or cold) spot corresponding to a high-high (or low-low) cluster. Statistical significance can be estimated using analytical approximations as in the original work; however, in practice permutation testing is used to obtain more reliable estimates of significance for statistical inference. Global statistics. The Getis-Ord statistics of overall spatial association are formula_7 formula_8 The local and global formula_9 statistics are related through the weighted average formula_10 The relationship of the formula_11 and formula_12 statistics is more complicated due to the dependence of the denominator of formula_12 on formula_0. Relation to Moran's I. Moran's I is another commonly used measure of spatial association defined by formula_13 where formula_14 is the number of spatial sites and formula_15 is the sum of the entries in the spatial weight matrix. Getis and Ord show that formula_16 where formula_17, formula_18, formula_19 and formula_20. They are equal if formula_21 is constant, but not in general. Ord and Getis also show that Moran's "I" can be written in terms of formula_6 formula_22 where formula_23, formula_24 is the standard deviation of formula_25 and formula_26 is an estimate of the variance of formula_5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
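The local statistics above amount to a weighted sum of neighbouring values divided by the sum of all (or all other) values, as in the following NumPy sketch; this is an added illustration with invented function names, not code from any of the libraries listed above.

```python
import numpy as np

def getis_ord_local(x, w, star=True):
    """Local Getis-Ord statistics for values x and spatial weight matrix w.

    star=True  -> G_i* (the target site is included in both sums)
    star=False -> G_i  (the j = i term is excluded)
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    if star:
        return (w @ x) / x.sum()          # denominator is the same for every site
    num = w @ x - np.diag(w) * x          # drop the j = i term from the numerator
    den = x.sum() - x                     # drop x_i from the denominator
    return num / den
```

In practice the resulting values would be compared against a permutation distribution, as noted above, rather than interpreted directly.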
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": " G_i = \\frac{ \\sum_{j \\neq i} w_{ij} x_j}{\\sum_{j \\neq i} x_j} " }, { "math_id": 2, "text": " G_i^* = \\frac{ \\sum_{j} w_{ij} x_j}{\\sum_{j} x_j} " }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "i^{th}" }, { "math_id": 5, "text": "w_{ij}" }, { "math_id": 6, "text": "G_i^*" }, { "math_id": 7, "text": " G = \\frac{ \\sum_{ij, i\\neq j} w_{ij} x_i x_j}{\\sum_{ij, i\\neq j} x_i x_j}" }, { "math_id": 8, "text": " G^* = \\frac{ \\sum_{ij} w_{ij} x_i x_j}{\\sum_{ij} x_i x_j}" }, { "math_id": 9, "text": "G^*" }, { "math_id": 10, "text": " \\frac{ \\sum_i x_i G^*_i }{ \\sum_i x_i } = \\frac{ \\sum_{ij} x_i w_{ij} x_j }{ \\sum_i x_i \\sum_j x_j } = G^* " }, { "math_id": 11, "text": "G" }, { "math_id": 12, "text": "G_i" }, { "math_id": 13, "text": "\nI = \\frac{N}{W} \\frac{\\sum_{ij} w_{ij}(x_i - \\bar{x})(x_j - \\bar{x})}{\\sum_{i} (x_i - \\bar{x})^2}\n" }, { "math_id": 14, "text": "N" }, { "math_id": 15, "text": "W = \\sum_{ij} w_{ij}" }, { "math_id": 16, "text": " \nI = (K_1/K_2) G - K_2 \\bar{x} \\sum_i (w_{i \\cdot} + w_{\\cdot i}) x_i + K_2 \\bar{x}^2 W\n" }, { "math_id": 17, "text": "w_{i \\cdot} = \\sum_j w_{ij}" }, { "math_id": 18, "text": "w_{\\cdot i} = \\sum_j w_{ji}" }, { "math_id": 19, "text": "K_1 = \\left( \\sum_{ij, i\\neq j} x_i x_j \\right)^{-1}" }, { "math_id": 20, "text": "K_2 = \\frac{W}{N}\\left(\\sum_{i} (x_i - \\bar{x})^2\\right)^{-1}" }, { "math_id": 21, "text": "w_{ij}=w" }, { "math_id": 22, "text": " \nI = \\frac{1}{W} \\left( \\sum_i z_i V_i G_i^* - N\\right)\n" }, { "math_id": 23, "text": "z_i = (x_i - \\bar{x})/s" }, { "math_id": 24, "text": "s" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": " \nV_i^2 = \\frac{1}{N-1}\\sum_j \\left( w_{ij} - \\frac{1}{N} \\sum_k w_{ik}\\right)^2\n" } ]
https://en.wikipedia.org/wiki?curid=76632501
76634757
Wartenberg's coefficient
Spatial correlation coefficient Wartenberg's coefficient is a measure of correlation developed by epidemiologist Daniel Wartenberg. This coefficient is a multivariate extension of spatial autocorrelation that aims to account for spatial dependence of data while studying their covariance. A modified version of this statistic is available in the R package "adespatial". For data formula_0 measured at formula_1 spatial sites, Moran's I is a measure of the spatial autocorrelation of the data. By standardizing the observations to formula_2, subtracting the mean and dividing by the standard deviation, and normalizing the spatial weight matrix such that formula_3, we can write Moran's "I" as formula_4 Wartenberg generalized this by letting formula_5 be a vector of formula_6 observations at site formula_7 and defining formula_8 where formula_9 is the formula_10 spatial weight matrix, formula_11 is the formula_12 matrix of standardized observations, formula_13 is the transpose of formula_14, and the result formula_15 is an formula_16 matrix. For two variables formula_17 and formula_18 the bivariate correlation is formula_19 For formula_20 this reduces to Moran's formula_15. For larger values of formula_6 the diagonals of formula_15 are the Moran indices for each of the variables and the off-diagonals give the corresponding Wartenberg correlation coefficients. formula_15 is an example of a Mantel statistic and so its significance can be evaluated using the Mantel test. Criticisms. Lee points out some problems with this coefficient, namely that its numerator contains only a single factor of the spatial weight matrix formula_9 and that it is not symmetric: in general formula_21 unless the weight matrix itself is symmetric. He suggests an alternative coefficient which has two factors of formula_9 in the numerator and is symmetric for any weight matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
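A short NumPy sketch of the computation follows; it is an illustrative addition, the function name is mine, and the rescaling of the weight matrix so that its entries sum to one mirrors the normalization used above.

```python
import numpy as np

def wartenberg(X, w):
    """Wartenberg's multivariate spatial correlation matrix I = Z^T W Z.

    X : (N, M) array holding M variables observed at N spatial sites.
    w : (N, N) spatial weight matrix.
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                           # scale so that sum_ij w_ij = 1
    Z = (X - X.mean(axis=0)) / X.std(axis=0)  # column-wise standardization
    return Z.T @ w @ Z                        # M x M; the diagonal holds Moran's I values
```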
[ { "math_id": 0, "text": "x_i" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "z_i = (x_i - \\bar{x})/s" }, { "math_id": 3, "text": "\\sum_{ij} w_{ij} = 1" }, { "math_id": 4, "text": "I = \\sum_{ij} w_{ij} z_i z_j" }, { "math_id": 5, "text": "z_i" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "I = Z^T W Z" }, { "math_id": 9, "text": "W" }, { "math_id": 10, "text": "<math>N \\times N</math >" }, { "math_id": 11, "text": "Z" }, { "math_id": 12, "text": "<math>N \\times M</math >" }, { "math_id": 13, "text": "Z^T" }, { "math_id": 14, "text": "<math>Z</math >" }, { "math_id": 15, "text": "I" }, { "math_id": 16, "text": "<math>M \\times M</math >" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "I_{xy} = \\frac{ \\sum_{ij} w_{ij} (x_i - \\bar{x}) (y_j - \\bar{y})}{ \\sqrt{ \\sum_i (x_i -\\bar{x})^2} \\sqrt{ \\sum_i (y_i -\\bar{y})^2} }" }, { "math_id": 20, "text": "M=1" }, { "math_id": 21, "text": "I_{xy} \\neq I_{yx}" } ]
https://en.wikipedia.org/wiki?curid=76634757
7663519
Generalized Maxwell model
The generalized Maxwell model, also known as the Maxwell–Wiechert model (after James Clerk Maxwell and E. Wiechert), is the most general form of the linear model for viscoelasticity. In this model, several Maxwell elements are assembled in parallel. It takes into account that the relaxation does not occur at a single time, but in a set of times. Due to the presence of molecular segments of different lengths, with shorter ones contributing less than longer ones, there is a varying time distribution. The Wiechert model shows this by having as many spring–dashpot Maxwell elements as are necessary to accurately represent the distribution. The figure on the right shows the generalized Wiechert model. General model form. Solids. Given formula_0 elements with moduli formula_1, viscosities formula_2, and relaxation times formula_3, the general form for the model for solids is given by: General Maxwell Solid Model (1) formula_4 formula_5 formula_6 formula_7 formula_8 This may be more easily understood by showing the model in a slightly more expanded form: General Maxwell Solid Model (2) formula_4 formula_9 formula_10 formula_11 formula_12 formula_11 formula_13 formula_6 formula_7 formula_14 formula_15 formula_11 formula_16 formula_11 formula_17 Example: standard linear solid model. Following the above model with formula_18 elements yields the standard linear solid model: Standard Linear Solid Model (3) formula_19 Fluids. Given formula_0 elements with moduli formula_1, viscosities formula_2, and relaxation times formula_3, the general form for the model for fluids is given by: General Maxwell Fluid Model (4) formula_4 formula_5 formula_6 formula_20 This may be more easily understood by showing the model in a slightly more expanded form: General Maxwell Fluid Model (5) formula_4 formula_9 formula_10 formula_11 formula_12 formula_11 formula_13 formula_6 formula_21 formula_22 formula_11 formula_23 formula_11 formula_24 Example: three parameter fluid. The analogous model to the standard linear solid model is the three parameter fluid, also known as the Jeffreys model: Three Parameter Maxwell Fluid Model (6) formula_25
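As a numerical illustration of equation (3), the sketch below integrates the standard linear solid model for a step in strain. It is an added example, the parameter values are arbitrary, and the closed-form curve used for comparison is the familiar step-strain relaxation response of this model rather than a result quoted above.

```python
import numpy as np

# Standard linear solid, eq. (3): sigma + tau1*dsigma/dt = E0*eps + tau1*(E0 + E1)*deps/dt
E0, E1, eta1 = 2.0, 5.0, 10.0      # arbitrary spring moduli and dashpot viscosity
tau1 = eta1 / E1                    # relaxation time of the Maxwell arm
eps0 = 0.01                         # step strain applied at t = 0

# For t > 0 the strain is constant, so eq. (3) reduces to
#   dsigma/dt = (E0*eps0 - sigma) / tau1, with sigma(0+) = (E0 + E1)*eps0.
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
sigma = np.empty_like(t)
sigma[0] = (E0 + E1) * eps0         # instantaneous elastic response to the step
for k in range(len(t) - 1):         # explicit Euler integration of the reduced ODE
    sigma[k + 1] = sigma[k] + dt * (E0 * eps0 - sigma[k]) / tau1

# Expected relaxation for this model: sigma(t) = (E0 + E1*exp(-t/tau1)) * eps0
sigma_exact = (E0 + E1 * np.exp(-t / tau1)) * eps0
print(np.max(np.abs(sigma - sigma_exact)))   # small discretization error
```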
[ { "math_id": 0, "text": "N+1" }, { "math_id": 1, "text": "E_i" }, { "math_id": 2, "text": "\\eta_i" }, { "math_id": 3, "text": "\\tau_i=\\frac{\\eta_i}{E_i}" }, { "math_id": 4, "text": "\\sigma+" }, { "math_id": 5, "text": "\n\\sum^{N}_{n=1}{\n\\left({\n\\sum^{N-n+1}_{i_1=1}{\n...\n\\left({\n\\sum^{N-\\left({n-a}\\right)+1}_{i_a=i_{a-1}+1}{\n...\n\\left({\n\\sum^{N}_{i_n=i_{n-1}+1}{\n\\left({\n\\prod_{j\\in\\left\\{{i_1,...,i_n}\\right\\}}{\n\\tau_j\n}\n}\\right)\n}\n}\\right)\n...\n}\n}\\right)\n...\n}\n}\\right)\n\\frac{\\partial^{n}{\\sigma}}{\\partial{t}^{n}}\n}\n" }, { "math_id": 6, "text": " = " }, { "math_id": 7, "text": "E_0\\epsilon+" }, { "math_id": 8, "text": "\n\\sum^{N}_{n=1}{\n\\left({\n\\sum^{N-n+1}_{i_1=1}{\n...\n\\left({\n\\sum^{N-\\left({n-a}\\right)+1}_{i_a=i_{a-1}+1}{\n...\n\\left({\n\\sum^{N}_{i_n=i_{n-1}+1}{\n\\left({\n\\left({\nE_0+\\sum_{j\\in\\left\\{{i_1,...,i_n}\\right\\}}{\nE_j\n}\n}\\right)\n\\left({\n\\prod_{k\\in\\left\\{{i_1,...,i_n}\\right\\}}{\n\\tau_k\n}\n}\\right)\n}\\right)\n}\n}\\right)\n...\n}\n}\\right)\n...\n}\n}\\right)\n\\frac{\\partial^{n}{\\epsilon}}{\\partial{t}^{n}}\n}\n" }, { "math_id": 9, "text": "\n{\\left({\\sum^{N}_{i=1}{\\tau_i}}\\right)}\n\\frac{\\partial{\\sigma}}{\\partial{t}}\n+\n" }, { "math_id": 10, "text": "\n{\\left({\\sum^{N-1}_{i=1}{\n\\left({\\sum^{N}_{j=i+1}{\n\\tau_i\\tau_j\n}}\\right)\n}}\\right)}\n\\frac{\\partial^{2}{\\sigma}}{\\partial{t}^{2}}\n" }, { "math_id": 11, "text": "+...+" }, { "math_id": 12, "text": "\n\\left({\n\\sum^{N-n+1}_{i_1=1}{\n...\n\\left({\n\\sum^{N-\\left({n-a}\\right)+1}_{i_a=i_{a-1}+1}{\n...\n\\left({\n\\sum^{N}_{i_n=i_{n-1}+1}{\n\\left({\n\\prod_{j\\in\\left\\{{i_1,...,i_n}\\right\\}}{\n\\tau_j\n}\n}\\right)\n}\n}\\right)\n...\n}\n}\\right)\n...\n}\n}\\right)\n\\frac{\\partial^{n}{\\sigma}}{\\partial{t}^{n}}\n" }, { "math_id": 13, "text": "\n\\left({\n\\prod^{N}_{i=1}{\n\\tau_i\n}\n}\\right)\n\\frac{\\partial^{N}{\\sigma}}{\\partial{t}^{N}}\n" }, { "math_id": 14, "text": "\n{\\left({\\sum^{N}_{i=1}{\\left({E_0+E_i}\\right)\\tau_i}}\\right)}\n\\frac{\\partial{\\epsilon}}{\\partial{t}}\n+\n" }, { "math_id": 15, "text": "\n{\\left({\\sum^{N-1}_{i=1}{\n\\left({\\sum^{N}_{j=i+1}{\n\\left({E_0+E_i+E_j}\\right)\n\\tau_i\\tau_j\n}}\\right)\n}}\\right)}\n\\frac{\\partial^{2}{\\epsilon}}{\\partial{t}^{2}}\n" }, { "math_id": 16, "text": "\n\\left({\n\\sum^{N-n+1}_{i_1=1}{\n...\n\\left({\n\\sum^{N-\\left({n-a}\\right)+1}_{i_a=i_{a-1}+1}{\n...\n\\left({\n\\sum^{N}_{i_n=i_{n-1}+1}{\n\\left({\n\\left({\nE_0+\\sum_{j\\in\\left\\{{i_1,...,i_n}\\right\\}}{\nE_j\n}\n}\\right)\n\\left({\n\\prod_{k\\in\\left\\{{i_1,...,i_n}\\right\\}}{\n\\tau_k\n}\n}\\right)\n}\\right)\n}\n}\\right)\n...\n}\n}\\right)\n...\n}\n}\\right)\n\\frac{\\partial^{n}{\\epsilon}}{\\partial{t}^{n}}\n" }, { "math_id": 17, "text": "\n\\left({\nE_0+\\sum_{j=1}^{N} E_j\n}\\right)\n\\left({\n\\prod^{N}_{i=1}{\n\\tau_i\n}\n}\\right)\n\\frac{\\partial^{N}{\\epsilon}}{\\partial{t}^{N}}\n" }, { "math_id": 18, "text": "N+1=2" }, { "math_id": 19, "text": "\n\\sigma+\\tau_1\\frac{\\partial{\\sigma}}{\\partial{t}}=E_0\\epsilon+\\tau_1\\left({E_0+E_1}\\right)\\frac{\\partial{\\epsilon}}{\\partial{t}}\n" }, { "math_id": 20, "text": 
"\n\\sum^{N}_{n=1}{\n\\left({\n\\eta_0+\\sum^{N-n+1}_{i_1=1}{\n...\n\\left({\n\\sum^{N-\\left({n-a}\\right)+1}_{i_a=i_{a-1}+1}{\n...\n\\left({\n\\sum^{N}_{i_n=i_{n-1}+1}{\n\\left({\n\\left({\n\\sum_{j\\in\\left\\{{i_1,...,i_n}\\right\\}}{\nE_j\n}\n}\\right)\n\\left({\n\\prod_{k\\in\\left\\{{i_1,...,i_n}\\right\\}}{\n\\tau_k\n}\n}\\right)\n}\\right)\n}\n}\\right)\n...\n}\n}\\right)\n...\n}\n}\\right)\n\\frac{\\partial^{n}{\\epsilon}}{\\partial{t}^{n}}\n}\n" }, { "math_id": 21, "text": "\n{\\left({\\eta_0+\\sum^{N}_{i=1}{E_i\\tau_i}}\\right)}\n\\frac{\\partial{\\epsilon}}{\\partial{t}}\n+\n" }, { "math_id": 22, "text": "\n{\\left({\\eta_0+\\sum^{N-1}_{i=1}{\n\\left({\\sum^{N}_{j=i+1}{\n\\left({E_i+E_j}\\right)\n\\tau_i\\tau_j\n}}\\right)\n}}\\right)}\n\\frac{\\partial^{2}{\\epsilon}}{\\partial{t}^{2}}\n" }, { "math_id": 23, "text": "\n\\left({\n\\eta_0+\n\\sum^{N-n+1}_{i_1=1}{\n...\n\\left({\n\\sum^{N-\\left({n-a}\\right)+1}_{i_a=i_{a-1}+1}{\n...\n\\left({\n\\sum^{N}_{i_n=i_{n-1}+1}{\n\\left({\n\\left({\n\\sum_{j\\in\\left\\{{i_1,...,i_n}\\right\\}}{\nE_j\n}\n}\\right)\n\\left({\n\\prod_{k\\in\\left\\{{i_1,...,i_n}\\right\\}}{\n\\tau_k\n}\n}\\right)\n}\\right)\n}\n}\\right)\n...\n}\n}\\right)\n...\n}\n}\\right)\n\\frac{\\partial^{n}{\\epsilon}}{\\partial{t}^{n}}\n" }, { "math_id": 24, "text": "\n\\left({\n\\eta_0+\n\\left({\n\\sum_{j=1}^{N} E_j\n}\\right)\n\\left({\n\\prod^{N}_{i=1}{\n\\tau_i\n}\n}\\right)\n}\\right)\n\\frac{\\partial^{N}{\\epsilon}}{\\partial{t}^{N}}\n" }, { "math_id": 25, "text": "\n\\sigma+\\tau_1\\frac{\\partial{\\sigma}}{\\partial{t}}=\\left({\\eta_0+\\tau_1 E_1\\frac{\\partial}{\\partial t}}\\right)\\frac{\\partial{\\epsilon}}{\\partial{t}}\n" } ]
https://en.wikipedia.org/wiki?curid=7663519
76636218
Lee's L
Spatial correlation measure Lee's "L" is a bivariate spatial correlation coefficient which measures the association between two sets of observations made at the same spatial sites. Standard measures of association such as the Pearson correlation coefficient do not account for the spatial dimension of data; in particular, they are vulnerable to inflation due to spatial autocorrelation. Lee's "L" is available in numerous spatial analysis software libraries including "spdep" and "PySAL" (where it is called "Spatial_Pearson") and has been applied in diverse applications such as studying air pollution, viticulture and housing rent. For spatial data formula_0 and formula_1 measured at formula_2 locations connected with the spatial weight matrix formula_3, first define the spatially lagged vector formula_4 with a similar definition for formula_5. Then Lee's "L" is defined as formula_6 where formula_7 are the mean values of formula_8. When the spatial weight matrix is row normalized, such that formula_9, the first factor is 1. Lee also defines the "spatial smoothing scalar" formula_10 to measure the spatial autocorrelation of a variable. It is shown by Lee that the above definition is equivalent to formula_11 where formula_12 is the Pearson correlation coefficient formula_13 This means Lee's L is equivalent to the Pearson correlation of the spatially lagged data, multiplied by a measure of each data set's spatial autocorrelation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
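A direct NumPy transcription of the definition above follows; it is an added sketch with invented names, not code taken from "spdep" or "PySAL".

```python
import numpy as np

def lees_l(x, y, w):
    """Lee's L for variables x and y observed at sites linked by weight matrix w."""
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    xl, yl = w @ x, w @ y                             # spatially lagged vectors
    num = np.sum((xl - x.mean()) * (yl - y.mean()))
    den = np.sqrt(np.sum((xl - x.mean()) ** 2)) * np.sqrt(np.sum((yl - y.mean()) ** 2))
    front = len(x) / np.sum(w.sum(axis=1) ** 2)       # equals 1 for a row-normalized w
    return front * num / den
```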
[ { "math_id": 0, "text": "x_i" }, { "math_id": 1, "text": "y_i" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "w_{ij}" }, { "math_id": 4, "text": "\\tilde{x}_i = \\sum_j w_{ij} x_j" }, { "math_id": 5, "text": "\\tilde{y}_i" }, { "math_id": 6, "text": "\nL_{x,y} = \\frac{N}{\\sum_i \\left( \\sum_j w_{ij} \\right)^2} \\frac{\\sum_{ij} (\\tilde{x}_i - \\bar{x})(\\tilde{y}_i - \\bar{y}) }{ \\sqrt{ \\sum_i (\\tilde{x}_i - \\bar{x})^2} \\sqrt{ \\sum_i (\\tilde{y}_i - \\bar{y})^2} }\n" }, { "math_id": 7, "text": "\\bar{x}, \\bar{y}" }, { "math_id": 8, "text": "x_i, y_i" }, { "math_id": 9, "text": "\\sum_j w_{ij} = 1" }, { "math_id": 10, "text": "\nSSS_{x} = \\frac{ \\sum_i (\\tilde{x}_i - \\bar{x})^2}{\\sum_i (x_i - \\bar{x})^2}\n" }, { "math_id": 11, "text": "\nL_{x,y} = \\sqrt{ SSS_{x} } \\sqrt{ SSS_{y} } r( \\tilde{x}, \\tilde{y} )\n" }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": "\nr(\\tilde{x}, \\tilde{y}) =\\frac{\\sum ^n _{i=1}(\\tilde{x}_i - \\bar{\\tilde{x}})(\\tilde{y}_i - \\bar{\\tilde{y}})}{\\sqrt{\\sum ^n _{i=1}(\\tilde{x}_i - \\bar{\\tilde{x}})^2} \\sqrt{\\sum ^n _{i=1}(\\tilde{y}_i - \\bar{\\tilde{y}})^2}}\n" } ]
https://en.wikipedia.org/wiki?curid=76636218
76640881
Codex Heidelbergensis 921
Parchment codex from the 8th–9th century The Codex Heidelbergensis 921 ("Heidelberg Codex") or Codex Palatinus Latinus 921 is a parchment codex dated to the 8th–9th century, containing a copy of the "Romana" and "Getica" of Jordanes. It was destroyed in a fire on the night of July 15–16, 1880. Physical description. Originally, this codex consisted of 15 quaternions, totaling 120 leaves. According to the inventory of Heidelberg codices after the Congress of Vienna in 1816, it contained only 110 leaves, or 220 pages of variously described formats: folio, "quadratae maioris", or large quarto. The entire first gathering was missing, and the last one had only 6 leaves. According to the description by Theodor Mommsen from 1882, 110 leaves were written on, the first two leaves and the last one (i.e., 1, 2, and 112) were blank, and the foliation resulted from duplicating leaf 17. This manuscript is identified under the signature of the library in the Heidelberg University – Palat. Lat. 921. According to the systematics of "Codices Latini Antiquiores", in which it was described in Part VIII, it is numbered as 1224. No facsimile of any fragment of this manuscript has survived. There is only an illustration of a copy of a fragment of the text of the "Romana" (part of caput 215) included in Friedrich Wilken's catalog from 1817, which Wilken described in the appendix to his "Geschichte der Bildung, Beraubung und Vernichtung der alten Heidelbergischen Büchersammlungen" from 1817, and Theodor Mommsen in the introduction to the edition of Jordanes' works from 1882. The description was also included in Part VIII of "Codices Latini Antiquiores" from 1959. Dating, place of origin and handwriting ductus. The writing of this codex is dated respectively: rather to the 8th than to the 9th century, to the 8th–9th century, or to the turn of the 8th and 9th centuries. The place of origin is considered to be Germany, in one of the scriptoria in the area of Mainz or (which cannot be proven) perhaps in the Princely Abbey of Fulda, from where it would likely have made its way to Mainz, probably through the agency of Marianus Scotus. German provenance is inferred from an interlinear gloss in Old High German: the word "suagur" (English: "brother-in-law") written next to the Latin "cognatus" in the "Romana" (c. 337). The codex was written in Insular minuscule, in a style used on the European continent. This script was characterized mainly by the large descenders of minuscule letters, including the letters "r" and "s". The letter "a" is often unclosed, and "i" is slightly larger. Corrections to grammar were made in the codex during the Middle Ages, but not very heavily, so the original script is still visible in most places. Content and meaning. The codex contained two works by Jordanes: the "Romana" and the "Getica". Already in the early 19th century, there were missing pages containing the beginning of the "Romana" (up to the last words of caput 56 – the text begins with the words "et finis") and the end of the "Getica" (the ending of caput 299 is missing – the text breaks off at the word "regi" – and the following 17 capita). The beginning of the "Getica" text was located on page 51. Mommsen considered the transmission of the codex to be the best among those known to him, although the text of this manuscript rarely spoke against others. He classified it as belonging to the first class of transmissions, in which it was the best. 
In the codex copy of Jordanes, a few words were omitted in two places in the "Getica": part of caput 200 ("fide et consilio ut diximus clarus") and part of 222 ("obicientes exemplo veriti regis"), and there were few errors (fourteen in total) where other manuscripts of this group transmit the correct script. History. The first definite information about the whereabouts of the manuscript is an ownership note from the year 1479 of the cathedral library in Mainz. This note was entered on the first leaf and read: "iste liber pertinet ad librariam sancti Martini ecclesie Maguntin. M(acarius) Sindicus s(ub)s(cripsi)t 1479", which translates to "this codex belongs to the library of the church of St. Martin in Mainz. M[akarius von Buseck], syndic, wrote [this] in 1479". Additionally, the volume was described with the following note: 1651 – formula_0 – h 13. Then the codex became part of the collection of the Bibliotheca Palatina in Heidelberg, where it was used by Jan Gruter. In 1622, along with the rest of the collection, the codex was in Rome, where it remained until the Napoleonic Wars. It was then transported to Paris, from where, as a result of the Vienna Congress agreements, it returned to Heidelberg. The manuscript, along with three other manuscripts (also containing texts of Jordanes' works), burned in the fire at the house of Theodor Mommsen, publisher of the "Romana" and "Getica" in the "Monumenta Germaniae Historica" series, on the night of July 15–16, 1880, in Charlottenburg. Lausanne fragment (Ms. 398). At the beginning of the 20th century, Marius Besson discovered a fragment of one leaf of an old medieval manuscript at the "Musée historiographie vaudoise". This fragment was transferred to the Cantonal and University Library in Lausanne and was assigned the signature Ms. 398, under which it is currently known. On both sides of the leaf, there are fragments of Chapter 60 of Jordanes' "Getica" written in Anglo-Saxon minuscule, which is why Besson considered this fragment to be one of the lost leaves of the Heidelberg Codex. However, this attribution was negated in the entry in the "Codices Latini Antiquiores" due to differences in the writing styles. Additionally, a meticulous comparative analysis of the script from the Lausanne relic by Sandra Bertelli with the fragment from Wilken's catalogue excluded the possibility of identifying the creators of both inscriptions. Given the impossibility of determining the number of copyists working on the Heidelberg Codex, any further conclusions are impossible to draw.
[ { "math_id": 0, "text": "\\tfrac{646}{566}" } ]
https://en.wikipedia.org/wiki?curid=76640881
766409
Network theory
Study of graphs as a representation of relations between discrete objects In mathematics, computer science and network science, network theory is a part of graph theory. It defines networks as graphs where the vertices or edges possess attributes. Network theory analyses these networks over the symmetric relations or asymmetric relations between their (discrete) components. Network theory has applications in many disciplines, including statistical physics, particle physics, computer science, electrical engineering, biology, archaeology, linguistics, economics, finance, operations research, climatology, ecology, public health, sociology, psychology, and neuroscience. Applications of network theory include logistical networks, the World Wide Web, Internet, gene regulatory networks, metabolic networks, social networks, epistemological networks, etc.; see List of network theory topics for more examples. Euler's solution of the Seven Bridges of Königsberg problem is considered to be the first true proof in the theory of networks. Network optimization. Network problems that involve finding an optimal way of doing something are studied as combinatorial optimization. Examples include network flow, shortest path problem, transport problem, transshipment problem, location problem, matching problem, assignment problem, packing problem, routing problem, critical path analysis, and program evaluation and review technique. Network analysis. Electric network analysis. The analysis of electric power systems could be conducted using network theory from two main points of view: Social network analysis. Social network analysis examines the structure of relationships between social entities. These entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications. Since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks have been first developed in sociology. Amongst many other applications, social network analysis has been used to understand the diffusion of innovations, news and rumors. Similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. It has been used to study recruitment into political movements, armed groups, and other social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousin traffic analysis) has gained a significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature. Biological network analysis. With the recent explosion of publicly available high throughput biological data, the analysis of molecular networks has gained significant interest. The type of analysis in this context is closely related to social network analysis, but often focusing on local patterns in the network. For example, network motifs are small subgraphs that are over-represented in the network. Similarly, activity motifs are patterns in the attributes of nodes and edges in the network that are over-represented given the network structure. 
Using networks to analyze patterns in biological systems, such as food-webs, allows us to visualize the nature and strength of interactions between species. The analysis of biological networks with respect to diseases has led to the development of the field of network medicine. Recent examples of application of network theory in biology include applications to understanding the cell cycle as well as a quantitative framework for developmental processes. Narrative network analysis. The automatic parsing of "textual corpora" has enabled the extraction of actors and their relational networks on a vast scale. The resulting narrative networks, which can contain thousands of nodes, are then analyzed by using tools from Network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by Quantitative Narrative Analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object. Link analysis. Link analysis is a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, and financial transactions that they have partaken in during a given timeframe, and the familial relationships between these subjects as a part of police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by the spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed. Links are also derived from similarity of time behavior in both nodes. Examples include climate networks where the links between two locations (nodes) are determined, for example, by the similarity of the rainfall or temperature fluctuations in both sites. Web link analysis. Several Web search ranking algorithms use link-based centrality metrics, including Google's PageRank, Kleinberg's HITS algorithm, the CheiRank and TrustRank algorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' websites or blogs. Another use is for classifying pages according to their mention in other pages. Centrality measures. Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. For example, eigenvector centrality uses the eigenvectors of the adjacency matrix corresponding to a network, to determine nodes that tend to be frequently visited. Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, subgraph centrality, and Katz centrality. 
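As a small illustration of the eigenvector centrality mentioned above (an added sketch, not part of the original text), the leading eigenvector of the adjacency matrix can be approximated by power iteration; production analyses would normally rely on a library implementation instead.

```python
import numpy as np

def eigenvector_centrality(A, iters=1000, tol=1e-9):
    """Approximate eigenvector centrality scores from an adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    c = np.ones(A.shape[0])
    for _ in range(iters):
        new = A @ c                       # each node accumulates its neighbours' scores
        new /= np.linalg.norm(new)        # renormalize to keep the vector bounded
        if np.linalg.norm(new - c) < tol:
            break
        c = new
    return c
```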
The purpose or objective of analysis generally determines the type of centrality measure to be used. For example, if one is interested in dynamics on networks or the robustness of a network to node/link removal, often the dynamical importance of a node is the most relevant centrality measure. Assortative and disassortative mixing. These concepts are used to characterize the linking preferences of hubs in a network. Hubs are nodes which have a large number of links. Some hubs tend to link to other hubs while others avoid connecting to hubs and prefer to connect to nodes with low connectivity. We say a hub is assortative when it tends to connect to other hubs. A disassortative hub avoids connecting to other hubs. If hubs have connections with the expected random probabilities, they are said to be neutral. There are three methods to quantify degree correlations. Recurrence networks. The recurrence matrix of a recurrence plot can be considered as the adjacency matrix of an undirected and unweighted network. This allows for the analysis of time series by network measures. Applications range from detection of regime changes over characterizing dynamics to synchronization analysis. Spatial networks. Many real networks are embedded in space. Examples include, transportation and other infrastructure networks, brain neural networks. Several models for spatial networks have been developed. Spread. Content in a complex network can spread via two major methods: conserved spread and non-conserved spread. In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. Here, the pitcher represents the original source and the water is the content being spread. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the amount of content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet running through a series of funnels connected by tubes. Here, the amount of water from the original source is infinite. Also, any funnels that have been exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases, neural excitation, information and rumors, etc. Network immunization. The question of how to immunize efficiently scale free networks which represent realistic networks such as the Internet and social networks has been studied extensively. One such strategy is to immunize the largest degree nodes, i.e., targeted (intentional) attacks since for this case formula_0 is relatively high and fewer nodes are needed to be immunized. However, in most realistic networks the global structure is not available and the largest degree nodes are unknown. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Books. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "pc" } ]
https://en.wikipedia.org/wiki?curid=766409
7664719
Movable singularity
In the theory of ordinary differential equations, a movable singularity is a point where the solution of the equation behaves badly and which is "movable" in the sense that its location depends on the initial conditions of the differential equation. Suppose we have an ordinary differential equation in the complex domain. Any given solution "y"("x") of this equation may well have singularities at various points (i.e. points at which it is not a regular holomorphic function, such as branch points, essential singularities or poles). A singular point is said to be movable if its location depends on the particular solution we have chosen, rather than being fixed by the equation itself. For example the equation formula_0 has solution formula_1 for any constant "c". This solution has a branchpoint at formula_2, and so the equation has a movable branchpoint (since it depends on the choice of the solution, i.e. the choice of the constant "c"). It is a basic feature of linear ordinary differential equations that singularities of solutions occur only at singularities of the equation, and so linear equations do not have movable singularities. When attempting to look for 'good' nonlinear differential equations it is this property of linear equations that one would like to see: asking for no movable singularities is often too stringent, instead one often asks for the so-called Painlevé property: 'any movable singularity should be a pole', first used by Sofia Kovalevskaya. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\frac{dy}{dx} = \\frac{1}{2y}" }, { "math_id": 1, "text": "y=\\sqrt{x-c}" }, { "math_id": 2, "text": "x=c" } ]
https://en.wikipedia.org/wiki?curid=7664719
76657864
Scott's rule
Rule for choosing histogram bins Scott's rule is a method to select the number of bins in a histogram. Scott's rule is widely employed in data analysis software including R, Python and Microsoft Excel where it is the default bin selection method. For a set of formula_0 observations formula_1 let formula_2 be the histogram approximation of some function formula_3. The integrated mean squared error (IMSE) is formula_4 Where formula_5 denotes the expectation across many independent draws of formula_0 data points. By Taylor expanding to first order in formula_6, the bin width, Scott showed that the optimal width is formula_7 This formula is also the basis for the Freedman–Diaconis rule. By taking a "normal reference" i.e. assuming that formula_3 is a normal distribution, the equation for formula_8 becomes formula_9 where formula_10 is the standard deviation of the normal distribution and is estimated from the data. With this value of bin width Scott demonstrates that formula_11 showing how quickly the histogram approximation approaches the true distribution as the number of samples increases. Terrell–Scott rule. Another approach developed by Terrell and Scott is based on the observation that, among all densities formula_12 defined on a compact interval, say formula_13, with derivatives which are absolutely continuous, the density which minimises formula_14 is formula_15 Using this with formula_16 in the expression for formula_8 gives an upper bound on the value of bin width which is formula_17 So, for functions satisfying the continuity conditions, at least formula_18 bins should be used. This rule is also called the "oversmoothed rule" or the "Rice rule", so called because both authors worked at Rice University. The Rice rule is often reported with the factor of 2 outside the cube root, formula_19, and may be considered a different rule. The key difference from Scott's rule is that this rule does not assume the data is normally distributed and the bin width only depends on the number of samples, not on any properties of the data. In general formula_20 is not an integer so formula_21 is used where formula_22 denotes the ceiling function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
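The normal-reference rule and the Terrell–Scott bound translate directly into code. The sketch below is an added illustration: the sample standard deviation stands in for the population value, and the final lines show how many bins of the resulting width are needed to span a simulated sample.

```python
import numpy as np

def scott_bin_width(x):
    """Scott's normal-reference bin width: (24*sqrt(pi))**(1/3) * sigma * n**(-1/3)."""
    x = np.asarray(x, dtype=float)
    sigma = x.std(ddof=1)                 # sample estimate of the standard deviation
    return (24.0 * np.sqrt(np.pi)) ** (1.0 / 3.0) * sigma * x.size ** (-1.0 / 3.0)

def terrell_scott_bins(n):
    """Oversmoothed (Terrell-Scott) rule: at least ceil((2n)**(1/3)) bins."""
    return int(np.ceil((2 * n) ** (1.0 / 3.0)))

rng = np.random.default_rng(0)
data = rng.normal(size=1000)
h = scott_bin_width(data)
bins = int(np.ceil((data.max() - data.min()) / h))   # bins needed to cover the sample range
print(h, bins, terrell_scott_bins(data.size))
```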
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "x_i" }, { "math_id": 2, "text": "\\hat{f}(x)" }, { "math_id": 3, "text": "f(x)" }, { "math_id": 4, "text": "\n\\text{IMSE} = E\\left[ \\int_{\\infty}^{\\infty} dx (\\hat{f}(x) - f(x))^2\\right]\n" }, { "math_id": 5, "text": "E[\\cdot]" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "\nh^* = \\left( 6 / \\int_{-\\infty}^{\\infty} f'(x)^2 dx \\right)^{1/3}n^{-1/3}\n" }, { "math_id": 8, "text": "h^*" }, { "math_id": 9, "text": "\nh^* = \\left( 24 \\sqrt{\\pi} \\right)^{1/3} \\sigma n^{-1/3} \\sim 3.5 \\sigma n^{-1/3}\n" }, { "math_id": 10, "text": "\\sigma" }, { "math_id": 11, "text": "\\text{IMSE} \\propto n^{-2/3}" }, { "math_id": 12, "text": "g(x)" }, { "math_id": 13, "text": "|x| < 1/2" }, { "math_id": 14, "text": "\\int_{\\infty}^{\\infty} dx (g^{(k)}(x))^2" }, { "math_id": 15, "text": "\nf_k(x) = \\begin{cases}\n\\frac{(2k+1)!}{2^{2k}(k!)^2}(1-4x^2)^k, \\quad &|x|\\leq1/2\\\\\n0 &|x|>1/2\n\\end{cases}\n" }, { "math_id": 16, "text": "k=1" }, { "math_id": 17, "text": "\nh^*_{TS} = \\left( \\frac{4}{n} \\right)^{1/3}.\n" }, { "math_id": 18, "text": "\nk_{TS} = \\frac{b-a}{h^*} = \\left( 2n \\right)^{1/3}\n" }, { "math_id": 19, "text": "2\\left(n \\right)^{1/3}" }, { "math_id": 20, "text": "\\left( 2n \\right)^{1/3}" }, { "math_id": 21, "text": "\\lceil \\left( 2n \\right)^{1/3} \\rceil" }, { "math_id": 22, "text": "\\lceil \\cdot \\rceil" } ]
https://en.wikipedia.org/wiki?curid=76657864
76658532
Sturges's rule
Statistical rule of thumb Sturges's rule is a method to choose the number of bins for a histogram. Given formula_0 observations, Sturges's rule suggests using formula_1 bins in the histogram. This rule is widely employed in data analysis software including Python and R, where it is the default bin selection method. Sturges's rule comes from the binomial distribution which is used as a discrete approximation to the normal distribution. If the function to be approximated formula_2 is binomially distributed then formula_3 where formula_4 is the number of trials and formula_5 is the probability of success and formula_6. Choosing formula_7 gives formula_8 In this form we can consider formula_9 as the normalisation factor and Sturges's rule is saying that the sample should result in a histogram with bin counts given by the binomial coefficients. Since the total sample size is fixed to formula_0 we must have formula_10 using the well-known formula for sums of the binomial coefficients. Solving this by taking logs of both sides gives formula_11 and finally using formula_12 (due to counting the 0 outcomes) gives Sturges's rule. In general Sturges's rule does not give an integer answer so the result is rounded up. Doane's formula. Doane proposed modifying Sturges's formula to add extra bins when the data is skewed. Using the method of moments estimator formula_13 along with its variance formula_14 Doane proposed adding formula_15 extra bins giving "Doane's formula" formula_16 For symmetric distributions formula_17 this is equivalent to Sturges's rule. For asymmetric distributions a number of additional bins will be used. Criticisms. Sturges's rule is not based on any sort of optimisation procedure, like the Freedman–Diaconis rule or Scott's rule. It is simply posited based on the approximation of a normal curve by a binomial distribution. Hyndman has pointed out that any multiple of the binomial coefficients would also converge to a normal distribution, so any number of bins could be obtained following the derivation above. Scott shows that Sturges's rule in general produces oversmoothed histograms i.e. too few bins, and advises against its use in favour of other rules such as Freedman-Diaconis or Scott's rule. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
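Both rules reduce to a few lines of code; the following sketch is an added illustration using the moment definitions given above, with function names of my own choosing.

```python
import numpy as np

def sturges_bins(n):
    """Sturges's rule: 1 + log2(n), rounded up to an integer."""
    return int(np.ceil(1 + np.log2(n)))

def doane_bins(x):
    """Doane's formula: Sturges's rule plus a correction for sample skewness."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m2 = np.mean((x - x.mean()) ** 2)
    m3 = np.mean((x - x.mean()) ** 3)
    g1 = m3 / m2 ** 1.5                                   # method of moments skewness
    sigma_g1 = np.sqrt(6.0 * (n - 2) / ((n + 1) * (n + 3)))
    return int(np.ceil(1 + np.log2(n) + np.log2(1 + abs(g1) / sigma_g1)))

print(sturges_bins(1000))   # 11 bins for a sample of 1000 observations
```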
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\n\\hat{k} = 1 + \\log_2(n)\n" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\nf(y) = \\binom{m}{y} p^y (1-p)^{m-y}\n" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "y = 0,1,\\ldots,m" }, { "math_id": 7, "text": "p=1/2" }, { "math_id": 8, "text": "\nf(y) = \\binom{m}{y} 2^{-m}\n" }, { "math_id": 9, "text": "2^{-m}" }, { "math_id": 10, "text": "\nn = \\sum_y \\binom{m}{y} = 2^m\n" }, { "math_id": 11, "text": "m = \\log_2(n)" }, { "math_id": 12, "text": "k = m+1" }, { "math_id": 13, "text": "\n g_1 = \\frac{m_3}{m_2^{3/2}}\n = \\frac{\\tfrac{1}{n} \\sum_{i=1}^n (x_i-\\overline{x})^3}{\\left[\\tfrac{1}{n} \\sum_{i=1}^n (x_i-\\overline{x})^2 \\right]^{3/2}},\n " }, { "math_id": 14, "text": " \\sigma_{g_1}^2= \\frac { 6(n-2) }{ (n+1)(n+3) }" }, { "math_id": 15, "text": "\n\\log_2 \\left( 1 + \\frac { |g_1| }{\\sigma_{g_1}} \\right)\n" }, { "math_id": 16, "text": "\n\\hat{k} = 1 + \\log_2(n) + \\log_2 \\left( 1 + \\frac { |g_1| }{\\sigma_{g_1}} \\right)\n" }, { "math_id": 17, "text": "|g_1| \\simeq 0" } ]
https://en.wikipedia.org/wiki?curid=76658532
766619
Cyclonic separation
Method of removing particulates from a fluid stream through vortex separation Cyclonic separation is a method of removing particulates from an air, gas or liquid stream, without the use of filters, through vortex separation. When removing particulate matter from liquid, a hydrocyclone is used; while from gas, a gas cyclone is used. Rotational effects and gravity are used to separate mixtures of solids and fluids. The method can also be used to separate fine droplets of liquid from a gaseous stream. Operation. A high-speed rotating (air)flow is established within a cylindrical or conical container called a cyclone. Air flows in a helical pattern, beginning at the top (wide end) of the cyclone and ending at the bottom (narrow) end before exiting the cyclone in a straight stream through the center of the cyclone and out the top. Larger (denser) particles in the rotating stream have too much inertia to follow the tight curve of the stream, and thus strike the outside wall, then fall to the bottom of the cyclone where they can be removed. In a conical system, as the rotating flow moves towards the narrow end of the cyclone, the rotational radius of the stream is reduced, thus separating smaller and smaller particles. The cyclone geometry, together with volumetric flow rate, defines the "cut point" of the cyclone. This is the size of particle that will be removed from the stream with a 50% efficiency. Particles larger than the cut point will be removed with a greater efficiency, and smaller particles with a lower efficiency as they separate with more difficulty or can be subject to re-entrainment when the air vortex reverses direction to move in direction of the outlet. An alternative cyclone design uses a secondary air flow within the cyclone to keep the collected particles from striking the walls, to protect them from abrasion. The primary air flow containing the particulates enters from the bottom of the cyclone and is forced into spiral rotation by stationary spinner vanes. The secondary air flow enters from the top of the cyclone and moves downward toward the bottom, intercepting the particulate from the primary air. The secondary air flow also allows the collector to optionally be mounted horizontally, because it pushes the particulate toward the collection area, and does not rely solely on gravity to perform this function. Uses. Cyclone separators are found in all types of power and industrial applications, including pulp and paper plants, cement plants, steel mills, petroleum coke plants, metallurgical plants, saw mills and other kinds of facilities that process dust. Large scale cyclones are used in sawmills to remove sawdust from extracted air. Cyclones are also used in oil refineries to separate oils and gases, and in the cement industry as components of kiln preheaters. Cyclones are increasingly used in the household, as the core technology in bagless types of portable vacuum cleaners and central vacuum cleaners. Cyclones are also used in industrial and professional kitchen ventilation for separating the grease from the exhaust air in extraction hoods. Smaller cyclones are used to separate airborne particles for analysis. Some are small enough to be worn clipped to clothing, and are used to separate respirable particles for later analysis. Similar separators are used in the oil refining industry (e.g. for Fluid catalytic cracking) to achieve fast separation of the catalyst particles from the reacting gases and vapors. 
Analogous devices for separating particles or solids from liquids are called hydrocyclones or hydroclones. These may be used to separate solid waste from water in wastewater and sewage treatment. Types. The most common types of centrifugal, or inertial, collectors in use today are: Single-cyclone separators. Single-cyclone separators create a dual vortex to separate coarse from fine dust. The main vortex spirals downward and carries most of the coarser dust particles. The inner vortex, created near the bottom of the cyclone, spirals upward and carries finer dust particles. Multiple-cyclone separators. Multiple-cyclone separators consist of a number of small-diameter cyclones, operating in parallel and having a common gas inlet and outlet, as shown in the figure, and operate on the same principle as single cyclone separators—creating an outer downward vortex and an ascending inner vortex. Multiple-cyclone separators remove more dust than single cyclone separators because the individual cyclones have a greater length and smaller diameter. The longer length provides longer residence time while the smaller diameter creates greater centrifugal force. These two factors result in better separation of dust particulates. The pressure drop of multiple-cyclone separators collectors is higher than that of single-cyclone separators, requiring more energy to clean the same amount of air. A single-chamber cyclone separator of the same volume is more economical, but doesn't remove as much dust. Secondary-air-flow separators. This type of cyclone uses a secondary air flow, injected into the cyclone to accomplish several things. The secondary air flow increases the speed of the cyclonic action making the separator more efficient; it intercepts the particulate before it reaches the interior walls of the unit; and it forces the separated particulate toward the collection area. The secondary air flow protects the separator from particulate abrasion and allows the separator to be installed horizontally because gravity is not depended upon to move the separated particulate downward. Cyclone theory. As the cyclone is essentially a two phase particle-fluid system, fluid mechanics and particle transport equations can be used to describe the behaviour of a cyclone. The air in a cyclone is initially introduced tangentially into the cyclone with an inlet velocity formula_0. Assuming that the particle is spherical, a simple analysis to calculate critical separation particle sizes can be established. If one considers an isolated particle circling in the upper cylindrical component of the cyclone at a rotational radius of formula_1 from the cyclone's central axis, the particle is therefore subjected to drag, centrifugal, and buoyant forces. Given that the fluid velocity is moving in a spiral the gas velocity can be broken into two component velocities: a tangential component, formula_2, and an outward radial velocity component formula_3. Assuming Stokes' law, the drag force in the outward radial direction that is opposing the outward velocity on any particle in the inlet stream is: formula_4 Using formula_5 as the particle's density, the centrifugal component in the outward radial direction is: formula_6 formula_7 The buoyant force component is in the inward radial direction. It is in the opposite direction to the particle's centrifugal force because it is on a volume of fluid that is missing compared to the surrounding fluid. 
Using formula_8 for the density of the fluid, the buoyant force is: formula_9 formula_10 In this case, formula_11 is equal to the volume of the particle (as opposed to the velocity). The outward radial motion of each particle is found by setting Newton's second law of motion equal to the sum of these forces: formula_12 To simplify this, we can assume the particle under consideration has reached "terminal velocity", i.e., that its acceleration formula_13 is zero. This occurs when the radial velocity has caused enough drag force to counter the centrifugal and buoyancy forces. This simplification changes our equation to: formula_14 Which expands to: formula_15 Solving for formula_3 we have formula_16. Notice that if the density of the fluid is greater than the density of the particle, the motion is (-), toward the center of rotation, and if the particle is denser than the fluid, the motion is (+), away from the center. In most cases, this solution is used as guidance in designing a separator, while actual performance is evaluated and modified empirically. In non-equilibrium conditions when radial acceleration is not zero, the general equation from above must be solved. Rearranging terms we obtain formula_17 Since formula_3 is distance per time, this is a 2nd order differential equation of the form formula_18. Experimentally it is found that the velocity component of rotational flow is proportional to formula_19, therefore: formula_20 This means that the established feed velocity controls the vortex rate inside the cyclone, and the velocity at an arbitrary radius is therefore: formula_21 Subsequently, given a value for formula_2, possibly based upon the injection angle, and a cutoff radius, a characteristic particle filtering radius can be estimated, above which particles will be removed from the gas stream. Alternative models. The above equations are limited in many regards. For example, the geometry of the separator is not considered, the particles are assumed to achieve a steady state and the effect of the vortex inversion at the base of the cyclone is also ignored, all behaviours which are unlikely to be achieved in a cyclone at real operating conditions. More complete models exist, as many authors have studied the behaviour of cyclone separators. Simplified models allowing a quick calculation of the cyclone, with some limitations, have been developed for common applications in process industries. Numerical modelling using computational fluid dynamics has also been used extensively in the study of cyclonic behaviour. A major limitation of any fluid mechanics model for cyclone separators is the inability to predict the agglomeration of fine particles with larger particles, which has a great impact on cyclone collection efficiency. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
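As a worked example of the terminal-velocity result above, the outward radial drift velocity can be evaluated for an illustrative set of parameter values; the numbers below are assumptions chosen only to indicate the order of magnitude, not data for any particular cyclone.

```python
# Terminal outward radial velocity from the Stokes-regime force balance derived above:
#   V_r = (2/9) * (r_p**2 / mu) * (V_t**2 / r) * (rho_p - rho_f)
r_p = 5e-6        # particle radius, m (assumed)
mu = 1.8e-5       # dynamic viscosity of air, Pa*s
V_t = 15.0        # tangential gas velocity at radius r, m/s (assumed)
r = 0.10          # rotational radius, m (assumed)
rho_p = 2500.0    # particle density, kg/m^3 (assumed)
rho_f = 1.2       # gas density, kg/m^3

V_r = (2.0 / 9.0) * (r_p ** 2 / mu) * (V_t ** 2 / r) * (rho_p - rho_f)
print(V_r)        # positive, so this particle drifts outward toward the cyclone wall
```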
[ { "math_id": 0, "text": "V_{in}" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "V_t" }, { "math_id": 3, "text": "V_r" }, { "math_id": 4, "text": " F_d = -6 \\pi r_p \\mu V_{r} ." }, { "math_id": 5, "text": "\\rho_p" }, { "math_id": 6, "text": " F_c= m \\frac{V_t^2}{r} " }, { "math_id": 7, "text": " = \\frac{4}{3} \\pi \\rho_p r_p^3 \\frac{V_t^2 }{r} ." }, { "math_id": 8, "text": "\\rho_f" }, { "math_id": 9, "text": " F_b = -V_p\\rho_f \\frac{V_t^2}{r} " }, { "math_id": 10, "text": " = -\\frac{4 \\pi r_p^3}{3} \\frac{V_t^2}{r}\\rho_f ." }, { "math_id": 11, "text": " V_p " }, { "math_id": 12, "text": " m \\frac{dV_r}{dt} = F_d + F_c + F_b " }, { "math_id": 13, "text": "\\frac{dV_r}{dt}" }, { "math_id": 14, "text": "F_d + F_c + F_b = 0 " }, { "math_id": 15, "text": " -6\\pi r_p \\mu V_r + \\frac{4}{3}\\pi r_p^3 \\frac{V_t^2}{r}\\rho_p -\\frac{4}{3}\\pi r_p^3 \\frac{V_t^2}{r}\\rho_f =0 " }, { "math_id": 16, "text": " V_r = \\frac{2}{9} \\frac{r_p^2}{\\mu} \\frac{V_t^2}{r} (\\rho _p - \\rho _f)" }, { "math_id": 17, "text": " \\frac{dV_r}{dt} + \\frac{9}{2} \\frac{\\mu}{\\rho_p r_p^2}V_r - \\left(1-\\frac{\\rho_f}{\\rho_p}\\right) \\frac{V_t^2}{r} = 0" }, { "math_id": 18, "text": "x''+c_1 x'+c_2=0" }, { "math_id": 19, "text": "r^2" }, { "math_id": 20, "text": "V_t \\propto r^2 ." }, { "math_id": 21, "text": " U_r = U_{in}\\frac{r}{R_{in}} ." } ]
https://en.wikipedia.org/wiki?curid=766619
7667736
Digital sundial
A digital sundial is a clock that indicates the current time with numerals formed by the sunlight striking it. Like a classical sundial, the device contains no moving parts. It uses neither electricity nor other manufactured sources of energy. The digital display changes as the sun advances in its daily course. Technique. There are two basic types of digital sundials. One type uses optical waveguides, while the other is inspired by fractal geometry. Optical fiber sundial. Sunlight enters the device through a slit and moves as the sun advances. The sun's rays shine on ten linearly distributed sockets of optical waveguides that transport the light to a seven-segment display. Each socket fiber is connected to a few segments forming the digit corresponding to the position of the sun. Fractal sundial. The theoretical basis for the other construction comes from fractal geometry. For the sake of simplicity, we describe a two-dimensional (planar) version. Let "L""θ" denote a straight line passing through the origin of a Cartesian coordinate system and making angle "θ" ∈ [0,π) with the "x"-axis. For any "F" ⊂ ℝ2 define proj"θ" "F" to be the perpendicular projection of "F" on the line "L""θ". Theorem. Let "G""θ" ⊂ "L""θ", "θ" ∈ [0,π) be a family of any sets such that formula_0 "G""θ" is a measurable set in the plane. Then there exists a set "F" ⊂ ℝ2 such that "G""θ" ⊂ proj"θ" "F" for every "θ", and, for almost all "θ" ∈ [0,π), the part of proj"θ" "F" not belonging to "G""θ" has measure zero. In other words, there exists a set with prescribed projections in "almost" all directions. This theorem can be generalized to three-dimensional space. For a non-trivial choice of the family "G""θ", the set "F" described above is a fractal. Application. Theoretically, it is possible to build a set of masks that produce shadows in the form of digits, such that the display changes as the sun moves. This is the fractal sundial. The theorem was proved in 1987 by Kenneth Falconer. Four years later it was described in "Scientific American" by Ian Stewart. The first prototype of a digital sundial was constructed in 1994; it writes the numbers with light instead of shadow, as Falconer proved. In 1998 a digital sundial was installed for the first time in a public place (Genk, Belgium). There exist window and tabletop versions as well. Julldozer in October 2015 published an open-source 3D printed model sundial. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\bigcup_\\theta" } ]
https://en.wikipedia.org/wiki?curid=7667736
7667996
Triangle-free graph
Graph without triples of adjacent vertices In the mathematical area of graph theory, a triangle-free graph is an undirected graph in which no three vertices form a triangle of edges. Triangle-free graphs may be equivalently defined as graphs with clique number ≤ 2, graphs with girth ≥ 4, graphs with no induced 3-cycle, or locally independent graphs. By Turán's theorem, the "n"-vertex triangle-free graph with the maximum number of edges is a complete bipartite graph in which the numbers of vertices on each side of the bipartition are as equal as possible. Triangle finding problem. The triangle finding or triangle detection problem is the problem of determining whether a graph is triangle-free or not. When the graph does contain a triangle, algorithms are often required to output three vertices which form a triangle in the graph. It is possible to test whether a graph with formula_0 edges is triangle-free in time formula_1, where the formula_2 hides sub-polynomial factors. Here formula_3 is the exponent of fast matrix multiplication; formula_4, from which it follows that triangle detection can be solved in time formula_5. Another approach is to find the trace of "A"3, where A is the adjacency matrix of the graph. The trace is zero if and only if the graph is triangle-free. For dense graphs, it is more efficient to use this simple algorithm which again relies on matrix multiplication, since it gets the time complexity down to formula_6, where formula_7 is the number of vertices. Even if matrix multiplication algorithms with time formula_8 were discovered, the best time bounds that could be hoped for from these approaches are formula_9 or formula_8. In fine-grained complexity, the "sparse triangle hypothesis" is an unproven computational hardness assumption asserting that no time bound of the form formula_10 is possible, for any formula_11, regardless of what algorithmic techniques are used. It, and the corresponding "dense triangle hypothesis" that no time bound of the form formula_12 is possible, imply lower bounds for several other computational problems in combinatorial optimization and computational geometry. Triangle-free graph recognition has been shown to be equivalent in complexity to median graph recognition; however, the current best algorithms for median graph recognition use triangle detection as a subroutine rather than vice versa. The decision tree complexity or query complexity of the problem, where the queries are to an oracle which stores the adjacency matrix of a graph, is Θ("n"2). However, for quantum algorithms, the best known lower bound is Ω("n"), but the best known algorithm is "O"("n"5/4). Independence number and Ramsey theory. An independent set of formula_13 vertices (where formula_14 is the floor function) in an "n"-vertex triangle-free graph is easy to find: either there is a vertex with at least formula_13 neighbors (in which case those neighbors are an independent set) or all vertices have strictly less than formula_13 neighbors (in which case any maximal independent set must have at least formula_13 vertices). This bound can be tightened slightly: in every triangle-free graph there exists an independent set of formula_15 vertices, and in some triangle-free graphs every independent set has formula_16 vertices. One way to generate triangle-free graphs in which all independent sets are small is the "triangle-free process" in which one generates a maximal triangle-free graph by repeatedly adding randomly chosen edges that do not complete a triangle. 
With high probability, this process produces a graph with independence number formula_16. It is also possible to find regular graphs with the same properties. These results may also be interpreted as giving asymptotic bounds on the Ramsey numbers R(3,"t") of the form formula_17: if the edges of a complete graph on formula_18 vertices are colored red and blue, then either the red graph contains a triangle or, if it is triangle-free, then it must have an independent set of size "t" corresponding to a clique of the same size in the blue graph. Coloring triangle-free graphs. Much research about triangle-free graphs has focused on graph coloring. Every bipartite graph (that is, every 2-colorable graph) is triangle-free, and Grötzsch's theorem states that every triangle-free planar graph may be 3-colored. However, nonplanar triangle-free graphs may require many more than three colors. The first construction of triangle free graphs with arbitrarily high chromatic number is due to Tutte (writing as Blanche Descartes). This construction started from the graph with a single vertex say formula_19 and inductively constructed formula_20 from formula_21 as follows: let formula_21 have formula_7 vertices, then take a set formula_22 of formula_23 vertices and for each subset formula_24 of formula_22 of size formula_7 add a disjoint copy of formula_21 and join it to formula_24 with a matching. From the pigeonhole principle it follows inductively that formula_20 is not formula_25 colourable, since at least one of the sets formula_24 must be coloured monochromatically if we are only allowed to use k colours. defined a construction, now called the Mycielskian, for forming a new triangle-free graph from another triangle-free graph. If a graph has chromatic number "k", its Mycielskian has chromatic number "k" + 1, so this construction may be used to show that arbitrarily large numbers of colors may be needed to color nonplanar triangle-free graphs. In particular the Grötzsch graph, an 11-vertex graph formed by repeated application of Mycielski's construction, is a triangle-free graph that cannot be colored with fewer than four colors, and is the smallest graph with this property. and showed that the number of colors needed to color any "m"-edge triangle-free graph is formula_26 and that there exist triangle-free graphs that have chromatic numbers proportional to this bound. There have also been several results relating coloring to minimum degree in triangle-free graphs. proved that any "n"-vertex triangle-free graph in which each vertex has more than 2"n"/5 neighbors must be bipartite. This is the best possible result of this type, as the 5-cycle requires three colors but has exactly 2"n"/5 neighbors per vertex. Motivated by this result, conjectured that any "n"-vertex triangle-free graph in which each vertex has at least "n"/3 neighbors can be colored with only three colors; however, disproved this conjecture by finding a counterexample in which each vertex of the Grötzsch graph is replaced by an independent set of a carefully chosen size. showed that any "n"-vertex triangle-free graph in which each vertex has more than 10"n"/29 neighbors must be 3-colorable; this is the best possible result of this type, because Häggkvist's graph requires four colors and has exactly 10"n"/29 neighbors per vertex. Finally, proved that any "n"-vertex triangle-free graph in which each vertex has more than "n"/3 neighbors must be 4-colorable. 
Additional results of this type are not possible, as Hajnal found examples of triangle-free graphs with arbitrarily large chromatic number and minimum degree (1/3 − ε)"n" for any ε &gt; 0. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
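As a concrete illustration of the trace-based test described in the triangle finding section above, the following Python sketch checks triangle-freeness from an adjacency matrix. It is a minimal illustration (the example graphs and the function name are invented for the demonstration), not an optimized implementation of the fast matrix-multiplication bounds.

```python
import numpy as np

def is_triangle_free(adj: np.ndarray) -> bool:
    """True if the simple undirected graph with adjacency matrix `adj` has no triangle.
    Uses the fact that trace(A^3) equals six times the number of triangles."""
    a = np.asarray(adj, dtype=np.int64)
    return np.trace(a @ a @ a) == 0

# A 4-cycle is triangle-free; adding a chord creates two triangles.
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
chorded = c4.copy()
chorded[0, 2] = chorded[2, 0] = 1

print(is_triangle_free(c4))       # True
print(is_triangle_free(chorded))  # False
```

This direct approach costs a couple of dense matrix products and corresponds to the simple matrix-multiplication method mentioned above for dense graphs; the faster algorithms for sparse graphs require a more careful treatment of high- and low-degree vertices.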
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "\\tilde O\\bigl(m^{2\\omega/(\\omega+1)}\\bigr)" }, { "math_id": 2, "text": "\\tilde O" }, { "math_id": 3, "text": "\\omega" }, { "math_id": 4, "text": "\\omega<2.372" }, { "math_id": 5, "text": "O(m^{1.407})" }, { "math_id": 6, "text": "O(n^{\\omega})" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "O(n^2)" }, { "math_id": 9, "text": "O(m^{4/3})" }, { "math_id": 10, "text": "O(m^{4/3-\\delta})" }, { "math_id": 11, "text": "\\delta>0" }, { "math_id": 12, "text": "O(n^{\\omega-\\delta})" }, { "math_id": 13, "text": "\\lfloor\\sqrt{n}\\rfloor" }, { "math_id": 14, "text": "\\lfloor\\cdot\\rfloor" }, { "math_id": 15, "text": "\\Omega(\\sqrt{n\\log n})" }, { "math_id": 16, "text": "O(\\sqrt{n\\log n})" }, { "math_id": 17, "text": "\\Theta(\\tfrac{t^2}{\\log t})" }, { "math_id": 18, "text": "\\Omega(\\tfrac{t^2}{\\log t})" }, { "math_id": 19, "text": "G_1" }, { "math_id": 20, "text": "G_{k+1}" }, { "math_id": 21, "text": "G_{k}" }, { "math_id": 22, "text": "Y" }, { "math_id": 23, "text": "k(n-1)+1" }, { "math_id": 24, "text": "X" }, { "math_id": 25, "text": "k" }, { "math_id": 26, "text": "O \\left(\\frac{m^{1/3}}{(\\log m)^{2/3}} \\right)" }, { "math_id": 27, "text": "KG_{3k-1, k}" }, { "math_id": 28, "text": "k + 1" } ]
https://en.wikipedia.org/wiki?curid=7667996
76686234
Join count statistic
Statistics of spatial association Join count statistics are a method of spatial analysis used to assess the degree of association, in particular the autocorrelation, of categorical variables distributed over a spatial map. They were originally introduced by Australian statistician P. A. P. Moran. Join count statistics have found widespread use in econometrics, remote sensing and ecology. Join count statistics can be computed in a number of software packages including PASSaGE, GeoDA, PySAL and spdep. Binary data. Given binary data formula_1 distributed over formula_2 spatial sites, where the neighbour relations between regions formula_3 and formula_4 are encoded in the spatial weight matrix formula_5, the join count statistics are defined as formula_6 where formula_7, formula_8, formula_9, and formula_10. The formula_11 subscripts refer to 'black'=1 and 'white'=0 sites. The relation formula_6 implies only three of the four numbers are independent. Generally speaking, large values of formula_12 and formula_13 relative to formula_0 imply autocorrelation, while relatively large values of formula_0 imply anti-correlation. To assess the statistical significance of these statistics, the expectation under various null models has been computed. For example, if the null hypothesis is that each sample is chosen at random according to a Bernoulli process with probability formula_14, then Cliff and Ord show that formula_15, formula_16, formula_17, and formula_18, where formula_19, formula_20, and formula_21. However, in practice an approach based on random permutations is preferred, since it requires fewer assumptions. Local join count statistic. Anselin and Li introduced the idea of the local join count statistic, following Anselin's general idea of a Local Indicator of Spatial Association (LISA). The local join count is defined by, for example, formula_22, with similar definitions for formula_23 and formula_24. This is equivalent to the Getis-Ord statistics computed with binary data. Some analytic results for the expectation of the local statistics are available based on the hypergeometric distribution, but due to the multiple comparisons problem a permutation-based approach is again preferred in practice. Extension to multiple categories. When there are formula_25 categories, the join count statistics have been generalised to formula_26 where formula_27 is an indicator function for the variable formula_28 belonging to the category formula_29. Analytic results are available, or a permutation approach can be used to test for significance as in the binary case. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
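To make the definitions above concrete, the following Python sketch computes the three join counts for binary data on a small map, given a symmetric binary spatial weight matrix. It is an illustrative example with invented data, not code taken from the packages mentioned above (PASSaGE, GeoDA, PySAL, spdep).

```python
import numpy as np

def join_counts(x: np.ndarray, w: np.ndarray):
    """BB, BW and WW join counts for binary data x (values 0 or 1) and a
    symmetric binary weight matrix w with zero diagonal."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    bb = 0.5 * np.sum(w * np.outer(x, x))                    # black-black joins
    bw = 0.5 * np.sum(w * (x[:, None] - x[None, :]) ** 2)    # black-white joins
    ww = 0.5 * np.sum(w * np.outer(1 - x, 1 - x))            # white-white joins
    return bb, bw, ww

# Four sites in a row, each linked to its immediate neighbours.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = np.array([1, 1, 0, 0])       # two 'black' sites followed by two 'white' sites
print(join_counts(x, w))         # (1.0, 1.0, 1.0): one join of each kind
```

A permutation test can then be built by repeatedly shuffling `x` and recomputing the counts to obtain a reference distribution, which is the approach preferred in practice as noted above.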
[ { "math_id": 0, "text": "J_{BW}" }, { "math_id": 1, "text": "x_i \\in \\{0,1\\}" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": "w_{ij} = \\begin{cases}\n1 \\qquad &i\\text{ neighbor of }j\\\\\n0 &\\text{otherwise}\n\\end{cases}" }, { "math_id": 6, "text": "\nJ = J_{BB} + J_{BW} + J_{WW}\n" }, { "math_id": 7, "text": "\nJ_{BB} = \\frac{1}{2}\\sum_{ij, i\\neq j} w_{ij} x_i x_j\n" }, { "math_id": 8, "text": "\nJ_{BW} = \\frac{1}{2}\\sum_{ij, i\\neq j} w_{ij} (x_i-x_j)^2\n" }, { "math_id": 9, "text": "\nJ_{WW} = \\frac{1}{2}\\sum_{ij, i\\neq j} w_{ij} (1-x_i) (1-x_j)\n" }, { "math_id": 10, "text": "\nJ = \\frac{1}{2}\\sum_{ij, i\\neq j} w_{ij}\n" }, { "math_id": 11, "text": "B,W" }, { "math_id": 12, "text": "J_{BB}" }, { "math_id": 13, "text": "J_{WW}" }, { "math_id": 14, "text": "p = \\frac{ \\text{number of black cells} }{ N } = \\frac{N_1}{N}" }, { "math_id": 15, "text": "\nE(J_{BB}) = \\frac{1}{2} S_0 p^2\n" }, { "math_id": 16, "text": "\nvar(J_{BB}) = \\frac{p^2(1-p)}{4} ([ S_1(1-p) + S_2p]) \n" }, { "math_id": 17, "text": "\nE(J_{BW}) = S_0 p(1-p)\n" }, { "math_id": 18, "text": "\nvar(J_{BW}) = \\frac{p(1-p)}{4} [ 4 S_1 + S_2(1-4p(1-p))] \n" }, { "math_id": 19, "text": "S_0 = \\sum_{ij} w_{ij}" }, { "math_id": 20, "text": "S_1 = \\frac{1}{2}\\sum_{ij}(w_{ji} + w_{ij})^2" }, { "math_id": 21, "text": "S_2 = \\sum_{i}( \\sum_j w_{ji} + \\sum_j w_{ij})^2" }, { "math_id": 22, "text": "\nJ_{BBi} = x_i \\sum_j w_{ij} x_j\n" }, { "math_id": 23, "text": "BW" }, { "math_id": 24, "text": "WW" }, { "math_id": 25, "text": "k \\geq 2" }, { "math_id": 26, "text": "\nJ_{rs} = \\frac{1}{2} \\sum_{ij} I_r(x_i) I_s(x_j)\n" }, { "math_id": 27, "text": "I_r(x_i) = \\delta_{r,x_i}" }, { "math_id": 28, "text": "x_i" }, { "math_id": 29, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=76686234
76689273
Spatial weight matrix
Neighbor Matrix The concept of a spatial weight is used in spatial analysis to describe neighbor relations between regions on a map. If location formula_0 is a neighbor of location formula_1 then formula_2, otherwise formula_3. Usually (though not always) we do not consider a site to be a neighbor of itself, so formula_4. These coefficients are encoded in the spatial weight matrix formula_5 where formula_6 is the number of sites under consideration. The spatial weight matrix is a key quantity in the computation of many spatial indices like Moran's I, Geary's C, Getis-Ord statistics and Join Count Statistics. Contiguity-Based Weights. This approach considers spatial sites as nodes in a graph with links determined by a shared boundary or vertex. The elements of the spatial weight matrix are determined by setting formula_7 for all connected pairs of nodes formula_8, with all the other elements set to 0. This makes the spatial weight matrix equivalent to the adjacency matrix of the corresponding network. It is common to row-normalize the matrix formula_9 via formula_10 In this case the sum of all the elements of formula_9 equals formula_6, the number of sites. There are three common methods for linking sites, named after the chess pieces which make similar moves: rook contiguity (sites sharing an edge are neighbors), bishop contiguity (sites sharing only a vertex are neighbors), and queen contiguity (sites sharing either an edge or a vertex are neighbors). In some cases statistics can be quite different depending on the definition used, especially for discrete data on a grid. There are also other cases where the choice of neighbors is not obvious and can affect the outcome of the analysis. Bivand and Wong describe a situation where the value of spatial indices of association (like Moran's I) depends on the inclusion or exclusion of a ferry crossing between counties. There are also cases where regions meet in a tripoint or quadripoint where Rook and Queen neighborhoods can differ. Distance-Based Weights. Another way to define spatial neighbors is based on the distance between sites. One simple choice is to set formula_7 for every pair formula_11 separated by a distance less than some threshold formula_12. Cliff and Ord suggest the general form formula_13 where formula_14 is some function of formula_15, the distance between formula_0 and formula_1, and formula_16 is the proportion of the perimeter of formula_0 in contact with formula_1. The function formula_17 is then suggested. Often the formula_18 term is not included and the most common values for formula_19 are 1 and 2. Another common choice for the distance decay function is formula_20, though a number of different kernel functions can be used. The exponential and other kernel functions typically set formula_21, which must be considered in applications. It is possible to make the spatial weight matrix a function of 'distance class': formula_22 where formula_23 denotes the 'distance class', for example formula_24 corresponding to first, second, third etc. neighbors. In this case, functions of the spatial weight matrix become distance class dependent. For example, Moran's I is formula_25 This defines a type of spatial correlogram; in this case, since Moran's "I" measures spatial autocorrelation, formula_26 measures how the autocorrelation of the data changes as a function of distance class. Remembering Tobler's first law of geography, "everything is related to everything else, but near things are more related than distant things", it usually decreases with distance. Common distance functions include Euclidean distance, Manhattan distance and great-circle distance. Spatial Lag. 
One application of the spatial weight matrix is to compute the spatial lag formula_27. For row-standardised weights initially set to formula_7 and with formula_4, formula_28 is simply the average value observed at the neighbors of formula_0. These lagged variables can then be used in regression analysis to incorporate the dependence of the outcome variable on the values at neighboring sites. The standard regression equation is formula_29 The "spatial lag model" adds the spatial lag vector to this, giving formula_30 where formula_31 is a parameter which controls the degree of autocorrelation of formula_32. This is similar to an autoregressive model in the analysis of time series. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
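The constructions described above can be illustrated with a short sketch that builds a rook-contiguity weight matrix for a small regular grid, row-standardises it, and computes the spatial lag of a variable. The grid, the variable and the function name are invented for the demonstration; real analyses would normally rely on a dedicated package.

```python
import numpy as np

def rook_weights(nrows: int, ncols: int) -> np.ndarray:
    """Binary rook-contiguity weight matrix for an nrows x ncols grid,
    with cells numbered row by row."""
    n = nrows * ncols
    w = np.zeros((n, n))
    for r in range(nrows):
        for c in range(ncols):
            i = r * ncols + c
            if c + 1 < ncols:                 # neighbour to the east
                w[i, i + 1] = w[i + 1, i] = 1
            if r + 1 < nrows:                 # neighbour to the south
                w[i, i + ncols] = w[i + ncols, i] = 1
    return w

w = rook_weights(3, 3)
w_std = w / w.sum(axis=1, keepdims=True)      # row-standardisation
x = np.arange(9, dtype=float)                 # an invented variable on the grid
lag = w_std @ x                               # spatial lag: average of each cell's neighbours
print(lag)
```

With row-standardised weights, each entry of `lag` is the mean value of the corresponding cell's neighbours, which is exactly the quantity that enters the spatial lag model above.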
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "j" }, { "math_id": 2, "text": "w_{ij} \\neq 0" }, { "math_id": 3, "text": "w_{ij} = 0" }, { "math_id": 4, "text": "w_{ii} = 0" }, { "math_id": 5, "text": "\nW = \\begin{pmatrix}\nw_{11} & w_{12} & \\ldots & w_{1N} \\\\\nw_{21} & w_{22} & \\ldots & w_{2N} \\\\\n\\vdots & \\vdots & \\vdots & \\vdots \\\\\nw_{N1} & w_{N2} & \\ldots & w_{NN} \\\\\n\\end{pmatrix}\n" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "w_{ij} = 1" }, { "math_id": 8, "text": "ij" }, { "math_id": 9, "text": "W" }, { "math_id": 10, "text": "w_{ij} \\rightarrow w_{ij}/\\sum_j w_{ij}" }, { "math_id": 11, "text": "(i,j)" }, { "math_id": 12, "text": "\\delta" }, { "math_id": 13, "text": "\nw_{ij} = g(d_{ij}, \\beta_{ij})\n" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "d_{ij}" }, { "math_id": 16, "text": "\\beta_{ij}" }, { "math_id": 17, "text": "\nw_{ij} = d_{ij}^{-\\alpha} \\beta_{ij}^{b}\n" }, { "math_id": 18, "text": "\\beta" }, { "math_id": 19, "text": "\\alpha" }, { "math_id": 20, "text": "\nw_{ij} = \\exp( - d_{ij} )\n" }, { "math_id": 21, "text": "w_{ii} = 1" }, { "math_id": 22, "text": "w_{ij} \\rightarrow w_{ij}(d)" }, { "math_id": 23, "text": "d" }, { "math_id": 24, "text": "d=1,2,3,\\ldots" }, { "math_id": 25, "text": " I(d) = \\frac{ N }{|W(d)|} \\frac {\\sum_{i=1}^N \\sum_{j=1}^N w_{ij}(d)(x_i-\\bar x) (x_j-\\bar x)} {\\sum_{i=1}^N (x_i-\\bar x)^2} " }, { "math_id": 26, "text": "I(d)" }, { "math_id": 27, "text": "\n[Wx]_i = \\sum_j w_{ij} x_j\n" }, { "math_id": 28, "text": "[Wx]_i" }, { "math_id": 29, "text": "\ny_i = \\sum_k x_{ik} \\beta_k + \\epsilon_i\n" }, { "math_id": 30, "text": "\ny_i = \\rho\\sum_j w_{ij}y_j + \\sum_k x_{ik} \\beta_k + \\epsilon_i\n" }, { "math_id": 31, "text": "\\rho" }, { "math_id": 32, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=76689273
76690741
Strict Fibonacci heap
Optimal data structure for priority queues In computer science, a strict Fibonacci heap is a priority queue data structure with low worst case time bounds. It matches the amortized time bounds of the Fibonacci heap in the worst case. To achieve these time bounds, strict Fibonacci heaps maintain several invariants by performing restoring transformations after every operation. These transformations can be done in constant time by using auxiliary data structures to track invariant violations, and the pigeonhole principle guarantees that these can be fixed. Strict Fibonacci heaps were invented in 2012 by Gerth S. Brodal, George Lagogiannis, and Robert E. Tarjan. Along with Brodal queues, strict Fibonacci heaps belong to a class of asymptotically optimal data structures for priority queues. All operations on strict Fibonacci heaps run in worst case constant time except "delete-min", which is necessarily logarithmic. This is optimal, because any priority queue can be used to sort a list of formula_0 elements by performing formula_0 insertions and formula_0 "delete-min" operations. However, strict Fibonacci heaps are simpler than Brodal queues, which make use of dynamic arrays and redundant counters, whereas the strict Fibonacci heap is pointer based only. Structure. A strict Fibonacci heap is a single tree satisfying the minimum-heap property. That is, the key of a node is always smaller than or equal to the keys of its children. As a direct consequence, the node with the minimum key always lies at the root. Like ordinary Fibonacci heaps, strict Fibonacci heaps possess substructures similar to binomial heaps. To identify these structures, we label every node with one of two types. We thus introduce the following definitions and rules; in particular, invariant 1 states that the formula_1th rightmost active child formula_2 of an active node satisfies formula_3. Invariants. Thus, the loss of an active node can be viewed as a generalisation of Fibonacci heap 'marks'. For example, a subtree consisting of only active nodes with loss zero is a binomial tree. In addition, several invariants are maintained which impose logarithmic bounds on three main quantities: the number of active roots, the total loss, and the degrees of nodes. This is in contrast to the ordinary Fibonacci heap, which is more flexible and allows structural violations to grow on the order of formula_4 to be cleaned up later, as it is a lazy data structure. To assist in keeping the degrees of nodes logarithmic, every non-root node also participates in a queue formula_5. In the following section, and for the rest of this article, we define the real number formula_6, where formula_0 is the number of nodes in the heap, and formula_7 denotes the binary logarithm. Invariant 2: the total number of active roots is at most formula_8. Invariant 3: the total loss in the heap is at most formula_8. Invariant 4: the degree of the root is at most formula_9. Invariant 5: for an active node with zero loss, the degree is at most formula_10, where formula_11 is its position in formula_5 (with 1 as the first element); for all other non-root nodes, the degree is at most formula_12. Corollary 1: the degree of any non-root node is at most formula_13. "Proof": This follows immediately from invariant 5. Letting formula_14, we have formula_15 Lemma 1: if invariant 1 holds, the maximum rank formula_16 of any active node is at most formula_17, where formula_18 is the total loss. "Proof": We proceed by contradiction. 
Let formula_19 be an active node with maximal rank formula_16 in a heap with formula_0 nodes and total loss formula_18, and assume that formula_20, where formula_21 is the smallest integer such that formula_22. Our goal is to show that the subtree formula_23 rooted at formula_19 contains at least formula_24 nodes, which is a contradiction because there are only formula_0 nodes in the heap. Discard all subtrees rooted at passive nodes from formula_23, leaving it with only active nodes. Cut off all the grandchildren of formula_19 whose subtrees contain any node of positive loss, and increase the loss of the children of formula_19 accordingly, once for each grandchild lost. The quantity formula_25 is unchanged for the remaining nodes, preserving invariant 1. Furthermore, the total loss is still at most formula_18. The children of formula_19 now consist of loss-free subtrees and leaf nodes with positive loss. Currently, formula_23 satisfies formula_3 for the formula_1th rightmost child formula_2 of formula_19. We make this an exact equality by first reducing the loss of each formula_2, and pruning any grandchildren if necessary. Afterwards, formula_26 exactly. All other descendants of formula_19 are also converted into binomial subtrees by pruning children as necessary. We now attempt to reconstruct a minimal version of formula_23 by starting with a binomial tree of degree formula_16, containing formula_27 active nodes. We wish to increase the loss to formula_18, but keep the rank of formula_19 as formula_16 and the number of nodes as low as possible. For a binomial tree of degree formula_16, there is one child of each degree from formula_28 to formula_29. Hence, there are formula_30 grandchildren of order formula_31. If we cut all the grandchildren whose degree formula_32, then we have cut formula_33 grandchildren, which is sufficient to bring the loss up to formula_18. All grandchildren with degree formula_34 survive. Let formula_35 be the child of formula_19 with degree formula_36 and loss 0. By assumption, formula_37, and formula_35 is a complete binomial tree, so it has at least formula_38 nodes. Since this would mean formula_23 has at least formula_39 nodes, we have reached a contradiction, and therefore formula_40. Noting that formula_41, we obtain formula_42. Corollary 2: if invariants 1 and 3 both hold, then the maximum rank is formula_43. "Proof": From invariant 3, we have formula_44. By substituting this into lemma 1, we calculate as follows: formula_45 Transformations. The following transformations restore the above invariants after a priority queue operation has been performed. There are three main quantities we wish to minimize: the number of active roots, the total loss in the heap, and the degree of the root. All transformations can be performed in formula_46 time, which is possible by maintaining auxiliary data structures to track candidate nodes (described in the section on implementation). Active root reduction. Let formula_19 and formula_47 be active roots with equal rank formula_16, and assume formula_48. Link formula_47 as the leftmost child of formula_19 and increase the rank of formula_19 by 1. If the rightmost child formula_49 of formula_19 is passive, link formula_49 to the root. As a result, formula_47 is no longer an active root, so the number of active roots decreases by 1. However, the degree of the root node may increase by 1. Since formula_47 becomes the formula_50th rightmost child of formula_19, and formula_47 has rank formula_16, invariant 1 is preserved. 
Lemma 2: if invariant 2 is violated, but invariants 1 and 3 hold, then active root reduction is possible. "Proof": Because invariant 2 is broken, there are more than formula_8 active roots present. From corollary 2, the maximum rank of a node is formula_43. By the pigeonhole principle, there exists a pair of active roots with the same rank. Loss reduction. One node loss reduction. Let formula_19 be an active non-root with loss at least 2. Link formula_19 to the root, thus turning it into an active root, and resetting its loss to 0. Let the original parent of formula_19 be formula_47. formula_47 must be active, since otherwise formula_19 would have previously been an active root, and thus could not have had positive loss. The rank of formula_47 is decreased by 1. If formula_47 is not an active root, increase its loss by 1. Overall, the total loss decreases by 1 or 2. As a side effect, the root degree and number of active roots increase by 1, making it less preferable than two node loss reduction, but still a necessary operation. Two node loss reduction. Let formula_19 and formula_47 be active nodes with equal rank formula_16 and loss equal to 1, and let formula_49 be the parent of formula_47. Without loss of generality, assume that formula_48. Detach formula_47 from formula_49, and link formula_47 to formula_19. Increase the rank of formula_19 by 1 and reset the loss of formula_19 and formula_47 from 1 to 0. formula_49 must be active, since formula_47 had positive loss and could not have been an active root. Hence, the rank of formula_49 is decreased by 1. If formula_49 is not an active root, increase its loss by 1. Overall, the total loss decreases by either 1 or 2, with no side effects. Lemma 3: if invariant 3 is violated by 1, but invariant 2 holds, then loss reduction is possible. "Proof": We apply the pigeonhole principle again. If invariant 3 is violated by 1, the total loss is formula_51. Lemma 1 can be reformulated to also work with formula_51. Thus, corollary 2 holds. Since the maximum rank is formula_43, there either exists a pair of active nodes with equal rank and loss 1, or an active node with formula_52. Both cases present an opportunity for loss reduction. Root degree reduction. Let formula_19, formula_47, and formula_49 be the three rightmost passive linkable children of the root. Detach them all from the root and sort them such that formula_53. Change formula_19 and formula_47 to be active. Link formula_49 to formula_47, link formula_47 to formula_19, and link formula_19 as the leftmost child of the root. As a result, formula_19 becomes an active root with rank 1 and loss 0. The rank and loss of formula_47 are set to 0. The net change of this transformation is that the degree of the root node decreases by 2. As a side effect, the number of active roots increases by 1. Lemma 4: if invariant 4 is violated, but invariant 2 holds, then root degree reduction is possible. "Proof": If invariant 4 is broken, then the degree of the root is at least formula_54. The children of the root fall into three categories: active roots, passive non-linkable nodes, and passive linkable nodes. Each passive non-linkable node subsumes an active root, since its subtree contains at least one active node. Because the number of active roots is at most formula_8, the rightmost three children of the root must therefore be passive linkable. Summary. The following table summarises the effect of each transformation on the three important quantities. 
Individually, each transformation may violate invariants, but we are only interested in certain combinations of transformations which do not increase any of these quantities. When deciding which transformations to perform, we consider only the worst case effect of these operations, for simplicity. The two types of loss reduction are also considered to be the same operation. As such, we define 'performing a loss reduction' to mean attempting each type of loss reduction in turn. Implementation. Linking nodes. To ensure active nodes lie to the left of passive nodes, and preserve invariant 1, the linking operation should place active nodes on the left, and passive nodes on the right. It is necessary for active and passive nodes to coexist in the same list, because the merge operation changes all nodes in the smaller heap to be passive. If they existed in two separate lists, the lists would have to be concatenated, which cannot be done in constant time for all nodes. For the root, we also pose the requirement that passive linkable children lie to the right of the passive non-linkable children. Since we wish to be able to link nodes to the root in constant time, a pointer to the first passive linkable child of the root must be maintained. Finding candidate nodes. The invariant restoring transformations rely on being able to find candidate nodes in formula_46 time. This means that we must keep track of active roots with the same rank, nodes with loss 1 of the same rank, and nodes with loss at least 2. The original paper by Brodal et al. described a "fix-list" and a "rank-list" as a way of tracking candidate nodes. Fix-list. The fix-list is divided into four parts: To check if active root reduction is possible, we simply check if part 1 is non-empty. If it is non-empty, the first two nodes can be popped off and transformed. Similarly, to check if loss reduction is possible, we check the end of part 4. If it contains a node with loss at least 2, one node loss reduction is performed. Otherwise, if the last two nodes both have loss 1, and are of the same rank, two node loss reduction is performed. Rank-list. The rank-list is a doubly linked list containing information about each rank, to allow nodes of the same rank to be partnered together in the fix-list. For each node representing rank formula_16 in the rank-list, we maintain: The fix-list and rank-list require extensive bookkeeping, which must be done whenever a new active node arises, or when the rank or loss of a node is changed. Shared flag. The merge operation changes all of the active nodes of the smaller heap into passive nodes. This can be done in formula_46 time by introducing a level of indirection. Instead of a boolean flag, each active node has a pointer towards an "active flag" object containing a boolean value. For passive nodes, it does not matter which active flag object they point to, as long as the flag object is set to passive, because it is not required to change many passive nodes into active nodes simultaneously. Storing keys. The decrease-key operation requires a reference to the node we wish to decrease the key of. However, the decrease-key operation itself sometimes swaps the key of a node and the key of the root. Assume that the insert operation returns some opaque reference that we can call decrease-key on, as part of the public API. If these references are internal heap nodes, then by swapping keys we have mutated these references, causing other references to become undefined. 
To ensure a key always stays with the same reference, it is necessary to 'box' the key. Each heap node now contains a pointer to a box containing a key, and the box also has a pointer to the heap node. When inserting an item, we create a box to store the key in, link the heap node to the box both ways, and return the box object. To swap the keys between two nodes, we re-link the pointers between the boxes and nodes instead. Operations. Merge. Let formula_56 and formula_57 be strict Fibonacci heaps. If either is empty, return the other. Otherwise, let formula_58 and formula_59 be their corresponding sizes. Without loss of generality, assume that formula_60. Since the sizes of the fix-list and rank-list of each heap are logarithmic with respect to the heap size, it is not possible to merge these auxiliary structures in constant time. Instead, we throw away the structure of the smaller heap formula_56 by discarding its fix-list and rank-list, and converting all of its nodes into passive nodes. This can be done in constant time, using a shared flag, as shown above. Link formula_56 and formula_57, letting the root with the smaller key become the parent of the other. Let formula_61 and formula_62 be the queues of formula_56 and formula_57 respectively. The queue of the resulting heap is set to formula_63, where formula_64 is the root with the larger key. The only possible structural violation is the root degree. This is solved by performing 1 active root reduction and 1 root degree reduction, if each transformation is possible. Proof of correctness. Invariants 1, 2, and 3 hold automatically, since the structure of the heap is discarded. As calculated above, any violations of invariant 4 are solved by the root degree reduction transformation. To verify invariant 5, we consider the final positions of nodes in formula_5. Each node has its degree bounded by formula_10 or formula_12. For the smaller heap formula_56, the positions in formula_61 are unchanged. However, all nodes in formula_56 are now passive, which means that their constraint may change from the formula_65 case to the formula_66 case. But noting that formula_60, the resulting size formula_67 is at least double formula_58. This results in an increase of at least 1 on each constraint, which eliminates the previous concern. The root with the larger key between formula_56 and formula_57 becomes a non-root, and is placed between formula_61 and formula_62 at position formula_58. By invariant 4, its degree was bounded by either formula_68 or formula_69, depending on which heap it came from. It is easy to see that this is less than formula_70 in any case. For the larger heap, the positions increase by formula_58. But since the resulting size is formula_67, the value formula_71 actually increases, weakening the constraint. Insert. Insertion can be considered a special case of the merge operation. To insert a single key, create a new heap containing a single passive node and an empty queue, and merge it with the main heap. Find-min. Due to the minimum-heap property, the node with the minimum key is always at the root, if it exists. Delete-min. If the root is the only node in the heap, we are done by simply removing it. Otherwise, search the children of the root to find the node formula_19 with minimum key, and set the new root to formula_19. If formula_19 is active, make it passive, causing all active children of formula_19 to implicitly become active roots. Link the children of the old root to formula_19. 
Since formula_19 is now the root, move all of its passive linkable children to the right, and remove formula_19 from formula_5. The degree of the root approximately doubles, because we have linked all the children of the old root to formula_19. We perform the following restorative transformations: To see how step 3 is bounded, consider the state after step 3: Observe that 3 active root reductions and 2 root degree reductions decrease the root degree and the number of active roots by 1: Since formula_6, step 3 never executes more than formula_72 times. Proof of correctness. Invariant 1 holds trivially, since no active roots are created. The size of the heap formula_0 decreases by one, causing formula_6 to decrease by at most one. Thus, invariant 3 is violated by at most 1. By lemma 3, loss reduction is possible, which has been done by step 2. Invariants 1 and 3 now hold. If invariants 2 and 4 were still violated after step 3, it would be possible to apply active root reduction and root degree reduction, by lemmas 2 and 4. However, active root reduction and root degree reduction have already been exhaustively applied. Therefore, invariants 2 and 4 also hold. To show that invariant 5 is satisfied, we first note that the heap size formula_0 has decreased by 1. Because the first 2 nodes in formula_5 are popped in step 1, the positions of the other elements in formula_5 decrease by 2. Therefore, the degree constraints formula_10 and formula_12 remain constant for these nodes. The two nodes which were popped previously had positions 1 and 2 in formula_5, and now have positions formula_73 and formula_74 respectively. The effect is that their degree constraints have strengthened by 2; however, we cut off two passive children for each of these nodes, which is sufficient to satisfy the constraint again. Decrease-key. Let formula_19 be the node whose key has been decreased. If formula_19 is the root, we are done. Otherwise, detach the subtree rooted at formula_19, and link it to the root. If the key of formula_19 is smaller than the key of the root, swap their keys. Up to three structural violations may have occurred. Unless formula_19 was already a child of the root, the degree of the root increases by 1. When formula_19 was detached from its original parent formula_47, we have the following cases: In the worst case, all three quantities (root degree, total loss, active roots) increase by 1. After performing 1 loss reduction, the worst case result is still that the root degree and number of active roots have both increased by 2. To fix these violations, we use the fact that 3 active root reductions and 2 root degree reductions decrease both of these quantities by 1. Hence, applying these transformations 6 and 4 times respectively is sufficient to eliminate all violations. Proof of correctness. The nodes which were previously the left siblings of formula_19 move to fill the gap left by formula_19, decreasing their index. Since their constraint has weakened, invariant 1 is unaffected. Invariant 5 trivially holds as formula_5 is unchanged. Lemmas 2, 3 and 4 guarantee the availability of active root reduction, loss reduction, and root degree reduction. Therefore, invariants 2, 3 and 4 hold. Performance. Although theoretically optimal, strict Fibonacci heaps are not useful in practical applications. They are extremely complicated to implement, requiring management of more than 10 pointers per node. 
While most operations run in formula_46 time, the constant factors may be very high, making them up to 20 times slower than their more common counterparts such as binary heaps or pairing heaps. Although the strict Fibonacci heap is comparatively simpler, experiments show that in practice it performs slower than the Brodal queue. Summary of running times. Here are time complexities of various heap data structures. The abbreviation am. indicates that the given complexity is amortized, otherwise it is a worst-case complexity. For the meaning of "O"("f") and "Θ"("f") see Big O notation. Names of operations assume a min-heap. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
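A full strict Fibonacci heap is too intricate to reproduce here, but the shared 'active flag' indirection from the implementation section can be shown in isolation. The sketch below is a toy illustration of that single idea, with invented class names; it is not part of the data structure's published code. All active nodes of one heap point to a single flag object, so a merge can demote every node of the smaller heap to passive in constant time by flipping that one flag.

```python
class ActiveFlag:
    """One flag object shared by all active nodes of a single heap."""
    def __init__(self) -> None:
        self.is_active = True

class Node:
    def __init__(self, key, flag: ActiveFlag) -> None:
        self.key = key
        self.flag = flag        # a passive node may point to any deactivated flag

    @property
    def active(self) -> bool:
        return self.flag.is_active

# All nodes of the (smaller) heap share one flag object.
flag = ActiveFlag()
nodes = [Node(k, flag) for k in (3, 1, 4, 1, 5)]
print([n.active for n in nodes])   # [True, True, True, True, True]

# During a merge, every node of the smaller heap becomes passive in O(1):
flag.is_active = False
print([n.active for n in nodes])   # [False, False, False, False, False]
```

The reverse operation is never required (many passive nodes never need to become active at once), which is why a passive node may keep pointing at any stale flag object, as noted above.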
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "c_i" }, { "math_id": 3, "text": "c_i.\\mathrm{rank} + c_i.\\mathrm{loss} \\ge i - 1" }, { "math_id": 4, "text": "O(n)" }, { "math_id": 5, "text": "Q" }, { "math_id": 6, "text": "R = 2 \\lg n + 6" }, { "math_id": 7, "text": "\\lg" }, { "math_id": 8, "text": "R+1" }, { "math_id": 9, "text": "R+3" }, { "math_id": 10, "text": "2 \\lg (2n - p) + 10" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "2 \\lg (2n - p) + 9" }, { "math_id": 13, "text": "R+6" }, { "math_id": 14, "text": "p = 0" }, { "math_id": 15, "text": "2 \\lg (2n - 0) + 10 = 2 \\lg n + 12 = R + 6" }, { "math_id": 16, "text": "r" }, { "math_id": 17, "text": "\\lg n + \\sqrt{2L} + 2" }, { "math_id": 18, "text": "L" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": "r \\ge \\lg n + k + 1" }, { "math_id": 21, "text": "k" }, { "math_id": 22, "text": "k(k+1)/2 \\ge L" }, { "math_id": 23, "text": "T_x" }, { "math_id": 24, "text": "n + 1" }, { "math_id": 25, "text": "\\mathrm{rank} + \\mathrm{loss}" }, { "math_id": 26, "text": "c_i.\\mathrm{rank} + c_i.\\mathrm{loss} = i - 1" }, { "math_id": 27, "text": "2^r" }, { "math_id": 28, "text": "0" }, { "math_id": 29, "text": "r-1" }, { "math_id": 30, "text": "j" }, { "math_id": 31, "text": "r-j-1" }, { "math_id": 32, "text": "\\le r-k-1" }, { "math_id": 33, "text": "\\sum_{j=1}^k j = k(k+1)/2" }, { "math_id": 34, "text": "\\le r-k-2" }, { "math_id": 35, "text": "w" }, { "math_id": 36, "text": "r-k-1" }, { "math_id": 37, "text": "r-k-1 \\ge \\lg n" }, { "math_id": 38, "text": "2^{\\lg n} = n" }, { "math_id": 39, "text": "n+1" }, { "math_id": 40, "text": "r < \\lg n + k + 1" }, { "math_id": 41, "text": "k < \\sqrt{k(k+1)} = \\sqrt{2L}" }, { "math_id": 42, "text": "r < \\lg n + \n\\sqrt{2L} + 2" }, { "math_id": 43, "text": "R" }, { "math_id": 44, "text": "L \\le R + 1" }, { "math_id": 45, "text": "\n\\begin{align}\nr &\\le \\lg n + \\sqrt{2L} + 2 \\\\\n &\\le \\lg n + \\sqrt{2(R + 1)} + 2 \\\\\n & = \\lg n + \\sqrt{4 \\lg n + 14} + 2 \\\\\n &\\le \\lg n + \\sqrt{(\\lg n)^2 + 2 \\cdot 4 \\lg n + 4^2} + 2 \\\\\n & = \\lg n + (\\lg n + 4) + 2 \\\\\n & = R\n\\end{align}\n" }, { "math_id": 46, "text": "O(1)" }, { "math_id": 47, "text": "y" }, { "math_id": 48, "text": "x.\\mathrm{key} \\le y.\\mathrm{key}" }, { "math_id": 49, "text": "z" }, { "math_id": 50, "text": "(r+1)" }, { "math_id": 51, "text": "L=R+2" }, { "math_id": 52, "text": "\\mathrm{loss} \\ge 2" }, { "math_id": 53, "text": "x.\\mathrm{key} \\le y.\\mathrm{key} \\le z.\\mathrm{key}" }, { "math_id": 54, "text": "R+4" }, { "math_id": 55, "text": "r+1" }, { "math_id": 56, "text": "h_1" }, { "math_id": 57, "text": "h_2" }, { "math_id": 58, "text": "n_1" }, { "math_id": 59, "text": "n_2" }, { "math_id": 60, "text": "n_1 \\le n_2" }, { "math_id": 61, "text": "Q_1" }, { "math_id": 62, "text": "Q_2" }, { "math_id": 63, "text": "Q_{\\mathrm{merged}} = Q_1 + \\{r_\\mathrm{larger}\\} + Q_2" }, { "math_id": 64, "text": "r_\\mathrm{larger}" }, { "math_id": 65, "text": "+10" }, { "math_id": 66, "text": "+9" }, { "math_id": 67, "text": "n = n_1 + n_2" }, { "math_id": 68, "text": "R_1+3 = 2 \\lg n_1 + 9" }, { "math_id": 69, "text": "R_2+3 = 2 \\lg n_2 + 9" }, { "math_id": 70, "text": "2 \\lg {(2n_1 + 2n_2 - n_1)} + 9" }, { "math_id": 71, "text": "2n - p" }, { "math_id": 72, "text": "O(\\log n)" }, { "math_id": 73, "text": "n-2" }, { "math_id": 74, "text": "n-1" } ]
https://en.wikipedia.org/wiki?curid=76690741
76693494
Abel–Dini–Pringsheim theorem
In calculus, the Abel–Dini–Pringsheim theorem is a convergence test which constructs from a divergent series a series that diverges more slowly, and from a convergent series one that converges more slowly. Consequently, for every convergence test based on a particular series there is a series about which the test is inconclusive.299 For example, the Raabe test is essentially a comparison test based on the family of series whose formula_0th term is formula_1 (with formula_2) and is therefore inconclusive about the series of terms formula_3, which diverges more slowly than the harmonic series. Definitions. The Abel–Dini–Pringsheim theorem can be given for divergent series or convergent series. Helpfully, these definitions are equivalent, and it suffices to prove only one case. This is because applying the Abel–Dini–Pringsheim theorem for divergent series to the series with partial sum formula_4 yields the Abel–Dini–Pringsheim theorem for convergent series. For divergent series. Suppose that formula_5 is a sequence of positive real numbers such that the series formula_6 diverges to infinity. Let formula_7 denote the formula_0th partial sum. The Abel–Dini–Pringsheim theorem for divergent series states that the following conditions hold: formula_8; for every formula_9, formula_10; and, provided that formula_11, also formula_12. Consequently, the series formula_13 converges if formula_14 and diverges if formula_15. When formula_15, this series diverges less rapidly than formula_16. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Proof of the first part. By the assumption, formula_17 is nondecreasing and diverges to infinity. So, for all formula_18 there is formula_19 such that formula_20 Therefore formula_21 and hence formula_22 is not a Cauchy sequence. This implies that the series formula_23 is divergent. Proof of the second part. If formula_24, we have formula_25 for sufficiently large formula_0 and thus formula_26. So, it suffices to consider the case formula_27. For all formula_28 we have the inequality formula_29 This is because, letting formula_30, we have formula_31 formula_32 formula_33 Therefore, formula_37 Proof of the third part. The sequence formula_38 is nondecreasing and diverges to infinity. By the Stolz-Cesaro theorem, formula_39 For convergent series. Suppose that formula_5 is a sequence of positive real numbers such that the series formula_40 converges to a finite number. Let formula_41 denote the formula_42th remainder of the series. According to the Abel–Dini–Pringsheim theorem for convergent series, the following conditions hold: formula_43; for every formula_9, formula_44; and, provided that formula_45, also formula_46. In particular, the series formula_47 is convergent when formula_48, and divergent when formula_49. When formula_48, this series converges more slowly than formula_16. Examples. The series formula_50 is divergent with the formula_0th partial sum being formula_0. By the Abel–Dini–Pringsheim theorem, the series formula_51 converges when formula_14 and diverges when formula_15. Since formula_52 converges to 0, we have the asymptotic approximation formula_53 Now, consider the divergent series formula_54 thus found. Apply the Abel–Dini–Pringsheim theorem but with the partial sum replaced by the asymptotically equivalent sequence formula_55. (It is not hard to verify that this can always be done.) Then we may conclude that the series formula_56 converges when formula_14 and diverges when formula_15. Since formula_3 converges to 0, we have formula_57 Historical notes. The theorem was proved in three parts. Niels Henrik Abel proved a weak form of the first part of the theorem (for divergent series). 
Ulisse Dini proved the complete form and a weak form of the second part. Alfred Pringsheim proved the second part of the theorem. The third part is due to Ernesto Cesàro. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
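The behaviour described in the Examples section can be checked numerically. The following Python sketch is only an illustration with an arbitrarily chosen truncation point: taking formula_16 identically equal to 1, the partial sums are the integers, the series of terms 1/"n" grows like the logarithm of the partial sum, and raising the partial sum to the power 2 gives a convergent series.

```python
import math

# a_n = 1 for every n, so the partial sums are S_n = n (indexing from 1 here).
N = 10**6
divergent = sum(1.0 / n for n in range(1, N + 1))        # partial sum of  a_n / S_n
convergent = sum(1.0 / n**2 for n in range(1, N + 1))    # partial sum of  a_n / S_n^2

print(divergent, math.log(N))      # ~14.39 vs ~13.82: grows like ln S_n (offset by Euler's constant ~0.577)
print(convergent, math.pi**2 / 6)  # ~1.64493: the t = 2 series converges to pi^2/6
```

Repeating the same experiment with the series of terms 1/("n" ln "n") illustrates the second step of the example, producing partial sums that grow like ln ln "n".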
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "1/n^t" }, { "math_id": 2, "text": "t\\in\\mathbb R" }, { "math_id": 3, "text": "1/(n\\ln n)" }, { "math_id": 4, "text": "S_n'=\\frac1{r_n}" }, { "math_id": 5, "text": "(a_n)_{n=0}^\\infty\\subset(0,\\infty)" }, { "math_id": 6, "text": "\\sum_{n=0}^\\infty a_n=\\infty" }, { "math_id": 7, "text": "S_n=a_0+a_1+\\cdots+a_n" }, { "math_id": 8, "text": "\\sum_{n=0}^\\infty\\frac{a_n}{S_n}=\\infty" }, { "math_id": 9, "text": "\\epsilon>0" }, { "math_id": 10, "text": "\\sum_{n=1}^\\infty\\frac{a_n}{S_nS_{n-1}^\\epsilon}<\\infty" }, { "math_id": 11, "text": "\\lim_{n\\to\\infty}\\frac{a_n}{S_n}=0" }, { "math_id": 12, "text": "\\lim_{n\\to\\infty}\\frac{a_0/S_0+a_1/S_1+\\cdots+a_n/S_n}{\\ln S_n}=1" }, { "math_id": 13, "text": "\\sum_{n=0}^\\infty\\frac{a_n}{S_n^t}" }, { "math_id": 14, "text": "t>1" }, { "math_id": 15, "text": "t\\le1" }, { "math_id": 16, "text": "a_n" }, { "math_id": 17, "text": "S_n" }, { "math_id": 18, "text": "n\\in\\{0,1,2,\\dots\\}" }, { "math_id": 19, "text": "k_n\\geq 1" }, { "math_id": 20, "text": "\\frac{S_n}{S_{n+k_n}}<\\frac12" }, { "math_id": 21, "text": "\\frac{a_{n+1}}{S_{n+1}}+\\cdots+\\frac{a_{n+k_n}}{S_{n+k_n}}\\ge\\frac{a_{n+1}+\\cdots+a_{n+k_n}}{S_{n+k_n}}=\\frac{S_{n+k_n}-S_{n}}{S_{n+k_n}}=1-\\frac{S_n}{S_{n+k_n}}>\\frac12" }, { "math_id": 22, "text": "a_0/S_0+\\cdots+a_n/S_n" }, { "math_id": 23, "text": "\\sum_{n=0}^\\infty\\frac{a_n}{S_n}" }, { "math_id": 24, "text": "0<\\epsilon\\le\\epsilon'" }, { "math_id": 25, "text": "S_n\\ge1" }, { "math_id": 26, "text": "a_n/(S_nS_{n-1}^\\epsilon)\\ge a_n/(S_nS_{n-1}^{\\epsilon'})" }, { "math_id": 27, "text": "0<\\epsilon\\le1" }, { "math_id": 28, "text": "x\\in(0,\\infty)" }, { "math_id": 29, "text": "\\epsilon(1-x)\\le1-x^\\epsilon." }, { "math_id": 30, "text": "f(x)=\\epsilon(1-x)-1+x^\\epsilon" }, { "math_id": 31, "text": "f(1)=0" }, { "math_id": 32, "text": "f'(x)=\\epsilon(x^{\\epsilon-1}-1)\\ge0\\qquad(\\forall x\\in(0,1])" }, { "math_id": 33, "text": "f'(x)=\\epsilon(x^{\\epsilon-1}-1)\\le0\\qquad(\\forall x\\in[1,\\infty))." 
}, { "math_id": 34, "text": "g(x)=1-x^\\epsilon" }, { "math_id": 35, "text": "1" }, { "math_id": 36, "text": "y=g'(1)(x-1)=\\epsilon(1-x)" }, { "math_id": 37, "text": "\\begin{align}\n\\sum_{n=1}^\\infty\\frac{a_n}{S_nS_{n-1}^\\epsilon}\n&=\\sum_{n=1}^\\infty\\frac{S_n-S_{n-1}}{S_nS_{n-1}^\\epsilon}\\\\\n&=\\sum_{n=1}^\\infty\\frac1{S_{n-1}^\\epsilon}\\left(1-\\frac{S_{n-1}}{S_n}\\right)\\\\\n&\\le\\sum_{n=1}^\\infty\\frac1{\\epsilon S_{n-1}^\\epsilon}\\left(1-\\left(\\frac{S_{n-1}}{S_n}\\right)^\\epsilon\\right)\\\\\n&=\\sum_{n=1}^\\infty\\frac1\\epsilon\\left(\\frac1{S_{n-1}^\\epsilon}-\\frac1{S_n^\\epsilon}\\right)\\\\\n&=\\frac1{\\epsilon S_0^\\epsilon}\\\\\n&<\\infty\n\\end{align}\n" }, { "math_id": 38, "text": "\\ln S_n" }, { "math_id": 39, "text": "\\begin{align}\n\\lim_{n\\to\\infty}\\frac{a_0/S_0+a_1/S_1+\\cdots+a_n/S_n}{\\ln S_n}\n&=\\lim_{n\\to\\infty}\\frac{a_0/S_0+a_1/S_1+\\cdots+a_n/S_n}{\\ln S_0+\\ln(S_1/S_0)+\\cdots+\\ln(S_n/S_{n-1})}\\\\\n&=\\lim_{n\\to\\infty}\\frac{a_n/S_n}{\\ln(S_n/S_{n-1})}\\\\\n&=\\lim_{n\\to\\infty}\\frac{a_n/S_n}{\\ln(S_n/(S_n-a_n))}\\\\\n&=\\lim_{n\\to\\infty}\\frac{a_n/S_n}{\\ln(1/(1-a_n/S_n))}\\\\\n&=\\lim_{n\\to\\infty}\\left(-\\frac{a_n/S_n}{\\ln(1-a_n/S_n)}\\right)\\\\\n&=1\n\\end{align}\n" }, { "math_id": 40, "text": "\\sum_{n=0}^\\infty a_n<\\infty" }, { "math_id": 41, "text": "r_n=a_n+a_{n+1}+a_{n+2}+\\cdots" }, { "math_id": 42, "text": "(n-1)" }, { "math_id": 43, "text": "\\sum_{n=0}^\\infty\\frac{a_n}{r_n}=\\infty" }, { "math_id": 44, "text": "\\sum_{n=0}^\\infty\\frac{a_n}{r_n^{1-\\epsilon}}<\\infty" }, { "math_id": 45, "text": "\\lim_{n\\to\\infty}\\frac{a_n}{r_n}=0" }, { "math_id": 46, "text": "\\lim_{n\\to\\infty}\\frac{a_0/r_0+a_1/r_1+\\cdots+a_n/r_n}{\\ln r_n}=-1" }, { "math_id": 47, "text": "\\sum_{n=0}^\\infty\\frac{a_n}{r_n^t}" }, { "math_id": 48, "text": "t<1" }, { "math_id": 49, "text": "t\\ge1" }, { "math_id": 50, "text": "\\sum_{n=0}^\\infty1" }, { "math_id": 51, "text": "\\sum_{n=0}^\\infty\\frac1{n^t}" }, { "math_id": 52, "text": "1/n" }, { "math_id": 53, "text": "\\lim_{n\\to\\infty}\\frac{1+1/2+\\cdots+1/n}{\\ln n}=1." }, { "math_id": 54, "text": "\\sum_{n=1}^\\infty\\frac1n" }, { "math_id": 55, "text": "\\ln n" }, { "math_id": 56, "text": "\\sum_{n=1}^\\infty\\frac1{n\\ln^tn}" }, { "math_id": 57, "text": "\\lim_{n\\to\\infty}\\frac{1+1/(2\\ln 2)+\\cdots+1/(n\\ln n)}{\\ln\\ln n}=1." } ]
https://en.wikipedia.org/wiki?curid=76693494
76696530
MMLU
Language model benchmark In artificial intelligence, Measuring Massive Multitask Language Understanding (MMLU) is a benchmark for evaluating the capabilities of large language models. Benchmark. It consists of about 16,000 multiple-choice questions spanning 57 academic subjects including mathematics, philosophy, law, and medicine. It is one of the most commonly used benchmarks for comparing the capabilities of large language models, with over 100 million downloads as of July 2024. The MMLU was released by Dan Hendrycks and a team of researchers in 2020 and was designed to be more challenging than then-existing benchmarks such as General Language Understanding Evaluation (GLUE) on which new language models were achieving better-than-human accuracy. At the time of the MMLU's release, most existing language models performed around the level of random chance (25%), with the best performing GPT-3 model achieving 43.9% accuracy. The developers of the MMLU estimate that human domain-experts achieve around 89.8% accuracy. As of 2024, some of the most powerful language models, such as Claude 3 and GPT-4, were reported to achieve scores in the mid-80s. Examples. The following examples are taken from the "Abstract Algebra" and "International Law" tasks, respectively. The correct answers are marked in boldface: Find all formula_0 in formula_1 such that formula_2 is a field. (A) 0 (B) 1 (C) 2 (D) 3 Would a reservation to the definition of torture in the ICCPR be acceptable in contemporary practice?&lt;br&gt; (A) This is an acceptable reservation if the reserving country’s legislation employs a different definition&lt;br&gt; (B) This is an unacceptable reservation because it contravenes the object and purpose of the ICCPR&lt;br&gt; (C) This is an unacceptable reservation because the definition of torture in the ICCPR is consistent with customary international law&lt;br&gt; (D) This is an acceptable reservation because under general international law States have the right to enter reservations to treaties References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "\\mathbb{Z}_3" }, { "math_id": 2, "text": "\\mathbb{Z}_3[x]/(x^2 + c)" } ]
https://en.wikipedia.org/wiki?curid=76696530
76697957
Zermelo's categoricity theorem
Zermelo's categoricity theorem was proven by Ernst Zermelo in 1930. It states that all models of a certain second-order version of the Zermelo-Fraenkel axioms of set theory are isomorphic to a member of a certain class of sets. Statement. Let formula_0 denote Zermelo-Fraenkel set theory, but with a second-order version of the axiom of replacement formulated as follows: formula_1, namely the second-order universal closure of the axiom schema of replacement.p. 289 Then every model of formula_0 is isomorphic to a set formula_2 in the von Neumann hierarchy, for some inaccessible cardinal formula_3. Original presentation. Zermelo originally considered a version of formula_0 with urelements. Rather than using the modern satisfaction relation formula_4, he defined a "normal domain" to be a collection of sets along with the true formula_5 relation that satisfies formula_0.p. 9 Related results. Dedekind proved that the second-order Peano axioms hold in a model if and only if the model is isomorphic to the true natural numbers.pp. 5–6p. 1 Uzquiano proved that when removing replacement from formula_6 and considering a second-order version of Zermelo set theory with a second-order version of separation, there exist models not isomorphic to any formula_7 for a limit ordinal formula_8.p. 396 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{ZFC}^2" }, { "math_id": 1, "text": "\\forall F\\forall x\\exists y\\forall z(z\\in y \\iff \\exists w(w\\in x\\land z = F(w)))" }, { "math_id": 2, "text": "V_\\kappa" }, { "math_id": 3, "text": "\\kappa" }, { "math_id": 4, "text": "\\vDash" }, { "math_id": 5, "text": "\\in" }, { "math_id": 6, "text": "\\mathsf{ZFC}^2" }, { "math_id": 7, "text": "V_\\delta" }, { "math_id": 8, "text": "\\delta>\\omega" } ]
https://en.wikipedia.org/wiki?curid=76697957
76700066
Rectangular Micro QR Code
Type of matrix barcode Rectangular Micro QR Code (also known as rMQR Code) is a two-dimensional (2D) matrix barcode invented by Denso Wave and standardized in 2022 as ISO/IEC 23941. rMQR Code is designed as a rectangular variation of QR code and has the same parameters and applications as the original QR code, but it is better suited to rectangular areas, with a width-to-height ratio of up to about 19 in the R7x139 version. It can therefore be used in places where 1D barcodes are used, and it can replace Code 128 and Code 39 barcodes with more efficient data encoding. rMQR Code consists of black squares and white square spaces arranged in a square grid on a white background. It has one finder pattern in the top-left corner, the same as in QR Code, and a small finder sub-pattern in the bottom-right corner. It also has alignment and timing patterns to help with recognition. rMQR Code has Reed–Solomon error correction with the ability to restore data from corrupted barcodes. Like other 2D matrix barcodes, it can be read with camera-based readers. Like the original QR code, rMQR Code can encode Unicode characters with the Extended Channel Interpretation feature and byte arrays, and can natively encode Japanese characters in kanji encoding. In the maximal version, R17x139, rMQR Code can encode up to 361 numeric, 219 alphanumeric, 150 bytes or 92 kanji characters. History and application. rMQR Code was invented by the Denso Wave company in 2022 and standardized as ISO/IEC 23941. It is an extension of QR Code for rectangular areas and is designed to replace 1D barcodes. rMQR Code is a novel barcode and is not yet widely used, but it combines QR Code features, such as error correction and Unicode encoding, with 1D barcode features, such as efficient use of rectangular areas. rMQR Code is not yet widely supported by hardware printers and scanners, but it is already supported by barcode libraries. In this way rMQR Code can be used in: Main advantages of rMQR Code are: Barcode design. Rectangular Micro QR Code is designed for better utilization of rectangular areas while keeping all features of QR code. The symbology consists of black squares and white square spaces arranged in a square grid on a white background. Additionally, the barcode has an inverse version with a black background and inverse (luminance) color of the elements. rMQR Code has a minimal height of 7X and a minimal width of 27X, while the maximal height is 17X and the maximal width is 139X. rMQR Code has 32 versions with different combinations of height and width. Reed–Solomon error correction has two levels and allows restoration of 15% to 30% of corrupted data. The rMQR Code symbol is constructed from the following elements: Here are some samples of Rectangular Micro QR Code (rMQR Code): Versions. Rectangular Micro QR Code can be encoded in 32 versions with heights from 7X to 17X and widths from 27X to 139X. All versions have two error correction levels, M and H, which influence the possible encoded data size and the error correction capability. All Rectangular Micro QR Code versions and their features can be seen in the following table: Finder patterns. Rectangular Micro QR Code has three types of finder pattern: The main finder pattern is used to detect the barcode in an image, and its corruption can make the barcode unrecognizable. The finder pattern has a vertical and horizontal size of 1-1-3-1-1. The finder sub-pattern helps to detect the bottom-right corner of the barcode. The finder sub-pattern does not have a guard zone and has a vertical and horizontal size of 1-1-1-1-1.
Corner finder patterns allow detection of the top-right and bottom-left corners and, in some versions of the rMQR Code, can be cut or absent. A corner finder pattern looks like a corner with a white dot in the center, with size 3-3. Alignment and timing patterns. Rectangular Micro QR Code has alignment and timing patterns which help to detect damage from misaligned cells. An alignment pattern is represented as a black rectangle of size 3X surrounding a 1X white dot. The alignment pattern can be absent in some versions, and the number of alignment patterns depends on the version, up to 8 alignment patterns. Timing patterns border the barcode in areas free of finder and alignment patterns, and additionally split the barcode vertically in the area of the alignment patterns. Format Information. Rectangular Micro QR Code places format information in the area of the finder pattern and the finder sub-pattern. Format information is built as an 18-bit sequence containing 6 data bits and 12 error correction bits calculated using the (18, 6) Extended BCH code. Format information is masked with the sequence 011111101010110010 when placed around the finder pattern and with 100000101001111011 for the finder sub-pattern. The first data bit defines the error correction level and the next 5 data bits define the version indicator. Error correction. Rectangular Micro QR Code uses Reed–Solomon error correction and has two error correction levels, M and H, which can restore around 15% and 30% of a damaged barcode area, respectively. All data in the barcode is split into error correction blocks (from 1 to 4 blocks) and error correction codewords are added to every block. After this, the blocks are united into a single stream. rMQR Code uses Reed–Solomon error correction over the finite field formula_0 or GF(2^8), the elements of which are encoded as bytes of 8 bits; the byte formula_1 with a standard numerical value formula_2 encodes the field element formula_3 where formula_4 is taken to be a primitive element satisfying formula_5. The primitive polynomial is formula_6, corresponding to the polynomial number 285, with initial root = 0. Data masking and placement. Rectangular Micro QR Code places data in the same way as QR code, in two-module wide columns commencing at the lower right corner of the symbol and running alternately upwards and downwards from right to left. Before placement, the data is masked with a single type of mask (instead of the 8 types in QR Code): formula_7, where i is the row position and j is the column position. The codeword sequence, as a single bit stream, is placed (starting with the most significant bit) in the two-module wide columns alternately upwards and downwards from the right to the left of the symbol. In each column the bits are placed alternately in the right and left modules, moving upwards or downwards according to the direction of placement and skipping areas occupied by function patterns, changing direction at the top or bottom of the column. Each bit shall always be placed in the first available module position. When the data capacity of the symbol is such that it does not divide exactly into a number of 8-bit symbol characters, the appropriate number of remainder bits (1 to 7) shall be used to fill the symbol capacity. These remainder bits shall always have the value 0 before data masking. Encoding. Rectangular Micro QR Code can encode 361 numeric, 219 alphanumeric, 150 bytes or 92 kanji characters in the maximal version R17x139.
Additionally, it allows encoding Unicode data with the Extended Channel Interpretation feature and encoding GS1 data. rMQR Code can encode data in 8 modes, of which 4 are data encoding modes and 3 are indicator modes such as ECI. Also, every encoding sequence must be completed with a special Terminator mode. rMQR Code usually encodes data in mixed mode, which is a combination of existing modes for better compaction, possibly together with special selectors such as the ECI designator. The number of bits used for the character count indicator (numbers, letters, bytes) in each compaction mode depends on the version. The number of bits required for every version can be seen in the following table. Numeric mode. Rectangular Micro QR Code encodes digits 0–9 in numeric mode. The digit sequence is split into groups of 3 digits, each converted to 10 bits (000–999). A trailing group of 2 or 1 digits is encoded in 7 or 4 bits, respectively. In numeric mode, rMQR Code encodes 001 as the mode indicator, then the character counter, and then the digit sequence converted into bits. Alphanumeric mode. Rectangular Micro QR Code encodes 2 alphanumeric characters from the table into an 11-bit stream with the following formula: formula_8 A final single character is encoded into 6 bits. In alphanumeric mode, rMQR Code encodes 010 as the mode indicator, then the character counter, and then the bit stream representing the encoded characters. Byte mode. Rectangular Micro QR Code adds the mode indicator 011 and a byte counter (version dependent) before the byte stream, converted into an 8-bit sequence per byte. Kanji mode. Rectangular Micro QR Code encodes characters from the 2-byte JIS X 0208 character set into 13 bits with the following rules: rMQR Code adds the mode indicator 100 and a character counter before the encoded kanji sequence. Unicode encoding with ECI. Rectangular Micro QR Code encodes Unicode characters with Extended Channel Interpretation. First it encodes an ECI designator which defines the encoding charset. After this, it encodes the byte array of Unicode characters as a byte stream, possibly with a mix of numeric, text and byte modes. The default ECI designator is \000003 (ISO/IEC 8859-1). The ECI designator is encoded with the mode indicator 111 and an ECI assignment number which can be encoded in 8, 16 or 24 bits by the rules from the following table. GS1 encoding. Rectangular Micro QR Code can encode GS1 data with FNC1 in first position. The encoding mode indicator 101 switches the barcode symbol into GS1 Application Identifiers mode. FNC1 cannot be used as a separator character as in the Code 128 symbol. Instead, the % character should be used in alphanumeric mode or GS (0x1D) in byte mode. To encode the % character itself in alphanumeric mode, the character should be doubled (%%), and after decoding it should be transmitted as a single % character. FNC1 in second position. FNC1 in second position now has only historical value and is not used. It was used to encode a (now obsolete) mode identifier as the first data codeword in Code 128, when the FNC1 character is encoded in the second codeword (second position). A more detailed description can be found in ISO/IEC 15417 Annex B. Rectangular Micro QR Code encodes FNC1 in second position as the mode indicator 111, an 8-bit application identifier (as defined by AIM), and any other mode or modes after this. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
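As an illustration of the bit-level rules described above (groups of three digits packed into 10 bits in numeric mode, the alphanumeric pair formula V = 45*C1 + C2, and the single data mask (i/2 + j/3) mod 9 = 0), here is a minimal Python sketch. It is not a complete rMQR encoder: mode indicators, character count fields, error correction and module placement are omitted, and the 45-character alphanumeric table is assumed to be the same one used by ordinary QR Code.

```python
# Sketch of rMQR numeric-mode and alphanumeric-mode bit packing plus the
# single data-mask predicate, as described in the text above. Illustration
# only; not a full encoder.

ALNUM = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"  # assumed QR alphanumeric table

def encode_numeric(digits: str) -> str:
    """Groups of 3 digits -> 10 bits; trailing 2 digits -> 7 bits; 1 digit -> 4 bits."""
    bits = ""
    for i in range(0, len(digits), 3):
        group = digits[i:i + 3]
        width = {3: 10, 2: 7, 1: 4}[len(group)]
        bits += format(int(group), f"0{width}b")
    return bits

def encode_alphanumeric(text: str) -> str:
    """Pairs of characters -> 11 bits using V = 45*C1 + C2; a final single character -> 6 bits."""
    bits = ""
    for i in range(0, len(text), 2):
        pair = text[i:i + 2]
        if len(pair) == 2:
            v = 45 * ALNUM.index(pair[0]) + ALNUM.index(pair[1])
            bits += format(v, "011b")
        else:
            bits += format(ALNUM.index(pair), "06b")
    return bits

def masked(i: int, j: int) -> bool:
    """True if the module at row i, column j is inverted by the data mask (integer division assumed)."""
    return (i // 2 + j // 3) % 9 == 0

print(encode_numeric("01234567"))    # 3+3+2 digits -> 10+10+7 bits
print(encode_alphanumeric("AC-42"))  # 2+2+1 chars  -> 11+11+6 bits
```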
[ { "math_id": 0, "text": "\\mathbb{F}_{256}" }, { "math_id": 1, "text": "b_7b_6b_5b_4b_3b_2b_1b_0" }, { "math_id": 2, "text": "\\textstyle\\sum_{i=0}^7 b_i 2^i" }, { "math_id": 3, "text": "\\textstyle\\sum_{i=0}^7 b_i \\alpha^i" }, { "math_id": 4, "text": " \\alpha \\in \\mathbb{F}_{256}" }, { "math_id": 5, "text": "\\alpha^8 + \\alpha^4 + \\alpha^3 + \\alpha^2 + 1 = 0" }, { "math_id": 6, "text": "x^8 + x^4 + x^3 + x^2 + 1 " }, { "math_id": 7, "text": "((i / 2) + (j / 3) )\\pmod{9} = 0" }, { "math_id": 8, "text": "V = 45 * C_1 + C_2" } ]
https://en.wikipedia.org/wiki?curid=76700066
767065
Rain fade
Rain fade refers primarily to the absorption of a microwave radio frequency (RF) signal by atmospheric rain, snow, or ice, and losses which are especially prevalent at frequencies above 11 GHz. It also refers to the degradation of a signal caused by the electromagnetic interference of the leading edge of a storm front. Rain fade can be caused by precipitation at the uplink or downlink location. It does not need to be raining at a location for it to be affected by rain fade, as the signal may pass through precipitation many miles away, especially if the satellite dish has a low look angle. From 5% to 20% of rain fade or satellite signal attenuation may also be caused by rain, snow, or ice on the uplink or downlink antenna reflector, radome, or feed horn. Rain fade is not limited to satellite uplinks or downlinks, as it can also affect terrestrial point-to-point microwave links (those on the Earth's surface). Rain fade is usually estimated experimentally and can also be calculated theoretically using the scattering theory of raindrops. The raindrop size distribution (DSD) is an important consideration for studying rain fade characteristics. Various mathematical forms, such as gamma, lognormal or exponential forms, are usually used to model the DSD. Mie or Rayleigh scattering theory with a point-matching or T-matrix approach is used to calculate the scattering cross section and the specific rain attenuation. Since rain is a non-homogeneous process in both time and space, specific attenuation varies with location, time and rain type. Total rain attenuation also depends on the spatial structure of the rain field. The horizontal as well as vertical extension of rain again varies for different rain types and locations. The limit of the vertical rain region is usually assumed to coincide with the 0° isotherm and is called the rain height. The melting layer height is also used as the limit of the rain region and can be estimated from the bright band signature of radar reflectivity. The horizontal rain structure is assumed to have a cellular form, called a rain cell. Rain cell sizes can vary from a few hundred meters to several kilometers, depending on the rain type and location. The existence of very small rain cells has recently been observed in tropical rain. Rain attenuation on satellite communication links can be predicted using rain attenuation prediction models, which lead to a suitable selection of the fade mitigation technique (FMT). These prediction models require rainfall rate data, which can be obtained either from predicted rainfall maps, which may yield inaccurate attenuation predictions, or from actual measured rainfall data, which gives a more accurate prediction and hence a more appropriate selection of the FMT. The altitude of the earth station above sea level is also an essential factor affecting rain attenuation performance. Satellite system designers and channel providers should account for rain impairments when setting up their channels. Possible ways to overcome the effects of rain fade are site diversity, uplink power control, variable rate encoding, and receiving antennas larger than the size required for normal weather conditions. Uplink power control. The simplest way to compensate for the rain fade effect in satellite communications is to increase the transmission power: this dynamic fade countermeasure is called uplink power control (UPC).
Until recently, uplink power control had limited use, since it required more powerful transmitters, ones that could normally run at lower levels and be increased in power level on command (i.e. automatically). Also, uplink power control could not provide very large signal margins without compressing the transmitting amplifier. Modern amplifiers coupled with advanced uplink power control systems that offer automatic controls to prevent transponder saturation make uplink power control systems an effective, affordable and easy solution to rain fade in satellite signals. Parallel fail-over links. In terrestrial point-to-point microwave systems ranging from 11 GHz to 80 GHz, a parallel backup link can be installed alongside a rain fade prone higher bandwidth connection. In this arrangement, a primary link such as an 80 GHz 1 Gbit/s full duplex microwave bridge may be calculated to have a 99.9% availability rate over the period of one year. The calculated 99.9% availability rate means that the link may be down for a cumulative total of ten or more hours per year as the peaks of rain storms pass over the area. A secondary lower bandwidth link such as a 5.8 GHz based 100 Mbit/s bridge may be installed parallel to the primary link, with routers on both ends controlling automatic failover to the 100 Mbit/s bridge when the primary 1 Gbit/s link is down due to rain fade. Using this arrangement, high frequency point-to-point links (23 GHz+) may be installed to service locations many kilometers farther than could be served with a single link requiring 99.99% uptime over the course of one year. CCIR interpolation formula. It is possible to extrapolate the cumulative attenuation distribution at a given location by using the CCIR interpolation formula: A_p = A_0.01 · 0.12 · p^(−(0.546 − 0.0043 log10 p)), where A_p is the attenuation in dB exceeded for a p percentage of the time and A_0.01 is the attenuation exceeded for 0.01% of the time. ITU-R frequency scaling formula. According to the ITU-R, rain attenuation statistics can be scaled in frequency in the range 7 to 55 GHz by the formula formula_0 where formula_1 and "f" is the frequency in GHz.
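A small sketch of the two formulas quoted above, with the coefficients taken directly from the text; it is meant as an illustration rather than a validated implementation of the CCIR/ITU-R recommendations, and the example input values are arbitrary.

```python
# CCIR interpolation and ITU-R frequency scaling of rain attenuation,
# using the coefficients as quoted in the text above.
import math

def ccir_attenuation(a_001: float, p: float) -> float:
    """A_p: attenuation (dB) exceeded for p% of the time, extrapolated from A_0.01."""
    return a_001 * 0.12 * p ** -(0.546 - 0.0043 * math.log10(p))

def itu_r_frequency_scaling(a1: float, f1: float, f2: float) -> float:
    """Scale attenuation A1 (dB) measured at f1 (GHz) to f2 (GHz), valid 7-55 GHz."""
    b = lambda f: f ** 2 / (1 + 1e-4 * f ** 2)
    b1, b2 = b(f1), b(f2)
    exponent = 1 - 1.12e-3 * math.sqrt(b2 / b1) * (b1 * a1) ** 0.55
    return a1 * (b2 / b1) ** exponent

print(ccir_attenuation(a_001=10.0, p=0.1))            # attenuation exceeded 0.1% of the time
print(itu_r_frequency_scaling(a1=10.0, f1=14.0, f2=30.0))
```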
[ { "math_id": 0, "text": "\\frac{A_2}{A_1} = \\left(\\frac{b_2}{b_1}\\right) ^ {1-1.12 \\cdot 10^{-3}\\sqrt{b_2/b_1}(b_1A_1)^{0.55}}" }, { "math_id": 1, "text": "b_i = \\frac{f_i^2}{1+10^{-4} f_i^2}" } ]
https://en.wikipedia.org/wiki?curid=767065
76711561
Parking function
Generalization of permutations Parking functions are a generalization of permutations studied in combinatorics, a branch of mathematics. Definition and applications. A parking function of length formula_0 is a sequence of formula_0 positive integers, each in the range from 1 to formula_0, with the property that, for every formula_1 up to the sequence length, the sequence contains at least formula_1 values that are at most formula_1. That is, it must contain at least one 1, at least two values that are 1 or 2, at least three values that are 1, 2, or 3, etc. Equivalently, if the sequence is sorted, then for each formula_1 in the same range, the formula_1th value of the sorted sequence is at most formula_1. For instance, there are 16 parking functions of length three: (1,2,3), (2,3,1), (3,1,2), (3,2,1), (2,1,3), (1,3,2), (1,1,2), (1,2,1), (2,1,1), (1,1,3), (1,3,1), (3,1,1), (1,2,2), (2,1,2), (2,2,1), (1,1,1). The name is explained by the following thought experiment. A sequence of formula_0 drivers in cars travel down a one-way street having formula_0 parking spaces, with each driver having a preferred parking space. Each driver travels until reaching their preferred space, and then parks in the first available spot. A parking function describes preferences for which all cars can park. For instance, the parking function (2,1,2,1) describes preferences for which the first and third drivers both prefer the second space, while the other two drivers both prefer the first space. The first driver parks in space 2, the second in space 1, and the third in space 3 (because space 2 is taken). The fourth driver starts looking for a free space at space 1, but doesn't find it until space 4; all previous spaces were taken. The sequence (3,3,1,3) is not a parking function: too many drivers prefer space 3, so the last driver starts looking for a space after already passing the only free space, and will be unable to park. Parking functions also have a more serious application in the study of hash tables based on linear probing, a strategy for placing keys into a hash table that closely resembles the one-way parking strategy for cars. Combinatorial enumeration. The number of parking functions of length formula_0 is exactly formula_2 For instance for formula_3 this number is formula_4. John Riordan credits to Henry O. Pollak the following argument for this formula. On a circular one-way road with formula_5 spaces, each of formula_0 cars will always be able to park, no matter what preference each driver has for their starting space. There are formula_6 choices for the preferences, each of which leaves one vacant space. All spaces are symmetric to each other, so by symmetry, there are formula_7 choices for preferences that leave space formula_5 as the vacant space. These choices are exactly the parking functions. The parking functions can also be placed in bijection with the spanning trees on a complete graph with formula_5 vertices, one of which is designated as the root. This bijection, together with Cayley's formula for the number of spanning trees, again shows that there are formula_7 parking functions. Much research has studied the number of parking functions of a special form. As a very simple special case, the parking functions that allow each car to park in its own preferred spot are exactly the permutations, counted by the factorials. The parking functions that allow each car to park either in its preferred spot or in the next spot are counted by the ordered Bell numbers. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
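The definitions above are easy to check by brute force. The following sketch (illustrative code, not from the references) tests the sorted-sequence criterion, simulates the one-way-street parking process, and verifies the (n+1)^(n-1) count for small n.

```python
# Parking functions: sorted-sequence test, parking simulation, and counting.
from itertools import product

def is_parking_function(prefs):
    """Sorted k-th value (1-indexed) must be at most k."""
    return all(v <= k for k, v in enumerate(sorted(prefs), start=1))

def everyone_parks(prefs):
    """Simulate drivers parking in spaces 1..n; True if all drivers find a spot."""
    n = len(prefs)
    taken = [False] * (n + 1)            # index 0 unused
    for p in prefs:
        spot = p
        while spot <= n and taken[spot]:
            spot += 1
        if spot > n:
            return False
        taken[spot] = True
    return True

for n in range(1, 5):
    count = sum(is_parking_function(seq) for seq in product(range(1, n + 1), repeat=n))
    assert count == (n + 1) ** (n - 1)
    print(n, count)                       # 1, 3, 16, 125

assert is_parking_function((2, 1, 2, 1)) and everyone_parks((2, 1, 2, 1))
assert not is_parking_function((3, 3, 1, 3))
```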
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "(n+1)^{n-1}." }, { "math_id": 3, "text": "n=3" }, { "math_id": 4, "text": "4^2=16" }, { "math_id": 5, "text": "n+1" }, { "math_id": 6, "text": "(n+1)^n" }, { "math_id": 7, "text": "(n+1)^{n-1}" } ]
https://en.wikipedia.org/wiki?curid=76711561
76715889
Property graph
Property Graphs. The data model of "property graphs", "labeled property graphs", or "attributed graphs" has emerged since the early 2000s as a common denominator of various models of graph-oriented databases. It can be defined informally as follows: Properties take the form of key-value pairs, as used for example in JSON. Keys are defined by character strings. Values are either numeric or also character strings. These properties fall within the usual definition of attributes as understood in entity-attribute-value or object-oriented modeling. This is why the phrase "attributed graph" is relevant. Unlike what is the case with RDF graphs, properties are not arcs of the graph proper. This is another reason why it would be preferable to call them "attributed graphs", or graphs "with" properties, rather than "property graphs", which is misleading. Relationships are represented by arcs of the graph. These are often called edges, even though, strictly speaking, edges belong in undirected graphs. Arcs "must" have an identifier, a source node and a target node, and "may" have one or more attributes/properties in the previous sense. Formal definition. Building upon widely adopted definitions, a property graph/attributed graph can be defined by a 7-tuple (N, A, P, V, α, formula_0, π), where N is the set of nodes, A the set of arcs, P the set of property keys, V the set of values, formula_1 assigns to each arc its source and target nodes, formula_0 is a binary relation associating nodes and arcs with property keys, and formula_2 is a partial function assigning a value to each such association. A complementary construct, used in several implementations of property graphs with commercial graph databases, is that of "labels", which can be associated both with nodes and arcs of the graph. Labels have a practical rather than theoretical justification, as they were originally intended for users of Entity-Relationship models and relational databases, to facilitate the import of their legacy data sets into graph databases: labels make it possible to associate the same identifier (that of the relational table, or of the ER entity) to all graph nodes which would correspond to the different rows of this relational table, or to instances of the same generic entity/class. With the proposed definition, these labels could in fact be viewed as attributes defined only by a key, without an associated value (this is why formula_0 is defined separately as a binary relation, and π as a partial function). The basic definition thus becomes much clearer and simpler, and satisfies a principle of parsimony. Alternatively, and more consistently, labels can be defined through type graphs, as special types associated with nodes and arcs. Relations with other models. Graph theory and classical graph algorithmics. Attributed graphs, as defined above, are especially useful and relevant in that they provide an "umbrella" hypernymic concept (i.e. a common generalization) for several key graph-theoretic models, which have long since been widely used in classical graph theory and algorithmics. Knowledge graphs and RDF graphs. Knowledge graphs, usually represented as RDF graphs, are in fact hybrid labeled graphs, whose node labels correspond to instance identifiers (IRIs) or literals, and whose edge labels identify types (not instances) of predicates. They have now acquired a visibility which tends to obscure the longer-established use of graphs as a "direct" model for systems of all kinds.
Attributed graphs are, by their versatility and expressivity, the best adapted for this type of modeling, where graphs which can rightly be called cyber-physical do not merely capture weakly structured knowledge "about" a physical system, as would be the case with a knowledge graph, but attempt to directly capture the "structure" of a physical system, as matched by the connectivity structure of the graph. In contrast, an RDF graph would mix structural relationships with attached properties, and category/class information with instances/individuals, drowning out the structure. The expressivity of attributed graphs, on the level of higher order logic, is also far above that of RDF graphs, which is limited to first order logic. Properties of relationships, which are at the heart of the attributed graph model, require a very cumbersome reification process to be expressed in RDF. Standardization. NGSI-LD. The NGSI-LD data model specified by ETSI was the first attempt to standardize property graphs under a de jure standards body. Compared to the basic model defined here, the NGSI-LD meta-model adds a formal definition of basic categories (entity, relation, property) on the basis of semantic web standards (OWL, RDFS, RDF), which makes it possible to convert all data represented in NGSI-LD into RDF datasets, through JSON-LD serialization. NGSI-LD entities, relations and properties are thus defined by reference to types which can themselves be defined by reference to ontologies, thesauri, taxonomies or microdata vocabularies, for the purpose of ensuring the semantic interoperability of the corresponding information. GQL. The ISO/IEC JTC1/SC32/WG3 group of ISO, which established the SQL standard, is in the process of specifying a new query language suitable for graph-oriented databases, called GQL (Graph Query Language). This standard will include the specification of a property graph data model, which should be along the lines of the basic model described here, possibly adding notions of labels, types, and schemas. Type graphs and schemas. Graph-oriented databases are, compared to relational databases, touted for "not" requiring the prior definition of a schema before starting to populate the database. This is desirable and suitable for environments and applications where one operates under an open world assumption, such as the description of complex systems and systems of systems, characterized by bottom-up organization and evolution, not the control of a single stakeholder. However, even in such environments, it may be necessary to constrain the representation of specific subsets of the information entered into the database, in a way that may resemble a traditional database schema, while keeping the overall graph open to the addition of unforeseen data or configurations. For example, the description of a smart city falls under the open world assumption and will be described by the upper level of a graph database, without a schema. However, specific technical sub-systems of this city remain top-down, closed-world systems managed by a single operator, who may impose a stronger structuring of information, as customarily represented by a schema. The notions of "type graphs" and schemas make it possible to meet this need, with types playing a role similar to that of "labels" in classical graph databases, but with the added possibility of specifying relations between these types and constraining them by keys and properties.
The type graph is itself a property graph, linked by a graph homomorphism to the instance graphs that use the types it defines, and playing a role similar to that of a schema in a data definition language. The ontologies, thesauri or taxonomies used to reference NGSI-LD types are also defined by graphs, but these are RDF graphs rather than property graphs, and they typically have broader scopes than database schemas. The complementary use, possible with NGSI-LD types, of type graphs "and" of references to external ontologies makes it possible to enforce strong data structuring and consistency, while affording semantic grounding and interoperability.
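A minimal in-memory sketch of the attributed-graph model defined above: identified nodes, identified arcs with a source and a target, and key-value properties attachable to both. The class and method names are illustrative and are not taken from any particular graph database API or from the GQL or NGSI-LD specifications.

```python
# Minimal attributed (property) graph: nodes, arcs with source/target, and
# key-value properties on both nodes and arcs.
from dataclasses import dataclass, field
from typing import Dict, Union

Value = Union[str, float, int]

@dataclass
class Element:
    id: str
    properties: Dict[str, Value] = field(default_factory=dict)   # the (key, value) pairs

@dataclass
class Arc(Element):
    source: str = ""     # id of the source node (alpha maps each arc to this pair)
    target: str = ""     # id of the target node

class PropertyGraph:
    def __init__(self):
        self.nodes: Dict[str, Element] = {}
        self.arcs: Dict[str, Arc] = {}

    def add_node(self, node_id: str, **props: Value) -> Element:
        self.nodes[node_id] = Element(node_id, dict(props))
        return self.nodes[node_id]

    def add_arc(self, arc_id: str, source: str, target: str, **props: Value) -> Arc:
        self.arcs[arc_id] = Arc(arc_id, dict(props), source, target)
        return self.arcs[arc_id]

g = PropertyGraph()
g.add_node("n1", name="sensor-12", type="TemperatureSensor")
g.add_node("n2", name="room-3")
g.add_arc("a1", "n1", "n2", relation="locatedIn", since="2023-01-01")  # properties on the relationship itself
```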
[ { "math_id": 0, "text": " \\kappa " }, { "math_id": 1, "text": "\\alpha\\colon A \\to N\\times N" }, { "math_id": 2, "text": "\\pi\\colon \\kappa \\to V" } ]
https://en.wikipedia.org/wiki?curid=76715889
767253
Kosambi–Karhunen–Loève theorem
Theory of stochastic processes In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem (after Damodar Dharmananda Kosambi) states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as Hotelling transform and eigenvector transform, and is closely related to principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields. There exist many such expansions of a stochastic process: if the process is indexed over ["a", "b"], any orthonormal basis of "L"2(["a", "b"]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error. In contrast to a Fourier series where the coefficients are fixed numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen–Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. One can think that the Karhunen–Loève transform adapts to the process in order to produce the best possible basis for its expansion. In the case of a "centered" stochastic process {"Xt"}"t" ∈ ["a", "b"] ("centered" means E["Xt"] 0 for all "t" ∈ ["a", "b"]) satisfying a technical continuity condition, X admits a decomposition formula_0 where Zk are pairwise uncorrelated random variables and the functions ek are continuous real-valued functions on ["a", "b"] that are pairwise orthogonal in "L"2(["a", "b"]). It is therefore sometimes said that the expansion is "bi-orthogonal" since the random coefficients Zk are orthogonal in the probability space while the deterministic functions ek are orthogonal in the time domain. The general case of a process Xt that is not centered can be brought back to the case of a centered process by considering "Xt" − E["Xt"] which is a centered process. Moreover, if the process is Gaussian, then the random variables Zk are Gaussian and stochastically independent. This result generalizes the "Karhunen–Loève transform". An important example of a centered real stochastic process on [0, 1] is the Wiener process; the Karhunen–Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions. The above expansion into uncorrelated random variables is also known as the "Karhunen–Loève expansion" or "Karhunen–Loève decomposition". The empirical version (i.e., with the coefficients computed from a sample) is known as the "Karhunen–Loève transform" (KLT), "principal component analysis", "proper orthogonal decomposition (POD)", "empirical orthogonal functions" (a term used in meteorology and geophysics), or the "Hotelling transform". formula_1 formula_2 formula_3 Formulation. The square-integrable condition formula_4 is logically equivalent to formula_5 being finite for all formula_6. formula_7 Since "T""K""X" is a linear operator, it makes sense to talk about its eigenvalues "λk" and eigenfunctions "e""k", which are found solving the homogeneous Fredholm integral equation of the second kind formula_8 Statement of the theorem. Theorem. 
Let Xt be a zero-mean square-integrable stochastic process defined over a probability space (Ω, "F", P) and indexed over a closed and bounded interval ["a", "b"], with continuous covariance function "K""X"("s", "t"). Then "K""X"("s,t") is a Mercer kernel and letting "e""k" be an orthonormal basis on "L"2(["a", "b"]) formed by the eigenfunctions of "T""K""X" with respective eigenvalues λk, Xt admits the following representation formula_9 where the convergence is in "L"2, uniform in "t" and formula_10 Furthermore, the random variables "Z""k" have zero-mean, are uncorrelated and have variance "λk" formula_11 Note that by generalizations of Mercer's theorem we can replace the interval ["a", "b"] with other compact spaces "C" and the Lebesgue measure on ["a", "b"] with a Borel measure whose support is "C". formula_12 formula_9 where the coefficients (random variables) "Z""k" are given by the projection of "X""t" on the respective eigenfunctions formula_13 formula_14 where we have used the fact that the "e""k" are eigenfunctions of "T""K""X" and are orthonormal. formula_15 Then: formula_16 which goes to 0 by Mercer's theorem. Properties of the Karhunen–Loève transform. Special case: Gaussian distribution. Since the limit in the mean of jointly Gaussian random variables is jointly Gaussian, and jointly Gaussian random (centered) variables are independent if and only if they are orthogonal, we can also conclude: Theorem. The variables Zi have a joint Gaussian distribution and are stochastically independent if the original process {"Xt"}"t" is Gaussian. In the Gaussian case, since the variables Zi are independent, we can say more: formula_17 almost surely. The Karhunen–Loève transform decorrelates the process. This is a consequence of the independence of the Zk. The Karhunen–Loève expansion minimizes the total mean square error. In the introduction, we mentioned that the truncated Karhunen–Loeve expansion was the best approximation of the original process in the sense that it reduces the total mean-square error resulting of its truncation. Because of this property, it is often said that the KL transform optimally compacts the energy. More specifically, given any orthonormal basis {"f""k"} of "L"2(["a", "b"]), we may decompose the process "Xt" as: formula_18 where formula_19 and we may approximate "X""t" by the finite sum formula_20 for some integer "N". Claim. Of all such approximations, the KL approximation is the one that minimizes the total mean square error (provided we have arranged the eigenvalues in decreasing order). &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Consider the error resulting from the truncation at the "N"-th term in the following orthonormal expansion: formula_21 The mean-square error "ε""N"2("t") can be written as: formula_22 We then integrate this last equality over ["a", "b"]. The orthonormality of the "fk" yields: formula_23 The problem of minimizing the total mean-square error thus comes down to minimizing the right hand side of this equality subject to the constraint that the "f""k" be normalized. We hence introduce βk, the Lagrangian multipliers associated with these constraints, and aim at minimizing the following function: formula_24 Differentiating with respect to "f""i"("t") (this is a functional derivative) and setting the derivative to 0 yields: formula_25 which is satisfied in particular when formula_26 In other words, when the "f""k" are chosen to be the eigenfunctions of "T""K""X", hence resulting in the KL expansion. Explained variance. 
An important observation is that since the random coefficients "Z""k" of the KL expansion are uncorrelated, the Bienaymé formula asserts that the variance of "X""t" is simply the sum of the variances of the individual components of the sum: formula_27 Integrating over ["a", "b"] and using the orthonormality of the "e""k", we obtain that the total variance of the process is: formula_28 In particular, the total variance of the "N"-truncated approximation is formula_29 As a result, the "N"-truncated expansion explains formula_30 of the variance; and if we are content with an approximation that explains, say, 95% of the variance, then we just have to determine an formula_31 such that formula_32 The Karhunen–Loève expansion has the minimum representation entropy property. Given a representation of formula_33, for some orthonormal basis formula_34 and random formula_35, we let formula_36, so that formula_37. We may then define the representation entropy to be formula_38. Then we have formula_39, for all choices of formula_40. That is, the KL-expansion has minimal representation entropy. Proof: Denote the coefficients obtained for the basis formula_41 as formula_42, and for formula_34 as formula_43. Choose formula_44. Note that since formula_45 minimizes the mean squared error, we have that formula_46 Expanding the right hand size, we get: formula_47 Using the orthonormality of formula_34, and expanding formula_48 in the formula_34 basis, we get that the right hand size is equal to: formula_49 We may perform identical analysis for the formula_41, and so rewrite the above inequality as: formula_50 Subtracting the common first term, and dividing by formula_51, we obtain that: formula_52 This implies that: formula_53 Linear Karhunen–Loève approximations. Consider a whole class of signals we want to approximate over the first M vectors of a basis. These signals are modeled as realizations of a random vector "Y"["n"] of size N. To optimize the approximation we design a basis that minimizes the average approximation error. This section proves that optimal bases are Karhunen–Loeve bases that diagonalize the covariance matrix of Y. The random vector Y can be decomposed in an orthogonal basis formula_54 as follows: formula_55 where each formula_56 is a random variable. The approximation from the first "M" ≤ "N" vectors of the basis is formula_57 The energy conservation in an orthogonal basis implies formula_58 This error is related to the covariance of Y defined by formula_59 For any vector "x"["n"] we denote by K the covariance operator represented by this matrix, formula_60 The error "ε"["M"] is therefore a sum of the last "N" − "M" coefficients of the covariance operator formula_61 The covariance operator K is Hermitian and Positive and is thus diagonalized in an orthogonal basis called a Karhunen–Loève basis. The following theorem states that a Karhunen–Loève basis is optimal for linear approximations. Theorem (Optimality of Karhunen–Loève basis). Let K be a covariance operator. For all "M" ≥ 1, the approximation error formula_62 is minimum if and only if formula_63 is a Karhunen–Loeve basis ordered by decreasing eigenvalues. formula_64 Non-Linear approximation in bases. Linear approximations project the signal on "M" vectors a priori. The approximation can be made more precise by choosing the "M" orthogonal vectors depending on the signal properties. This section analyzes the general performance of these non-linear approximations. 
A signal formula_65 is approximated with M vectors selected adaptively in an orthonormal basis for formula_66 formula_67 Let formula_68 be the projection of f over M vectors whose indices are in IM: formula_69 The approximation error is the sum of the remaining coefficients formula_70 To minimize this error, the indices in IM must correspond to the M vectors having the largest inner product amplitude formula_71 These are the vectors that best correlate f. They can thus be interpreted as the main features of f. The resulting error is necessarily smaller than the error of a linear approximation which selects the M approximation vectors independently of f. Let us sort formula_72 in decreasing order formula_73 The best non-linear approximation is formula_74 It can also be written as inner product thresholding: formula_75 with formula_76 The non-linear error is formula_77 this error goes quickly to zero as M increases, if the sorted values of formula_78 have a fast decay as k increases. This decay is quantified by computing the formula_79 norm of the signal inner products in B: formula_80 The following theorem relates the decay of "ε"["M"] to formula_81 Theorem (decay of error). If formula_82 with "p" &lt; 2 then formula_83 and formula_84 Conversely, if formula_85 then formula_86 for any "q" &gt; "p". Non-optimality of Karhunen–Loève bases. To further illustrate the differences between linear and non-linear approximations, we study the decomposition of a simple non-Gaussian random vector in a Karhunen–Loève basis. Processes whose realizations have a random translation are stationary. The Karhunen–Loève basis is then a Fourier basis and we study its performance. To simplify the analysis, consider a random vector "Y"["n"] of size "N" that is random shift modulo "N" of a deterministic signal "f"["n"] of zero mean formula_87 formula_88 The random shift "P" is uniformly distributed on [0, "N" − 1]: formula_89 Clearly formula_90 and formula_91 Hence formula_92 Since RY is N periodic, Y is a circular stationary random vector. The covariance operator is a circular convolution with RY and is therefore diagonalized in the discrete Fourier Karhunen–Loève basis formula_93 The power spectrum is Fourier transform of "R""Y": formula_94 Example: Consider an extreme case where formula_95. A theorem stated above guarantees that the Fourier Karhunen–Loève basis produces a smaller expected approximation error than a canonical basis of Diracs formula_96. Indeed, we do not know a priori the abscissa of the non-zero coefficients of "Y", so there is no particular Dirac that is better adapted to perform the approximation. But the Fourier vectors cover the whole support of Y and thus absorb a part of the signal energy. formula_97 Selecting higher frequency Fourier coefficients yields a better mean-square approximation than choosing a priori a few Dirac vectors to perform the approximation. The situation is totally different for non-linear approximations. If formula_98 then the discrete Fourier basis is extremely inefficient because f and hence Y have an energy that is almost uniformly spread among all Fourier vectors. In contrast, since f has only two non-zero coefficients in the Dirac basis, a non-linear approximation of Y with "M" ≥ 2 gives zero error. Principal component analysis. We have established the Karhunen–Loève theorem and derived a few properties thereof. 
We also noted that one hurdle in its application was the numerical cost of determining the eigenvalues and eigenfunctions of its covariance operator through the Fredholm integral equation of the second kind formula_99 However, when applied to a discrete and finite process formula_100, the problem takes a much simpler form and standard algebra can be used to carry out the calculations. Note that a continuous process can also be sampled at "N" points in time in order to reduce the problem to a finite version. We henceforth consider a random "N"-dimensional vector formula_101. As mentioned above, "X" could contain "N" samples of a signal but it can hold many more representations depending on the field of application. For instance it could be the answers to a survey or economic data in an econometrics analysis. As in the continuous version, we assume that "X" is centered, otherwise we can let formula_102 (where formula_103 is the mean vector of "X") which is centered. Let us adapt the procedure to the discrete case. Covariance matrix. Recall that the main implication and difficulty of the KL transformation is computing the eigenvectors of the linear operator associated to the covariance function, which are given by the solutions to the integral equation written above. Define Σ, the covariance matrix of "X", as an "N" × "N" matrix whose elements are given by: formula_104 Rewriting the above integral equation to suit the discrete case, we observe that it turns into: formula_105 where formula_106 is an "N"-dimensional vector. The integral equation thus reduces to a simple matrix eigenvalue problem, which explains why the PCA has such a broad domain of applications. Since Σ is a positive definite symmetric matrix, it possesses a set of orthonormal eigenvectors forming a basis of formula_107, and we write formula_108 this set of eigenvalues and corresponding eigenvectors, listed in decreasing values of λi. Let also Φ be the orthonormal matrix consisting of these eigenvectors: formula_109 Principal component transform. It remains to perform the actual KL transformation, called the "principal component transform" in this case. Recall that the transform was found by expanding the process with respect to the basis spanned by the eigenvectors of the covariance function. In this case, we hence have: formula_110 In a more compact form, the principal component transform of "X" is defined by: formula_111 The "i"-th component of "Y" is formula_112, the projection of "X" on formula_113 and the inverse transform "X" Φ"Y" yields the expansion of X on the space spanned by the formula_113: formula_114 As in the continuous case, we may reduce the dimensionality of the problem by truncating the sum at some formula_115 such that formula_116 where α is the explained variance threshold we wish to set. We can also reduce the dimensionality through the use of multilevel dominant eigenvector estimation (MDEE). Examples. The Wiener process. There are numerous equivalent characterizations of the Wiener process which is a mathematical formalization of Brownian motion. Here we regard it as the centered standard Gaussian process W"t" with covariance function formula_117 We restrict the time domain to ["a", "b"]=[0,1] without loss of generality. The eigenvectors of the covariance kernel are easily determined. 
These are formula_118 and the corresponding eigenvalues are formula_119 &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof In order to find the eigenvalues and eigenvectors, we need to solve the integral equation: formula_120 differentiating once with respect to "t" yields: formula_121 a second differentiation produces the following differential equation: formula_122 The general solution of which has the form: formula_123 where "A" and "B" are two constants to be determined with the boundary conditions. Setting "t" = 0 in the initial integral equation gives "e"(0) = 0 which implies that "B" = 0 and similarly, setting "t" = 1 in the first differentiation yields "e' "(1) = 0, whence: formula_124 which in turn implies that eigenvalues of "T""K""X" are: formula_125 The corresponding eigenfunctions are thus of the form: formula_126 "A" is then chosen so as to normalize "e""k": formula_127 This gives the following representation of the Wiener process: Theorem. There is a sequence {"Z""i"}"i" of independent Gaussian random variables with mean zero and variance 1 such that formula_128 Note that this representation is only valid for formula_129 On larger intervals, the increments are not independent. As stated in the theorem, convergence is in the L2 norm and uniform in "t". The Brownian bridge. Similarly the Brownian bridge formula_130 which is a stochastic process with covariance function formula_131 can be represented as the series formula_132 Applications. Adaptive optics systems sometimes use K–L functions to reconstruct wave-front phase information (Dai 1996, JOSA A). Karhunen–Loève expansion is closely related to the Singular Value Decomposition. The latter has myriad applications in image processing, radar, seismology, and the like. If one has independent vector observations from a vector valued stochastic process then the left singular vectors are maximum likelihood estimates of the ensemble KL expansion. Applications in signal estimation and detection. Detection of a known continuous signal "S"("t"). In communication, we usually have to decide whether a signal from a noisy channel contains valuable information. The following hypothesis testing is used for detecting continuous signal "s"("t") from channel output "X"("t"), "N"("t") is the channel noise, which is usually assumed zero mean Gaussian process with correlation function formula_133 formula_134 formula_135 Signal detection in white noise. When the channel noise is white, its correlation function is formula_136 and it has constant power spectrum density. In physically practical channel, the noise power is finite, so: formula_137 Then the noise correlation function is sinc function with zeros at formula_138 Since are uncorrelated and gaussian, they are independent. Thus we can take samples from "X"("t") with time spacing formula_139 Let formula_140. We have a total of formula_141 i.i.d observations formula_142 to develop the likelihood-ratio test. Define signal formula_143, the problem becomes, formula_144 formula_145 The log-likelihood ratio formula_146 As "t" → 0, let: formula_147 Then "G" is the test statistics and the Neyman–Pearson optimum detector is formula_148 As "G" is Gaussian, we can characterize it by finding its mean and variances. Then we get formula_149 formula_150 where formula_151 is the signal energy. The false alarm error formula_152 And the probability of detection: formula_153 where Φ is the cdf of standard normal, or Gaussian, variable. Signal detection in colored noise. 
When N(t) is colored (correlated in time) Gaussian noise with zero mean and covariance function formula_154 we cannot sample independent discrete observations by evenly spacing the time. Instead, we can use K–L expansion to decorrelate the noise process and get independent Gaussian observation 'samples'. The K–L expansion of "N"("t"): formula_155 where formula_156 and the orthonormal bases formula_157 are generated by kernel formula_158, i.e., solution to formula_159 Do the expansion: formula_160 where formula_161, then formula_162 under H and formula_163 under K. Let formula_164, we have formula_165 are independent Gaussian r.v's with variance formula_166 under H: formula_167 are independent Gaussian r.v's. formula_168 under K: formula_169 are independent Gaussian r.v's. formula_170 Hence, the log-LR is given by formula_171 and the optimum detector is formula_172 Define formula_173 then formula_174 How to find "k"("t"). Since formula_175 k(t) is the solution to formula_176 If "N"("t")is wide-sense stationary, formula_177 which is known as the Wiener–Hopf equation. The equation can be solved by taking fourier transform, but not practically realizable since infinite spectrum needs spatial factorization. A special case which is easy to calculate "k"("t") is white Gaussian noise. formula_178 The corresponding impulse response is "h"("t") = "k"("T" − "t") = "CS"("T" − "t"). Let "C" = 1, this is just the result we arrived at in previous section for detecting of signal in white noise. Test threshold for Neyman–Pearson detector. Since X(t) is a Gaussian process, formula_179 is a Gaussian random variable that can be characterized by its mean and variance. formula_180 Hence, we obtain the distributions of "H" and "K": formula_181 formula_182 The false alarm error is formula_183 So the test threshold for the Neyman–Pearson optimum detector is formula_184 Its power of detection is formula_185 When the noise is white Gaussian process, the signal power is formula_186 Prewhitening. For some type of colored noise, a typical practise is to add a prewhitening filter before the matched filter to transform the colored noise into white noise. For example, N(t) is a wide-sense stationary colored noise with correlation function formula_187 formula_188 The transfer function of prewhitening filter is formula_189 Detection of a Gaussian random signal in Additive white Gaussian noise (AWGN). When the signal we want to detect from the noisy channel is also random, for example, a white Gaussian process "X"("t"), we can still implement K–L expansion to get independent sequence of observation. In this case, the detection problem is described as follows: formula_190 formula_191 "X"("t") is a random process with correlation function formula_192 The K–L expansion of "X"("t") is formula_193 where formula_194 and formula_195 are solutions to formula_196 So formula_197's are independent sequence of r.v's with zero mean and variance formula_166. 
Expanding "Y"("t") and "N"("t") by formula_195, we get formula_198 where formula_199 As "N"("t") is Gaussian white noise, formula_165's are i.i.d sequence of r.v with zero mean and variance formula_200, then the problem is simplified as follows, formula_201 formula_202 The Neyman–Pearson optimal test: formula_203 so the log-likelihood ratio is formula_204 Since formula_205 is just the minimum-mean-square estimate of formula_197 given formula_206's, formula_207 K–L expansion has the following property: If formula_208 where formula_209 then formula_210 So let formula_211 Noncausal filter "Q"("t","s") can be used to get the estimate through formula_212 By orthogonality principle, "Q"("t","s") satisfies formula_213 However, for practical reasons, it's necessary to further derive the causal filter "h"("t","s"), where "h"("t","s") = 0 for "s" &gt; "t", to get estimate formula_214. Specifically, formula_215 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " X_t = \\sum_{k=1}^\\infty Z_k e_k(t)" }, { "math_id": 1, "text": "\\forall t\\in [a,b] \\qquad X_t\\in L^2(\\Omega, F,\\mathbf{P}), \\quad \\text{i.e. } \\mathbf{E}[X_t^2] < \\infty," }, { "math_id": 2, "text": "\\forall t\\in [a,b] \\qquad \\mathbf{E}[X_t]=0," }, { "math_id": 3, "text": "\\forall t,s \\in [a,b] \\qquad K_X(s,t)=\\mathbf{E}[X_s X_t]." }, { "math_id": 4, "text": "\\mathbf{E}[X_t^2] < \\infty" }, { "math_id": 5, "text": "K_X(s,t)" }, { "math_id": 6, "text": "s,t \\in [a,b]" }, { "math_id": 7, "text": "\n\\begin{aligned}\n&T_{K_X}&: L^2([a,b]) &\\to L^2([a,b])\\\\\n&&: f \\mapsto T_{K_X}f &= \\int_a^b K_X(s,\\cdot) f(s) \\, ds\n\\end{aligned}\n" }, { "math_id": 8, "text": "\\int_a^b K_X(s,t) e_k(s)\\,ds=\\lambda_k e_k(t)" }, { "math_id": 9, "text": "X_t=\\sum_{k=1}^\\infty Z_k e_k(t)" }, { "math_id": 10, "text": "Z_k=\\int_a^b X_t e_k(t)\\, dt" }, { "math_id": 11, "text": "\\mathbf{E}[Z_k]=0,~\\forall k\\in\\mathbb{N} \\qquad \\mbox{and}\\qquad \\mathbf{E}[Z_i Z_j]=\\delta_{ij} \\lambda_j,~\\forall i,j\\in \\mathbb{N}" }, { "math_id": 12, "text": "K_X(s,t)=\\sum_{k=1}^\\infty \\lambda_k e_k(s) e_k(t) " }, { "math_id": 13, "text": "Z_k=\\int_a^b X_t e_k(t) \\,dt" }, { "math_id": 14, "text": "\\begin{align}\n\\mathbf{E}[Z_k] &=\\mathbf{E}\\left[\\int_a^b X_t e_k(t) \\,dt\\right]=\\int_a^b \\mathbf{E}[X_t] e_k(t) dt=0 \\\\ [8pt]\n\\mathbf{E}[Z_i Z_j]&=\\mathbf{E}\\left[ \\int_a^b \\int_a^b X_t X_s e_j(t)e_i(s)\\, dt\\, ds\\right]\\\\\n&=\\int_a^b \\int_a^b \\mathbf{E}\\left[X_t X_s\\right] e_j(t)e_i(s) \\, dt\\, ds\\\\\n&=\\int_a^b \\int_a^b K_X(s,t) e_j(t)e_i(s) \\,dt \\, ds\\\\\n&=\\int_a^b e_i(s)\\left(\\int_a^b K_X(s,t) e_j(t) \\,dt\\right) \\, ds\\\\\n&=\\lambda_j \\int_a^b e_i(s) e_j(s) \\, ds\\\\\n&=\\delta_{ij}\\lambda_j\n\\end{align}" }, { "math_id": 15, "text": "S_N=\\sum_{k=1}^N Z_k e_k(t)." 
}, { "math_id": 16, "text": "\\begin{align}\n\\mathbf{E} \\left [\\left |X_t-S_N \\right |^2 \\right ]&=\\mathbf{E} \\left [X_t^2 \\right ]+\\mathbf{E} \\left [S_N^2 \\right ] - 2\\mathbf{E} \\left [X_t S_N \\right ]\\\\\n&=K_X(t,t)+\\mathbf{E}\\left[\\sum_{k=1}^N \\sum_{l=1}^N Z_k Z_\\ell e_k(t)e_\\ell(t) \\right] -2\\mathbf{E}\\left[X_t\\sum_{k=1}^N Z_k e_k(t)\\right]\\\\\n&=K_X(t,t)+\\sum_{k=1}^N \\lambda_k e_k(t)^2 -2\\mathbf{E}\\left[\\sum_{k=1}^N \\int_a^b X_t X_s e_k(s) e_k(t) \\,ds \\right]\\\\\n&=K_X(t,t)-\\sum_{k=1}^N \\lambda_k e_k(t)^2\n\\end{align}" }, { "math_id": 17, "text": " \\lim_{N \\to \\infty} \\sum_{i=1}^N e_i(t) Z_i(\\omega) = X_t(\\omega) " }, { "math_id": 18, "text": "X_t(\\omega)=\\sum_{k=1}^\\infty A_k(\\omega) f_k(t)" }, { "math_id": 19, "text": "A_k(\\omega)=\\int_a^b X_t(\\omega) f_k(t)\\,dt" }, { "math_id": 20, "text": "\\hat{X}_t(\\omega)=\\sum_{k=1}^N A_k(\\omega) f_k(t)" }, { "math_id": 21, "text": "\\varepsilon_N(t)=\\sum_{k=N+1}^\\infty A_k(\\omega) f_k(t)" }, { "math_id": 22, "text": "\\begin{align}\n\\varepsilon_N^2(t)&=\\mathbf{E} \\left[\\sum_{i=N+1}^\\infty \\sum_{j=N+1}^\\infty A_i(\\omega) A_j(\\omega) f_i(t) f_j(t)\\right]\\\\\n&=\\sum_{i=N+1}^\\infty \\sum_{j=N+1}^\\infty \\mathbf{E}\\left[\\int_a^b \\int_a^b X_t X_s f_i(t)f_j(s) \\, ds\\, dt\\right] f_i(t) f_j(t)\\\\\n&=\\sum_{i=N+1}^\\infty \\sum_{j=N+1}^\\infty f_i(t) f_j(t) \\int_a^b \\int_a^b K_X(s,t) f_i(t)f_j(s) \\, ds\\, dt\n\\end{align}" }, { "math_id": 23, "text": "\\int_a^b \\varepsilon_N^2(t) \\, dt=\\sum_{k=N+1}^\\infty \\int_a^b \\int_a^b K_X(s,t) f_k(t)f_k(s) \\, ds\\, dt" }, { "math_id": 24, "text": "Er[f_k(t),k\\in\\{N+1,\\ldots\\}]=\\sum_{k=N+1}^\\infty \\int_a^b \\int_a^b K_X(s,t) f_k(t)f_k(s) \\, ds \\, dt-\\beta_k \\left(\\int_a^b f_k(t) f_k(t) \\, dt -1\\right)" }, { "math_id": 25, "text": "\\frac{\\partial Er}{\\partial f_i(t)}=\\int_a^b \\left(\\int_a^b K_X(s,t) f_i(s) \\, ds -\\beta_i f_i(t) \\right) \\, dt = 0" }, { "math_id": 26, "text": "\\int_a^b K_X(s,t) f_i(s) \\,ds =\\beta_i f_i(t)." }, { "math_id": 27, "text": "\\operatorname{var}[X_t]=\\sum_{k=0}^\\infty e_k(t)^2 \\operatorname{var}[Z_k]=\\sum_{k=1}^\\infty \\lambda_k e_k(t)^2" }, { "math_id": 28, "text": "\\int_a^b \\operatorname{var}[X_t] \\, dt=\\sum_{k=1}^\\infty \\lambda_k" }, { "math_id": 29, "text": "\\sum_{k=1}^N \\lambda_k." }, { "math_id": 30, "text": "\\frac{\\sum_{k=1}^N \\lambda_k}{\\sum_{k=1}^\\infty \\lambda_k}" }, { "math_id": 31, "text": "N\\in\\mathbb{N}" }, { "math_id": 32, "text": "\\frac{\\sum_{k=1}^N \\lambda_k}{\\sum_{k=1}^\\infty \\lambda_k} \\geq 0.95." 
}, { "math_id": 33, "text": "X_t=\\sum_{k=1}^\\infty W_k\\varphi_k(t)" }, { "math_id": 34, "text": "\\varphi_k(t)" }, { "math_id": 35, "text": "W_k" }, { "math_id": 36, "text": "p_k=\\mathbb{E}[|W_k|^2]/\\mathbb{E}[|X_t|_{L^2}^2]" }, { "math_id": 37, "text": "\\sum_{k=1}^\\infty p_k=1" }, { "math_id": 38, "text": "H(\\{\\varphi_k\\})=-\\sum_i p_k \\log(p_k)" }, { "math_id": 39, "text": "H(\\{\\varphi_k\\})\\ge H(\\{e_k\\})" }, { "math_id": 40, "text": "\\varphi_k" }, { "math_id": 41, "text": "e_k(t)" }, { "math_id": 42, "text": "p_k" }, { "math_id": 43, "text": "q_k" }, { "math_id": 44, "text": "N\\ge 1" }, { "math_id": 45, "text": "e_k" }, { "math_id": 46, "text": "\\mathbb{E} \\left|\\sum_{k=1}^N Z_ke_k(t)-X_t\\right|_{L^2}^2\\le \\mathbb{E}\\left|\\sum_{k=1}^N W_k\\varphi_k(t)-X_t\\right|_{L^2}^2" }, { "math_id": 47, "text": "\\mathbb{E}\\left|\\sum_{k=1}^N W_k\\varphi_k(t)-X_t\\right|_{L^2}^2 =\\mathbb{E}|X_t^2|_{L^2} + \\sum_{k=1}^N \\sum_{\\ell=1}^N \\mathbb{E}[W_\\ell \\varphi_\\ell(t)W_k^*\\varphi_k^*(t)]_{L^2}-\\sum_{k=1}^N \\mathbb{E}[W_k \\varphi_k X_t^*]_{L^2} - \\sum_{k=1}^N \\mathbb{E}[X_tW_k^*\\varphi_k^*(t)]_{L^2}" }, { "math_id": 48, "text": "X_t" }, { "math_id": 49, "text": "\\mathbb{E}[X_t]^2_{L^2}-\\sum_{k=1}^N\\mathbb{E}[|W_k|^2]" }, { "math_id": 50, "text": "{\\displaystyle \\mathbb {E} [X_t]_{L^2}^2-\\sum _{k=1}^N\\mathbb {E} [|Z_k|^2]}\\le {\\displaystyle \\mathbb {E} [X_t]_{L^2}^2-\\sum _{k=1}^N\\mathbb {E} [|W_k|^{2}]} " }, { "math_id": 51, "text": "\\mathbb{E}[|X_t|^2_{L^2}]" }, { "math_id": 52, "text": "\\sum_{k=1}^N p_k\\ge \\sum_{k=1}^N q_k" }, { "math_id": 53, "text": "-\\sum_{k=1}^\\infty p_k \\log(p_k)\\le -\\sum_{k=1}^\\infty q_k \\log(q_k)" }, { "math_id": 54, "text": "\\left\\{ g_m \\right\\}_{0\\le m\\le N}" }, { "math_id": 55, "text": "Y=\\sum_{m=0}^{N-1} \\left\\langle Y, g_m \\right\\rangle g_m," }, { "math_id": 56, "text": "\\left\\langle Y, g_m \\right\\rangle =\\sum_{n=0}^{N-1}{Y[n]} g_m^* [n]" }, { "math_id": 57, "text": "Y_M=\\sum_{m=0}^{M-1} \\left\\langle Y, g_m \\right\\rangle g_m" }, { "math_id": 58, "text": "\\varepsilon[M]= \\mathbf{E} \\left\\{ \\left\\| Y- Y_M \\right\\|^2 \\right\\} =\\sum_{m=M}^{N-1} \\mathbf{E}\\left\\{ \\left| \\left\\langle Y, g_m \\right\\rangle \\right|^2 \\right\\}" }, { "math_id": 59, "text": "R[ n,m]=\\mathbf{E} \\left\\{ Y[n] Y^*[m] \\right\\}" }, { "math_id": 60, "text": "\\mathbf{E}\\left\\{\\left|\\langle Y,x \\rangle\\right|^2\\right\\}=\\langle Kx,x \\rangle =\\sum_{n=0}^{N-1} \\sum_{m=0}^{N-1} R[n,m]x[n]x^*[m]" }, { "math_id": 61, "text": "\\varepsilon [M]=\\sum_{m=M}^{N-1}{\\left\\langle K g_m, g_m \\right\\rangle }" }, { "math_id": 62, "text": "\\varepsilon [M]=\\sum_{m=M}^{N-1}\\left\\langle K g_m, g_m \\right\\rangle" }, { "math_id": 63, "text": "\\left\\{ g_m \\right\\}_{0\\le m<N}" }, { "math_id": 64, "text": "\\left\\langle K g_m, g_m \\right\\rangle \\ge \\left\\langle Kg_{m+1}, g_{m+1} \\right\\rangle, \\qquad 0\\le m<N-1." }, { "math_id": 65, "text": "f\\in \\Eta " }, { "math_id": 66, "text": "\\Eta " }, { "math_id": 67, "text": "\\Beta =\\left\\{ g_m \\right\\}_{m\\in \\mathbb{N}}" }, { "math_id": 68, "text": "f_M" }, { "math_id": 69, "text": "f_M=\\sum_{m\\in I_M} \\left\\langle f, g_m \\right\\rangle g_m" }, { "math_id": 70, "text": "\\varepsilon [M]=\\left\\{ \\left\\| f- f_M \\right\\|^2 \\right\\}=\\sum_{m\\notin I_M}^{N-1} \\left\\{ \\left| \\left\\langle f, g_m \\right\\rangle \\right|^2 \\right\\}" }, { "math_id": 71, "text": "\\left| \\left\\langle f, g_m \\right\\rangle \\right|." 
}, { "math_id": 72, "text": "\\left\\{ \\left| \\left\\langle f, g_m \\right\\rangle \\right| \\right\\}_{m\\in \\mathbb{N}}" }, { "math_id": 73, "text": "\\left| \\left \\langle f, g_{m_k} \\right \\rangle \\right|\\ge \\left| \\left \\langle f, g_{m_{k+1}} \\right \\rangle \\right|." }, { "math_id": 74, "text": "f_M=\\sum_{k=1}^M \\left\\langle f, g_{m_k} \\right\\rangle g_{m_k}" }, { "math_id": 75, "text": "f_M=\\sum_{m=0}^\\infty \\theta_T \\left( \\left\\langle f, g_m \\right\\rangle \\right) g_m" }, { "math_id": 76, "text": "T=\\left|\\left\\langle f, g_{m_M} \\right \\rangle\\right|, \\qquad \\theta_T(x)= \\begin{cases} x & |x|\\ge T \\\\ 0 & |x| < T \\end{cases}" }, { "math_id": 77, "text": "\\varepsilon [M]=\\left\\{ \\left\\| f- f_M \\right\\|^2 \\right\\}=\\sum_{k=M+1}^{\\infty} \\left\\{ \\left| \\left\\langle f, g_{m_k} \\right\\rangle \\right|^2 \\right\\}" }, { "math_id": 78, "text": "\\left| \\left\\langle f, g_{m_k} \\right\\rangle \\right|" }, { "math_id": 79, "text": "\\Iota^\\Rho" }, { "math_id": 80, "text": "\\| f \\|_{\\Beta, p} =\\left( \\sum_{m=0}^\\infty \\left| \\left\\langle f, g_m \\right\\rangle \\right|^p \\right)^{\\frac{1}{p}}" }, { "math_id": 81, "text": "\\| f\\|_{\\Beta, p}" }, { "math_id": 82, "text": "\\| f\\|_{\\Beta ,p}<\\infty " }, { "math_id": 83, "text": "\\varepsilon [M]\\le \\frac{\\|f\\|_{\\Beta ,p}^2}{\\frac{2}{p}-1} M^{1-\\frac{2}{p}}" }, { "math_id": 84, "text": "\\varepsilon [M]=o\\left( M^{1-\\frac{2}{p}} \\right)." }, { "math_id": 85, "text": "\\varepsilon [M]=o\\left( M^{1-\\frac{2}{p}} \\right)" }, { "math_id": 86, "text": "\\| f\\|_{\\Beta ,q}<\\infty " }, { "math_id": 87, "text": "\\sum_{n=0}^{N-1}f[n]=0" }, { "math_id": 88, "text": "Y[n]=f [ (n-p)\\bmod N ]" }, { "math_id": 89, "text": "\\Pr ( P=p )=\\frac{1}{N}, \\qquad 0\\le p<N" }, { "math_id": 90, "text": "\\mathbf{E}\\{Y[n]\\}=\\frac{1}{N} \\sum_{p=0}^{N-1} f[(n-p)\\bmod N]=0" }, { "math_id": 91, "text": "R[n,k]=\\mathbf{E} \\{Y[n]Y[k] \\}=\\frac{1}{N}\\sum_{p=0}^{N-1} f[(n-p)\\bmod N] f [(k-p)\\bmod N ] = \\frac{1}{N} f\\Theta \\bar{f}[n-k], \\quad \\bar{f}[n]=f[-n]" }, { "math_id": 92, "text": "R[n,k]=R_Y[n-k], \\qquad R_Y[k]=\\frac{1}{N}f \\Theta \\bar{f}[k]" }, { "math_id": 93, "text": "\\left\\{ \\frac{1}{\\sqrt{N}} e^{i2\\pi mn/N} \\right\\}_{0\\le m<N}." }, { "math_id": 94, "text": "P_Y[m]=\\hat{R}_Y[m]=\\frac{1}{N} \\left| \\hat{f}[m] \\right|^2" }, { "math_id": 95, "text": "f[n]=\\delta [n]-\\delta [n-1]" }, { "math_id": 96, "text": "\\left\\{g_m[n]=\\delta[n-m] \\right\\}_{0\\le m<N}" }, { "math_id": 97, "text": "\\mathbf{E} \\left\\{ \\left| \\left\\langle Y[n],\\frac{1}{\\sqrt{N}} e^{i2\\pi mn/N} \\right\\rangle \\right|^2 \\right\\}=P_Y[m] = \\frac{4}{N}\\sin^2 \\left(\\frac{\\pi k}{N} \\right)" }, { "math_id": 98, "text": "f[n]=\\delta[n]-\\delta[n-1]" }, { "math_id": 99, "text": "\\int_a^b K_X(s,t) e_k(s)\\,ds=\\lambda_k e_k(t)." 
}, { "math_id": 100, "text": "\\left(X_n\\right)_{n\\in\\{1,\\ldots,N\\}}" }, { "math_id": 101, "text": "X=\\left(X_1~X_2~\\ldots~X_N\\right)^T" }, { "math_id": 102, "text": "X:=X-\\mu_X" }, { "math_id": 103, "text": "\\mu_X" }, { "math_id": 104, "text": "\\Sigma_{ij}= \\mathbf{E}[X_i X_j],\\qquad \\forall i,j \\in \\{1,\\ldots,N\\}" }, { "math_id": 105, "text": "\\sum_{j=1}^N \\Sigma_{ij} e_j=\\lambda e_i \\quad \\Leftrightarrow \\quad \\Sigma e=\\lambda e" }, { "math_id": 106, "text": "e=(e_1~e_2~\\ldots~e_N)^T" }, { "math_id": 107, "text": "\\R^N" }, { "math_id": 108, "text": "\\{\\lambda_i,\\varphi_i\\}_{i\\in\\{1,\\ldots,N\\}}" }, { "math_id": 109, "text": "\\begin{align}\n\\Phi &:=\\left(\\varphi_1~\\varphi_2~\\ldots~\\varphi_N\\right)^T\\\\\n\\Phi^T \\Phi &=I\n\\end{align}" }, { "math_id": 110, "text": "X =\\sum_{i=1}^N \\langle \\varphi_i,X\\rangle \\varphi_i =\\sum_{i=1}^N \\varphi_i^T X \\varphi_i" }, { "math_id": 111, "text": "\\begin{cases} Y=\\Phi^T X \\\\ X=\\Phi Y \\end{cases}" }, { "math_id": 112, "text": "Y_i=\\varphi_i^T X" }, { "math_id": 113, "text": "\\varphi_i" }, { "math_id": 114, "text": "X=\\sum_{i=1}^N Y_i \\varphi_i=\\sum_{i=1}^N \\langle \\varphi_i,X\\rangle \\varphi_i" }, { "math_id": 115, "text": "K\\in\\{1,\\ldots,N\\}" }, { "math_id": 116, "text": "\\frac{\\sum_{i=1}^K \\lambda_i}{\\sum_{i=1}^N \\lambda_i}\\geq \\alpha" }, { "math_id": 117, "text": " K_W(t,s) = \\operatorname{cov}(W_t,W_s) = \\min (s,t). " }, { "math_id": 118, "text": " e_k(t) = \\sqrt{2} \\sin \\left( \\left(k - \\tfrac{1}{2}\\right) \\pi t \\right)" }, { "math_id": 119, "text": " \\lambda_k = \\frac{1}{(k -\\frac{1}{2})^2 \\pi^2}. " }, { "math_id": 120, "text": "\\begin{align}\n\\int_a^b K_W(s,t) e(s) \\,ds &=\\lambda e(t)\\qquad \\forall t, 0\\leq t\\leq 1\\\\\n\\int_0^1 \\min(s,t) e(s) \\,ds &=\\lambda e(t)\\qquad \\forall t, 0\\leq t\\leq 1 \\\\\n\\int_0^t s e(s) \\,ds + t \\int_t^1 e(s) \\,ds &= \\lambda e(t) \\qquad \\forall t, 0\\leq t\\leq 1\n\\end{align}" }, { "math_id": 121, "text": "\\int_t^1 e(s) \\, ds=\\lambda e'(t)" }, { "math_id": 122, "text": "-e(t)=\\lambda e''(t)" }, { "math_id": 123, "text": "e(t)=A\\sin\\left(\\frac{t}{\\sqrt{\\lambda}}\\right)+B\\cos\\left(\\frac{t}{\\sqrt{\\lambda}}\\right)" }, { "math_id": 124, "text": "\\cos\\left(\\frac{1}{\\sqrt{\\lambda}}\\right)=0" }, { "math_id": 125, "text": "\\lambda_k=\\left(\\frac{1}{(k-\\frac{1}{2})\\pi}\\right)^2,\\qquad k\\geq 1" }, { "math_id": 126, "text": "e_k(t)=A \\sin\\left((k-\\frac{1}{2})\\pi t\\right),\\qquad k\\geq 1" }, { "math_id": 127, "text": "\\int_0^1 e_k^2(t) \\, dt=1\\quad \\implies\\quad A=\\sqrt{2}" }, { "math_id": 128, "text": " W_t = \\sqrt{2} \\sum_{k=1}^\\infty Z_k \\frac{\\sin \\left(\\left(k - \\frac 1 2 \\right) \\pi t\\right)}{ \\left(k - \\frac 1 2 \\right) \\pi}. " }, { "math_id": 129, "text": " t\\in[0,1]. " }, { "math_id": 130, "text": "B_t=W_t-tW_1" }, { "math_id": 131, "text": "K_B(t,s)=\\min(t,s)-ts" }, { "math_id": 132, "text": "B_t = \\sum_{k=1}^\\infty Z_k \\frac{\\sqrt{2} \\sin(k \\pi t)}{k \\pi}" }, { "math_id": 133, "text": "R_N (t, s) = E[N(t)N(s)]" }, { "math_id": 134, "text": "H: X(t) = N(t), " }, { "math_id": 135, "text": "K: X(t) = N(t)+s(t), \\quad t\\in(0,T)" }, { "math_id": 136, "text": "R_N(t) = \\tfrac{1}{2} N_0 \\delta (t)," }, { "math_id": 137, "text": "S_N(f) = \\begin{cases} \\frac{N_0}{2} &|f|<w \\\\ 0 & |f|>w \\end{cases}" }, { "math_id": 138, "text": "\\frac{n}{2\\omega}, n \\in \\mathbf{Z}." 
}, { "math_id": 139, "text": " \\Delta t = \\frac{n}{2\\omega} \\text{ within } (0,''T''). " }, { "math_id": 140, "text": "X_i = X(i\\,\\Delta t)" }, { "math_id": 141, "text": "n = \\frac{T}{\\Delta t} = T(2\\omega) = 2\\omega T" }, { "math_id": 142, "text": "\\{X_1, X_2,\\ldots,X_n\\}" }, { "math_id": 143, "text": "S_i = S(i\\,\\Delta t)" }, { "math_id": 144, "text": "H: X_i = N_i," }, { "math_id": 145, "text": "K: X_i = N_i + S_i, i = 1,2,\\ldots,n." }, { "math_id": 146, "text": "\\mathcal{L}(\\underline{x}) = \\log\\frac{\\sum^n_{i=1} (2S_i x_i - S_i^2)}{2\\sigma^2} \\Leftrightarrow \\Delta t \\sum^n_{i = 1} S_i x_i = \\sum^n_{i=1} S(i\\,\\Delta t)x(i\\,\\Delta t) \\, \\Delta t \\gtrless \\lambda_\\cdot2" }, { "math_id": 147, "text": "G = \\int^T_0 S(t)x(t) \\, dt." }, { "math_id": 148, "text": "G(\\underline{x}) > G_0 \\Rightarrow K < G_0 \\Rightarrow H." }, { "math_id": 149, "text": "H: G \\sim N \\left (0,\\tfrac{1}{2}N_0E \\right )" }, { "math_id": 150, "text": "K: G \\sim N \\left (E,\\tfrac{1}{2}N_0E \\right )" }, { "math_id": 151, "text": "\\mathbf{E} = \\int^T_0 S^2(t) \\, dt" }, { "math_id": 152, "text": "\\alpha = \\int^\\infty_{G_0} N \\left (0, \\tfrac{1}{2}N_0E \\right) \\, dG \\Rightarrow G_0 = \\sqrt{\\tfrac{1}{2} N_0E} \\Phi^{-1}(1-\\alpha)" }, { "math_id": 153, "text": "\\beta = \\int^\\infty_{G_0} N \\left (E, \\tfrac{1}{2}N_0E \\right) \\, dG = 1-\\Phi \\left (\\frac{G_0 - E}{\\sqrt{\\tfrac{1}{2} N_0 E}} \\right ) = \\Phi \\left (\\sqrt{\\frac{2E}{N_0}} - \\Phi^{-1}(1-\\alpha) \\right )," }, { "math_id": 154, "text": "R_N(t,s) = E[N(t)N(s)]," }, { "math_id": 155, "text": "N(t) = \\sum^{\\infty}_{i=1} N_i \\Phi_i(t), \\quad 0<t<T," }, { "math_id": 156, "text": "N_i =\\int N(t)\\Phi_i(t)\\,dt" }, { "math_id": 157, "text": "\\{\\Phi_i{t}\\}" }, { "math_id": 158, "text": "R_N(t,s)" }, { "math_id": 159, "text": " \\int ^T_0 R_N(t,s)\\Phi_i(s)\\,ds = \\lambda_i \\Phi_i(t), \\quad \\operatorname{var}[N_i] = \\lambda_i." }, { "math_id": 160, "text": "S(t) = \\sum^{\\infty}_{i = 1}S_i\\Phi_i(t)," }, { "math_id": 161, "text": "S_i = \\int^T _0 S(t)\\Phi_i(t) \\, dt" }, { "math_id": 162, "text": "X_i = \\int^T _0 X(t)\\Phi_i(t) \\, dt = N_i" }, { "math_id": 163, "text": "N_i + S_i" }, { "math_id": 164, "text": "\\overline{X} = \\{X_1,X_2,\\dots\\}" }, { "math_id": 165, "text": "N_i" }, { "math_id": 166, "text": "\\lambda_i" }, { "math_id": 167, "text": "\\{X_i\\}" }, { "math_id": 168, "text": "f_H[x(t)|0<t<T] = f_H(\\underline{x}) = \\prod^\\infty_{i=1} \\frac{1}{\\sqrt{2\\pi \\lambda_i}} \\exp \\left (-\\frac{x_i^2}{2 \\lambda_i} \\right )" }, { "math_id": 169, "text": "\\{X_i - S_i\\}" }, { "math_id": 170, "text": "f_K[x(t)\\mid 0<t<T] = f_K(\\underline{x}) = \\prod^\\infty_{i=1} \\frac{1}{\\sqrt{2\\pi \\lambda_i}} \\exp \\left(-\\frac{(x_i - S_i)^2}{2 \\lambda_i} \\right)" }, { "math_id": 171, "text": "\\mathcal{L}(\\underline{x}) = \\sum^{\\infty}_{i=1} \\frac{2S_i x_i - S_i^2}{2\\lambda_i}" }, { "math_id": 172, "text": "G = \\sum^\\infty_{i=1} S_i x_i \\lambda_i > G_0 \\Rightarrow K, < G_0 \\Rightarrow H." }, { "math_id": 173, "text": "k(t) = \\sum^\\infty_{i=1} \\lambda_i S_i \\Phi_i(t), 0<t<T," }, { "math_id": 174, "text": "G = \\int^T _0 k(t)x(t)\\,dt." }, { "math_id": 175, "text": "\\int^T_0 R_N(t,s)k(s) \\, ds = \\sum^\\infty_{i=1} \\lambda_i S_i \\int^T _0 R_N(t,s)\\Phi_i (s) \\, ds = \\sum^\\infty_{i=1} S_i \\Phi_i(t) = S(t), " }, { "math_id": 176, "text": "\\int^T_0 R_N(t,s)k(s)\\,ds = S(t)." 
}, { "math_id": 177, "text": "\\int^T_0 R_N(t-s)k(s) \\, ds = S(t), " }, { "math_id": 178, "text": "\\int^T_0 \\frac{N_0}{2}\\delta(t-s)k(s) \\, ds = S(t) \\Rightarrow k(t) = C S(t), \\quad 0<t<T." }, { "math_id": 179, "text": "G = \\int^T_0 k(t)x(t) \\, dt," }, { "math_id": 180, "text": "\\begin{align}\n\\mathbf{E}[G\\mid H] &= \\int^T_0 k(t)\\mathbf{E}[x(t)\\mid H]\\,dt = 0 \\\\\n\\mathbf{E}[G\\mid K] &= \\int^T_0 k(t)\\mathbf{E}[x(t)\\mid K]\\,dt = \\int^T_0 k(t)S(t)\\,dt \\equiv \\rho \\\\\n\\mathbf{E}[G^2\\mid H] &= \\int^T_0 \\int^T_0 k(t)k(s) R_N(t,s)\\,dt\\,ds = \\int^T_0 k(t) \\left (\\int^T_0 k(s)R_N(t,s) \\, ds \\right) = \\int^T_0 k(t)S(t) \\, dt = \\rho \\\\\n\\operatorname{var}[G\\mid H] &= \\mathbf{E}[G^2\\mid H] - (\\mathbf{E}[G\\mid H])^2 = \\rho \\\\\n\\mathbf{E}[G^2\\mid K] &=\\int^T_0\\int^T_0k(t)k(s) \\mathbf{E}[x(t)x(s)]\\,dt\\,ds = \\int^T_0\\int^T_0k(t)k(s)(R_N(t,s) +S(t)S(s)) \\, dt\\, ds = \\rho + \\rho^2\\\\\n\\operatorname{var}[G\\mid K] &= \\mathbf{E}[G^2|K] - (\\mathbf{E}[G|K])^2 = \\rho + \\rho^2 -\\rho^2 = \\rho\n\\end{align}" }, { "math_id": 181, "text": "H: G \\sim N(0,\\rho)" }, { "math_id": 182, "text": "K: G \\sim N(\\rho, \\rho)" }, { "math_id": 183, "text": "\\alpha = \\int^\\infty_{G_0} N(0,\\rho)\\,dG = 1 - \\Phi \\left (\\frac{G_0}{\\sqrt{\\rho}} \\right )." }, { "math_id": 184, "text": "G_0 = \\sqrt{\\rho} \\Phi^{-1} (1-\\alpha)." }, { "math_id": 185, "text": "\\beta = \\int^\\infty_{G_0} N(\\rho, \\rho) \\, dG = \\Phi \\left (\\sqrt{\\rho} - \\Phi^{-1}(1 - \\alpha) \\right) " }, { "math_id": 186, "text": "\\rho = \\int^T_0 k(t)S(t) \\, dt = \\int^T_0 S(t)^2 \\, dt = E." }, { "math_id": 187, "text": "R_N(\\tau) = \\frac{B N_0}{4} e^{-B|\\tau|}" }, { "math_id": 188, "text": "S_N(f) = \\frac{N_0}{2(1+(\\frac{w}{B})^2)}" }, { "math_id": 189, "text": "H(f) = 1 + j \\frac{w}{B}." }, { "math_id": 190, "text": "H_0 : Y(t) = N(t)" }, { "math_id": 191, "text": "H_1 : Y(t) = N(t) + X(t), \\quad 0<t<T. " }, { "math_id": 192, "text": "R_X(t,s) = E\\{X(t)X(s)\\}" }, { "math_id": 193, "text": "X(t) = \\sum^\\infty_{i=1} X_i \\Phi_i(t)," }, { "math_id": 194, "text": "X_i = \\int^T_0 X(t)\\Phi_i(t)\\,dt" }, { "math_id": 195, "text": "\\Phi_i(t)" }, { "math_id": 196, "text": " \\int^T_0 R_X(t,s)\\Phi_i(s)ds= \\lambda_i \\Phi_i(t). " }, { "math_id": 197, "text": "X_i" }, { "math_id": 198, "text": "Y_i = \\int^T_0 Y(t)\\Phi_i(t) \\, dt = \\int^T_0 [N(t) + X(t)]\\Phi_i(t) = N_i + X_i," }, { "math_id": 199, "text": "N_i = \\int^T_0 N(t)\\Phi_i(t)\\,dt." }, { "math_id": 200, "text": "\\tfrac{1}{2}N_0" }, { "math_id": 201, "text": "H_0: Y_i = N_i" }, { "math_id": 202, "text": "H_1: Y_i = N_i + X_i" }, { "math_id": 203, "text": "\\Lambda = \\frac{f_Y\\mid H_1}{f_Y\\mid H_0} = Ce^{-\\sum^\\infty_{i=1} \\frac{y_i^2}{2} \\frac{\\lambda_i}{\\tfrac{1}{2} N_0 (\\tfrac{1}{2}N_0 + \\lambda_i)} }," }, { "math_id": 204, "text": "\\mathcal{L} = \\ln(\\Lambda) = K -\\sum^\\infty_{i=1}\\tfrac{1}{2}y_i^2 \\frac{\\lambda_i}{\\frac{N_0}{2} \\left(\\frac{N_0}{2} + \\lambda_i\\right)}." }, { "math_id": 205, "text": "\\widehat{X}_i = \\frac{\\lambda_i}{\\frac{N_0}{2} \\left( \\frac{N_0}{2} + \\lambda_i \\right)}" }, { "math_id": 206, "text": "Y_i" }, { "math_id": 207, "text": "\\mathcal{L} = K + \\frac{1}{N_0} \\sum^\\infty_{i=1} Y_i \\widehat{X}_i." }, { "math_id": 208, "text": "f(t) = \\sum f_i \\Phi_i(t), g(t) = \\sum g_i \\Phi_i(t)," }, { "math_id": 209, "text": "f_i = \\int_0^T f(t) \\Phi_i(t)\\,dt, \\quad g_i = \\int_0^T g(t)\\Phi_i(t) \\, dt." 
}, { "math_id": 210, "text": "\\sum^\\infty_{i=1} f_i g_i = \\int^T_0 g(t)f(t)\\,dt." }, { "math_id": 211, "text": "\\widehat{X}(t\\mid T) = \\sum^\\infty_{i=1} \\widehat{X}_i \\Phi_i(t), \\quad \\mathcal{L} = K + \\frac{1}{N_0} \\int^T_0 Y(t) \\widehat{X}(t\\mid T) \\, dt." }, { "math_id": 212, "text": "\\widehat{X}(t\\mid T) = \\int^T_0 Q(t,s)Y(s)\\,ds." }, { "math_id": 213, "text": "\\int^T_0 Q(t,s)R_X(s,t)\\,ds + \\tfrac{N_0}{2} Q(t, \\lambda) = R_X(t, \\lambda), 0 < \\lambda < T, 0<t<T. " }, { "math_id": 214, "text": "\\widehat{X}(t\\mid t)" }, { "math_id": 215, "text": "Q(t,s) = h(t,s) + h(s, t) - \\int^T_0 h(\\lambda, t)h(s, \\lambda) \\, d\\lambda" } ]
https://en.wikipedia.org/wiki?curid=767253
7672589
Pythagorean quadruple
Four integers where the sum of the squares of three equals the square of the fourth A Pythagorean quadruple is a tuple of integers "a", "b", "c", and "d", such that "a"2 + "b"2 + "c"2 = "d"2. They are solutions of a Diophantine equation and often only positive integer values are considered. However, to provide a more complete geometric interpretation, the integer values can be allowed to be negative and zero (thus allowing Pythagorean triples to be included) with the only condition being that "d" &gt; 0. In this setting, a Pythagorean quadruple ("a", "b", "c", "d") defines a cuboid with integer side lengths |"a"|, |"b"|, and |"c"|, whose space diagonal has integer length "d"; with this interpretation, Pythagorean quadruples are thus also called "Pythagorean boxes". In this article we will assume, unless otherwise stated, that the values of a Pythagorean quadruple are all positive integers. Parametrization of primitive quadruples. A Pythagorean quadruple is called primitive if the greatest common divisor of its entries is 1. Every Pythagorean quadruple is an integer multiple of a primitive quadruple. The set of primitive Pythagorean quadruples for which "a" is odd can be generated by the formulas formula_0 where "m", "n", "p", "q" are non-negative integers with greatest common divisor 1 such that "m" + "n" + "p" + "q" is odd. Thus, all primitive Pythagorean quadruples are characterized by the identity formula_1 Alternate parametrization. All Pythagorean quadruples (including non-primitives, and with repetition, though "a", "b", and "c" do not appear in all possible orders) can be generated from two positive integers "a" and "b" as follows: If "a" and "b" have different parity, let "p" be any factor of "a"2 + "b"2 such that "p"2 &lt; "a"2 + "b"2. Then "c" = ("a"2 + "b"2 − "p"2)/(2"p") and "d" = ("a"2 + "b"2 + "p"2)/(2"p"). Note that "p" = "d" − "c". A similar method exists for generating all Pythagorean quadruples for which "a" and "b" are both even. Let "l" = "a"/2 and "m" = "b"/2, and let "n" be a factor of "l"2 + "m"2 such that "n"2 &lt; "l"2 + "m"2. Then "c" = ("l"2 + "m"2 − "n"2)/"n" and "d" = ("l"2 + "m"2 + "n"2)/"n". This method generates all Pythagorean quadruples exactly once each when "l" and "m" run through all pairs of natural numbers and "n" runs through all permissible values for each pair. No such method exists if both "a" and "b" are odd, in which case no solutions exist as can be seen by the parametrization in the previous section. Properties. The largest number that always divides the product "abcd" is 12. The quadruple with the minimal product is (1, 2, 2, 3). Given a Pythagorean quadruple formula_2 where formula_3, the value formula_4 can be defined as the norm of the quadruple, in that formula_5, and is analogous to the hypotenuse of a Pythagorean triple. Every odd positive number other than 1 and 5 can be the norm of a primitive Pythagorean quadruple formula_3 such that formula_6 are greater than zero and are coprime. All primitive Pythagorean quadruples with the odd numbers as norms up to 29 except 1 and 5 are given in the table below. Just as a Pythagorean triple generates a distinct right triangle, a Pythagorean quadruple generates a distinct Heronian triangle. If "a", "b", "c", "d" is a Pythagorean quadruple with formula_7, it will generate a Heronian triangle with sides x, y, z as follows: formula_8, formula_9, formula_10. It will have a semiperimeter formula_11, an area formula_12 and an inradius formula_13. The exradii will be: formula_14, formula_15, formula_16. The circumradius will be formula_17. The ordered sequence of areas of this class of Heronian triangles can be found at (sequence in the OEIS).
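The parametrizations above can be checked numerically. The following Python sketch (illustrative only; the function names and the specific test values are not part of the article) produces quadruples both from the primitive parametrization in ("m", "n", "p", "q") and from the alternate parametrization for "a" and "b" of different parity:

```python
def from_mnpq(m, n, p, q):
    """Quadruple (a, b, c, d) from the primitive parametrization; it is primitive
    when gcd(m, n, p, q) = 1 and m + n + p + q is odd."""
    a = m*m + n*n - p*p - q*q
    b = 2*(m*q + n*p)
    c = 2*(n*q - m*p)
    d = m*m + n*n + p*p + q*q
    assert a*a + b*b + c*c == d*d        # the identity always holds
    return a, b, c, d

def from_ab(a, b, p):
    """Quadruple from a, b of different parity and a factor p of a^2 + b^2
    with p^2 < a^2 + b^2 (the alternate parametrization)."""
    s = a*a + b*b
    assert (a + b) % 2 == 1 and s % p == 0 and p*p < s
    c = (s - p*p) // (2*p)
    d = (s + p*p) // (2*p)
    assert a*a + b*b + c*c == d*d
    return a, b, c, d

print(from_mnpq(1, 1, 0, 1))   # (1, 2, 2, 3)
print(from_ab(1, 2, 1))        # (1, 2, 2, 3)
print(from_ab(2, 3, 1))        # (2, 3, 6, 7)
```

Both parametrizations recover the quadruple (1, 2, 2, 3) of minimal product noted above.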
Relationship with quaternions and rational orthogonal matrices. A primitive Pythagorean quadruple ("a", "b", "c", "d") parametrized by ("m", "n", "p", "q") corresponds to the first column of the matrix representation "E"("α") of conjugation "α"(⋅)ᾱ (where ᾱ denotes the conjugate quaternion of "α") by the Hurwitz quaternion "α" = "m" + "ni" + "pj" + "qk" restricted to the subspace of quaternions spanned by "i", "j", "k", which is given by formula_18 where the columns are pairwise orthogonal and each has norm "d". Furthermore, we have that "E"("α")/"d" belongs to the special orthogonal group formula_19, and, in fact, "all" 3 × 3 rotation matrices with rational coefficients arise in this manner. Primitive Pythagorean quadruples with small norm. There are 31 primitive Pythagorean quadruples in which all entries are less than 30. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
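As a numerical check of the correspondence described above (an illustrative sketch using NumPy; not part of the original article), one can build "E"("α") from ("m", "n", "p", "q"), read the quadruple off its first column, and verify that the columns are pairwise orthogonal of norm "d", so that "E"("α")/"d" is a rational orthogonal matrix:

```python
import numpy as np

def conjugation_matrix(m, n, p, q):
    """The matrix E(alpha) for alpha = m + ni + pj + qk, acting on span{i, j, k}."""
    return np.array([
        [m*m + n*n - p*p - q*q, 2*(n*p - m*q),          2*(m*p + n*q)],
        [2*(m*q + n*p),         m*m - n*n + p*p - q*q,  2*(p*q - m*n)],
        [2*(n*q - m*p),         2*(m*n + p*q),          m*m - n*n - p*p + q*q],
    ])

m, n, p, q = 1, 1, 0, 1                 # parameters giving the quadruple (1, 2, 2, 3)
d = m*m + n*n + p*p + q*q
E = conjugation_matrix(m, n, p, q)
print(E[:, 0], d)                        # first column (a, b, c) = (1, 2, 2), norm d = 3
print(np.array_equal(E.T @ E, d*d * np.eye(3, dtype=int)))   # columns orthogonal, norm d
print(np.allclose((E / d) @ (E / d).T, np.eye(3)))           # E(alpha)/d is orthogonal
```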
[ { "math_id": 0, "text": "\\begin{align} a &= m^2+n^2-p^2-q^2, \\\\ b &= 2(mq+np), \\\\ c &= 2(nq-mp), \\\\ d &= m^2+n^2+p^2+q^2, \\end{align}" }, { "math_id": 1, "text": "(m^2 + n^2 + p^2 + q^2)^2 = (2mq + 2np)^2 + (2nq - 2mp)^2 + (m^2 + n^2 - p^2 - q^2)^2." }, { "math_id": 2, "text": "(a,b,c,d)" }, { "math_id": 3, "text": "d^2=a^2+b^2+c^2" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "d = \\sqrt{a^2+b^2+c^2}" }, { "math_id": 6, "text": "a, b, c" }, { "math_id": 7, "text": "a^2 + b^2 + c^2 = d^2" }, { "math_id": 8, "text": "x = d^2 - a^2" }, { "math_id": 9, "text": "y = d^2 - b^2" }, { "math_id": 10, "text": "z = d^2 - c^2" }, { "math_id": 11, "text": "s = d^2" }, { "math_id": 12, "text": "A = abcd" }, { "math_id": 13, "text": "r = abc/d" }, { "math_id": 14, "text": "r_x = bcd/a" }, { "math_id": 15, "text": "r_y = acd/b" }, { "math_id": 16, "text": "r_z = abd/c" }, { "math_id": 17, "text": "R=(d^2 - a^2)(d^2 - b^2)(d^2 - c^2)/(4abcd) = abcd(1/a^2 + 1/b^2 + 1/c^2 -1/d^2)/4" }, { "math_id": 18, "text": "E(\\alpha) = \\begin{pmatrix}\nm^2+n^2-p^2-q^2&2np-2mq &2mp+2nq \\\\\n2mq+2np &m^2-n^2+p^2-q^2&2pq-2mn \\\\\n2nq-2mp &2mn+2pq &m^2-n^2-p^2+q^2\\\\\n\\end{pmatrix}," }, { "math_id": 19, "text": "SO(3,\\mathbb{Q})" } ]
https://en.wikipedia.org/wiki?curid=7672589
76736170
Sensitivity theorem
Theorem about complexity measures of Boolean functions In computational complexity, the sensitivity theorem, proved by Hao Huang in 2019, states that the "sensitivity" of a Boolean function formula_0 is at least the square root of its "degree", thus settling a conjecture posed by Nisan and Szegedy in 1992. The proof is notably succinct, given that prior progress had been limited. Background. Several papers in the late 1980s and early 1990s showed that various decision tree complexity measures of Boolean functions are polynomially related, meaning that if formula_1 are two such measures then formula_2 for some constant formula_3. Nisan and Szegedy showed that degree and approximate degree are also polynomially related to all these measures. Their proof went via yet another complexity measure, "block sensitivity", which had been introduced by Nisan. Block sensitivity generalizes a more natural measure, (critical) sensitivity, which had appeared before. Nisan and Szegedy asked whether block sensitivity is polynomially bounded by sensitivity (the other direction is immediate since sensitivity is at most block sensitivity). This is equivalent to asking whether sensitivity is polynomially related to the various decision tree complexity measures, as well as to degree, approximate degree, and other complexity measures which have been shown to be polynomially related to these over the years. This became known as the sensitivity conjecture. Over the years, several special cases of the sensitivity conjecture were proven. The sensitivity theorem was finally proven in its entirety by Huang, using a reduction of Gotsman and Linial. Statement. Every Boolean function formula_0 can be expressed in a unique way as a multilinear polynomial. The "degree" of formula_4 is the degree of this unique polynomial, denoted formula_5. The "sensitivity" of the Boolean function formula_4 at the point formula_6 is the number of indices formula_7 such that formula_8, where formula_9 is obtained from formula_10 by flipping the formula_11'th coordinate. The sensitivity of formula_4 is the maximum sensitivity of formula_4 at any point formula_6, denoted formula_12. The sensitivity theorem states that formula_13 In the other direction, Tal, improving on an earlier bound of Nisan and Szegedy, showed that formula_14 The sensitivity theorem is tight for the AND-of-ORs function: formula_15 This function has degree formula_16 and sensitivity formula_17. Proof. Let formula_0 be a Boolean function of degree formula_18. Consider any "maxonomial" of formula_4, that is, a monomial of degree formula_18 in the unique multilinear polynomial representing formula_4. If we substitute an arbitrary value in the coordinates not mentioned in the monomial then we get a function formula_19 on formula_18 coordinates which has degree formula_18, and moreover, formula_20. If we prove the sensitivity theorem for formula_19 then it follows for formula_4. So from now on, we assume without loss of generality that formula_4 has degree formula_21. Define a new function formula_22 by formula_23 It can be shown that, since formula_4 has degree formula_21, the function formula_24 is unbalanced (meaning that formula_25), say formula_26. Consider the subgraph formula_27 of the hypercube (the graph on formula_28 in which two vertices are connected if they differ by a single coordinate) induced by formula_29. In order to prove the sensitivity theorem, it suffices to show that formula_27 has a vertex whose degree is at least formula_30.
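At this point it may help to check the quantities involved on a small example. The sketch below (illustrative only; not from the original article) computes the degree of the multilinear representation and the sensitivity of the 2-by-2 AND-of-ORs function from the statement above by brute force, confirming that its degree is 4 and its sensitivity is 2, so the bound of the theorem is tight there:

```python
from itertools import product

def and_of_ors(x):                    # (x11 or x12) and (x21 or x22), m = 2
    return int((x[0] | x[1]) & (x[2] | x[3]))

def sensitivity(f, n):
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

def degree(f, n):
    """Degree of the unique multilinear polynomial, via a Moebius transform."""
    deg = 0
    for S in product((0, 1), repeat=n):            # monomials as indicator vectors
        coeff = sum((-1) ** (sum(S) - sum(T)) * f(T)
                    for T in product((0, 1), repeat=n)
                    if all(t <= s for t, s in zip(T, S)))
        if coeff != 0:
            deg = max(deg, sum(S))
    return deg

print(degree(and_of_ors, 4), sensitivity(and_of_ors, 4))           # 4 2
print(sensitivity(and_of_ors, 4) >= degree(and_of_ors, 4) ** 0.5)  # True
```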
The reduction above is due to Gotsman and Linial. Huang constructs a "signing of the hypercube" in which the product of the signs along any square is formula_31. This means that there is a way to assign a sign to every edge of the hypercube so that this property is satisfied. The same signing had been found earlier by Ahmadi et al., who were interested in signings of graphs with few distinct eigenvalues. Let formula_32 be the signed adjacency matrix corresponding to the signing. The property that the product of the signs in every square is formula_31 implies that formula_33, and so half of the eigenvalues of formula_32 are formula_30 and half are formula_34. In particular, the eigenspace of formula_30 (which has dimension formula_35) intersects the space of vectors supported by formula_36 (which has dimension formula_37), implying that there is an eigenvector formula_38 of formula_32 with eigenvalue formula_30 which is supported on formula_36. (This is a simplification of Huang's original argument due to Shalev Ben-David.) Consider a point formula_39 maximizing formula_40. On the one hand, formula_41. On the other hand, formula_42 is at most the sum of absolute values of all neighbors of formula_10 in formula_36, which is at most formula_43. Hence formula_44. Constructing the signing. Huang constructed the signing recursively. When formula_45, we can take an arbitrary signing. Given a signing formula_46 of the formula_21-dimensional hypercube formula_47, we construct a signing of formula_48 as follows. Partition formula_48 into two copies of formula_47. Use formula_46 for one of them and formula_49 for the other, and assign all edges between the two copies the sign formula_50. The same signing can also be expressed directly. Let formula_51 be an edge of the hypercube. If formula_11 is the first coordinate on which formula_52 differ, we use the sign formula_53. Extensions. The sensitivity theorem can be equivalently restated as formula_54 Laplante et al. refined this to formula_55 where formula_56 is the maximum sensitivity of formula_4 at a point in formula_57. They showed furthermore that this bound is attained at two neighboring points of the hypercube. Aaronson, Ben-David, Kothari and Tal defined a new measure, the "spectral sensitivity" of formula_4, denoted formula_58. This is the largest eigenvalue of the adjacency matrix of the "sensitivity graph" of formula_4, which is the subgraph of the hypercube consisting of all sensitive edges (edges connecting two points formula_52 such that formula_59). They showed that Huang's proof can be decomposed into two steps: first, that formula_60; and second, that formula_61. Using this measure, they proved several tight relations between complexity measures of Boolean functions: formula_62 and formula_63. Here formula_64 is the deterministic query complexity and formula_65 is the quantum query complexity. Dafni et al. extended the notions of degree and sensitivity to Boolean functions on the symmetric group and on the perfect matching association scheme, and proved analogs of the sensitivity theorem for such functions. Their proofs use a reduction to Huang's sensitivity theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
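The recursive construction of the signing can be carried out explicitly and its key property verified. The sketch below (illustrative only) builds a signed adjacency matrix of the "n"-dimensional hypercube following the recursion described above (one copy keeps its signs, the other copy has them negated, and the edges between the copies get the sign 1) and checks that its square is "n" times the identity, so that its eigenvalues are ±√"n":

```python
import numpy as np

def signed_adjacency(n):
    """One valid choice of the recursive signing: A_1 = [[0, 1], [1, 0]],
    A_{k+1} = [[A_k, I], [I, -A_k]]."""
    A = np.array([[0, 1], [1, 0]])
    for _ in range(n - 1):
        I = np.eye(A.shape[0], dtype=int)
        A = np.block([[A, I], [I, -A]])
    return A

n = 4
A = signed_adjacency(n)
print(np.array_equal(A @ A, n * np.eye(2**n, dtype=int)))        # True: A^2 = n I
eigenvalues = sorted(np.linalg.eigvalsh(A))
print(np.allclose(eigenvalues,
                  [-n**0.5] * 2**(n - 1) + [n**0.5] * 2**(n - 1)))  # True
```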
[ { "math_id": 0, "text": "f\\colon \\{0,1\\}^n \\to \\{0,1\\}" }, { "math_id": 1, "text": "\\alpha(f),\\beta(f)" }, { "math_id": 2, "text": "\\alpha(f) \\leq C\\beta(f)^C" }, { "math_id": 3, "text": "C>0" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\deg(f)" }, { "math_id": 6, "text": "x \\in \\{0,1\\}^n" }, { "math_id": 7, "text": "i \\in [n]" }, { "math_id": 8, "text": "f(x^{\\oplus i}) \\neq f(x)" }, { "math_id": 9, "text": "x^{\\oplus i}" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "s(f)" }, { "math_id": 13, "text": "s(f) \\ge \\sqrt{\\deg(f)}." }, { "math_id": 14, "text": "s(f) \\leq \\deg(f)^2." }, { "math_id": 15, "text": "\\bigwedge_{i=1}^m \\bigvee_{j=1}^m x_{ij} " }, { "math_id": 16, "text": "m^2" }, { "math_id": 17, "text": "m" }, { "math_id": 18, "text": "d" }, { "math_id": 19, "text": "F" }, { "math_id": 20, "text": "s(f) \\geq s(F)" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "g\\colon \\{0,1\\}^n \\to \\{0,1\\}" }, { "math_id": 23, "text": "g(x_1,\\dots,x_n) = f \\oplus x_1 \\oplus \\cdots \\oplus x_n." }, { "math_id": 24, "text": "g" }, { "math_id": 25, "text": "|g^{-1}(0)| \\neq |g^{-1}(1)|" }, { "math_id": 26, "text": "|g^{-1}(1)| > 2^{n-1}" }, { "math_id": 27, "text": "G" }, { "math_id": 28, "text": "\\{0,1\\}^n" }, { "math_id": 29, "text": "S = g^{-1}(1)" }, { "math_id": 30, "text": "\\sqrt{n}" }, { "math_id": 31, "text": "-1" }, { "math_id": 32, "text": "A" }, { "math_id": 33, "text": "A^2=nI" }, { "math_id": 34, "text": "-\\sqrt{n}" }, { "math_id": 35, "text": "2^{n-1}" }, { "math_id": 36, "text": "S" }, { "math_id": 37, "text": ">2^{n-1}" }, { "math_id": 38, "text": "v" }, { "math_id": 39, "text": "x \\in S" }, { "math_id": 40, "text": "|v_x|" }, { "math_id": 41, "text": "Av = \\sqrt{n}v" }, { "math_id": 42, "text": "Av" }, { "math_id": 43, "text": "\\deg_G(x) \\cdot |v_x|" }, { "math_id": 44, "text": "\\deg_G(x) \\geq \\sqrt{n}" }, { "math_id": 45, "text": "n=1" }, { "math_id": 46, "text": "\\sigma_n" }, { "math_id": 47, "text": "Q_n" }, { "math_id": 48, "text": "Q_{n+1}" }, { "math_id": 49, "text": "-\\sigma_n" }, { "math_id": 50, "text": "1" }, { "math_id": 51, "text": "(x,y)" }, { "math_id": 52, "text": "x,y" }, { "math_id": 53, "text": "(-1)^{x_1 + \\cdots + x_{i-1}}" }, { "math_id": 54, "text": " \\deg(f) \\leq s(f)^2. " }, { "math_id": 55, "text": " \\deg(f) \\leq s_0(f)s_1(f), " }, { "math_id": 56, "text": "s_b(f)" }, { "math_id": 57, "text": "f^{-1}(b)" }, { "math_id": 58, "text": "\\lambda(f)" }, { "math_id": 59, "text": "f(x) \\neq f(y)" }, { "math_id": 60, "text": "\\deg(f) \\leq \\lambda(f)^2" }, { "math_id": 61, "text": "\\lambda(f) \\leq s(f)" }, { "math_id": 62, "text": "<math>\\deg(f) = O(Q(f)^2)</matH>" }, { "math_id": 63, "text": "D(f) = O(Q(f)^4)" }, { "math_id": 64, "text": "D(f)" }, { "math_id": 65, "text": "Q(f)" } ]
https://en.wikipedia.org/wiki?curid=76736170
76743052
CLRg property
In mathematics, the notion of the "common limit in the range" property, denoted the CLRg property, is a property of pairs of self-mappings used to unify, generalize, and extend contractive mapping theorems in fuzzy metric spaces, where the range of the mappings does not necessarily need to be a closed subspace of a non-empty set formula_0. Suppose formula_0 is a non-empty set, and formula_1 is a distance metric; thus, formula_2 is a metric space. Now suppose we have self mappings formula_3 These mappings are said to fulfil the CLRg property if formula_4 for some formula_5 Next, we give some examples that satisfy the CLRg property. Examples. Example 1. Suppose formula_6 is a usual metric space, with formula_7 Now, if the mappings formula_8 are defined respectively as formula_9 and formula_10 for all formula_11 Now, consider the sequence formula_12. We can see that formula_13 thus, the mappings formula_14 and formula_15 fulfil the CLRg property. Another example that sheds more light on the CLRg property is given below. Example 2. Let formula_6 be a usual metric space, with formula_7 Now, if the mappings formula_8 are defined respectively as formula_16 and formula_17 for all formula_11 Now, consider the sequence formula_18. We can easily see that formula_19 hence, the mappings formula_14 and formula_15 fulfil the CLRg property. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
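The two examples can be checked numerically. In the sketch below (illustrative only), the mappings of Example 1 are evaluated along the sequence formula_12 and those of Example 2 along formula_18; both image sequences are seen to approach the common limits 0 = "g"(0) and 2 = "g"(1), respectively, which lie in the range of "g":

```python
def f1(x): return x / 4        # Example 1: f(x) = x/4
def g1(x): return 3 * x / 4    # Example 1: g(x) = 3x/4

def f2(x): return x + 1        # Example 2: f(x) = x + 1
def g2(x): return 2 * x        # Example 2: g(x) = 2x

for k in (10, 100, 1000, 10_000):
    x, y = 1 / k, 1 + 1 / k
    print(f"{f1(x):.6f} {g1(x):.6f} -> {g1(0)}   {f2(y):.6f} {g2(y):.6f} -> {g2(1)}")
```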
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "(X, d)" }, { "math_id": 3, "text": "f,g : X \\to X." }, { "math_id": 4, "text": "\\lim_{k \\to \\infty} f x_{k} = \\lim_{k \\to \\infty} g x_{k} = gx," }, { "math_id": 5, "text": "x \\in X." }, { "math_id": 6, "text": "(X,d)" }, { "math_id": 7, "text": "X=[0,\\infty)." }, { "math_id": 8, "text": "f,g: X \\to X" }, { "math_id": 9, "text": "fx = \\frac{x}{4}" }, { "math_id": 10, "text": "gx = \\frac{3x}{4}" }, { "math_id": 11, "text": "x\\in X." }, { "math_id": 12, "text": "\\{x_k\\}=\\{1/k\\}" }, { "math_id": 13, "text": "\n\\lim_{k\\to \\infty}fx_{k} = \\lim_{k\\to \\infty}gx_{k} = g0 = 0,\n" }, { "math_id": 14, "text": "f" }, { "math_id": 15, "text": "g" }, { "math_id": 16, "text": "fx = x+1" }, { "math_id": 17, "text": "gx = 2x" }, { "math_id": 18, "text": "\\{x_k\\}=\\{1+1/k \\}" }, { "math_id": 19, "text": "\n\\lim_{k\\to \\infty}fx_{k} = \\lim_{k\\to \\infty}gx_{k} = g1 = 2,\n" } ]
https://en.wikipedia.org/wiki?curid=76743052
76753765
Chromatic symmetric function
Symmetric function invariant of graphs The chromatic symmetric function is a symmetric function invariant of graphs studied in algebraic graph theory, a branch of mathematics. It is the weight generating function for proper graph colorings, and was originally introduced by Richard Stanley as a generalization of the chromatic polynomial of a graph. Definition. For a finite graph formula_0 with vertex set formula_1, a "vertex coloring" is a function formula_2 where formula_3 is a set of colors. A vertex coloring is called "proper" if all adjacent vertices are assigned distinct colors (i.e., formula_4). The chromatic symmetric function denoted formula_5 is defined to be the weight generating function of proper vertex colorings of formula_6: formula_7 Examples. For formula_8 a partition, let formula_9 be the monomial symmetric polynomial associated to formula_8. Example 1: Complete Graphs. Consider the complete graph formula_10 on formula_11 vertices: in any proper coloring, all vertices must receive distinct colors, so, restricting to the first formula_11 colors, there are formula_12 proper colorings, which together contribute formula_13. Thus, formula_14 Example 2: A Path Graph. Consider the path graph formula_15 on formula_16 vertices: there are formula_17 proper colorings in which all three vertices receive distinct colors, together contributing formula_18, while the remaining proper colorings use only formula_19 of the colors, with the two endpoints sharing a color, and contribute each of the monomials formula_20 and formula_21 with formula_22 exactly once. Altogether, the chromatic symmetric function of formula_15 is then given by: formula_23 Open Problems. There are a number of outstanding questions regarding the chromatic symmetric function which have received substantial attention in the literature surrounding them. (3+1)-free Conjecture. For a partition formula_8, let formula_49 be the elementary symmetric function associated to formula_8. A partially ordered set formula_53 is called formula_54-free if it does not contain a subposet isomorphic to the direct sum of the formula_16 element chain and the formula_28 element chain. The incomparability graph formula_55 of a poset formula_53 is the graph with vertices given by the elements of formula_53 which includes an edge between two vertices if and only if their corresponding elements in formula_53 are incomparable. Conjecture (Stanley–Stembridge). If formula_6 is the incomparability graph of a formula_56-free poset, then formula_57 is formula_58-positive. A weaker positivity result is known for the case of expansions into the basis of Schur functions. Theorem (Gasharov). If formula_6 is the incomparability graph of a formula_56-free poset, then formula_57 is formula_51-positive. In the proof of the theorem above, there is a combinatorial formula for the coefficients of the Schur expansion given in terms of "formula_53-tableaux", which generalize semistandard Young tableaux by labelling the cells with elements of formula_53 instead. Generalizations. There are a number of generalizations of the chromatic symmetric function. For example, one refinement keeps track of the ascents of each proper coloring formula_59, where formula_60, giving a quasisymmetric refinement formula_61. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
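The two examples above can be reproduced by a brute-force computation. The sketch below (illustrative only; names chosen here for convenience) enumerates the proper colorings of the three-vertex path using three available colors and tallies the resulting monomials, recovering the monomial counts behind the expansion above:

```python
from itertools import product
from collections import Counter

def chromatic_symmetric_terms(edges, n_vertices, n_colors):
    """Monomials of X_G restricted to n_colors variables, as a Counter that maps
    an exponent vector (e_1, ..., e_{n_colors}) to its coefficient."""
    terms = Counter()
    for coloring in product(range(n_colors), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):      # proper coloring
            terms[tuple(coloring.count(c) for c in range(n_colors))] += 1
    return terms

path_edges = [(0, 1), (1, 2)]                     # the path on three vertices
terms = chromatic_symmetric_terms(path_edges, 3, 3)
print(terms[(1, 1, 1)])                           # 6: the coefficient of x_1 x_2 x_3
print(sorted(coeff for expo, coeff in terms.items() if expo != (1, 1, 1)))
# [1, 1, 1, 1, 1, 1]: one of each monomial x_i^2 x_j with i != j
```

Replacing the edge list with that of the complete graph on three vertices leaves only the exponent vector (1, 1, 1), with coefficient 6 = 3!, matching the first example.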
[ { "math_id": 0, "text": "G=(V,E)" }, { "math_id": 1, "text": "V=\\{v_1,v_2,\\ldots, v_n\\}" }, { "math_id": 2, "text": "\\kappa:V\\to C" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "\\{i,j\\}\\in E \\implies \\kappa(i)\\neq\\kappa(j)" }, { "math_id": 5, "text": "X_G(x_1,x_2,\\ldots)" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "X_G(x_1,x_2,\\ldots):=\\sum_{\\underset{\\text{proper}}{\\kappa:V\\to\\N}}x_{\\kappa(v_1)}x_{\\kappa(v_2)}\\cdots x_{\\kappa(v_n)}" }, { "math_id": 8, "text": "\\lambda" }, { "math_id": 9, "text": "m_\\lambda" }, { "math_id": 10, "text": "K_n" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "n!" }, { "math_id": 13, "text": "n!x_1\\cdots x_n" }, { "math_id": 14, "text": "X_{K_n}(x_1,\\ldots,x_n)=n!x_1\\cdots x_n = n!m_{(1,\\ldots,1)}" }, { "math_id": 15, "text": "P_3" }, { "math_id": 16, "text": "3" }, { "math_id": 17, "text": "3!" }, { "math_id": 18, "text": "6x_1x_2x_3" }, { "math_id": 19, "text": "2" }, { "math_id": 20, "text": "x_i^2x_j" }, { "math_id": 21, "text": "x_ix_j^2" }, { "math_id": 22, "text": "i\\neq j" }, { "math_id": 23, "text": "X_{P_3}(x_1,x_2,x_3) = 6x_1x_2x_3 + x_1^2x_2 + x_1x_2^2 + x_1^2x_3 + x_1x_3^2 + x_2^2x_3 + x_2x_3^2 = 6m_{(1,1,1)} + m_{(1,2)}" }, { "math_id": 24, "text": "\\chi_G" }, { "math_id": 25, "text": "\\chi_G(k)" }, { "math_id": 26, "text": "k" }, { "math_id": 27, "text": "x_i" }, { "math_id": 28, "text": "1" }, { "math_id": 29, "text": "0" }, { "math_id": 30, "text": "X_G(1^k)=X_G(1,\\ldots,1,0,0,\\ldots)=\\chi_G(k)" }, { "math_id": 31, "text": "G\\amalg H" }, { "math_id": 32, "text": "H" }, { "math_id": 33, "text": "X_{G\\amalg H}=X_G\\cdot X_H" }, { "math_id": 34, "text": "\\pi" }, { "math_id": 35, "text": "V" }, { "math_id": 36, "text": "\\text{type}(\\pi)" }, { "math_id": 37, "text": "\\lambda\\vdash n" }, { "math_id": 38, "text": "z_\\lambda" }, { "math_id": 39, "text": "\\text{type}(\\pi)=\\lambda=\\langle1^{r_1}2^{r2}\\ldots\\rangle" }, { "math_id": 40, "text": "X_G" }, { "math_id": 41, "text": "\\tilde{m}_\\lambda:=r_1!r_2!\\cdots m_\\lambda" }, { "math_id": 42, "text": "X_G=\\sum_{\\lambda\\vdash n}z_\\lambda \\tilde{m}_\\lambda" }, { "math_id": 43, "text": "p_\\lambda" }, { "math_id": 44, "text": "S\\subseteq E" }, { "math_id": 45, "text": "\\lambda(S)" }, { "math_id": 46, "text": "S" }, { "math_id": 47, "text": "X_G=\\sum_{S\\subseteq E}(-1)^{|S|}p_{\\lambda(S)}" }, { "math_id": 48, "text": "X_G=\\sum_{\\lambda\\vdash n}c_\\lambda e_\\lambda" }, { "math_id": 49, "text": "e_\\lambda" }, { "math_id": 50, "text": "\\text{sink}(G,s)" }, { "math_id": 51, "text": "s" }, { "math_id": 52, "text": "\\text{sink}(G,s)=\\sum_{\\underset{l(\\lambda)=s}{\\lambda\\vdash n}}c_\\lambda" }, { "math_id": 53, "text": "P" }, { "math_id": 54, "text": "(3+1)" }, { "math_id": 55, "text": "\\text{inc}(P)" }, { "math_id": 56, "text": "(3+1)" }, { "math_id": 57, "text": "X_G" }, { "math_id": 58, "text": "e" }, { "math_id": 59, "text": "\\kappa" }, { "math_id": 60, "text": "\\text{asc}(\\kappa)=\\{\\{i,j\\}\\in E:i<j \\text{ and } \\kappa(i)<\\kappa(j)\\}" }, { "math_id": 61, "text": "X_G(x_1,x_2,\\ldots;t):=\\sum_{\\underset{\\text{proper}}{\\kappa:V\\to \\N}}t^{|asc(\\kappa)|}x_{\\kappa(v_1)}\\cdots x_{\\kappa(v_n)}" } ]
https://en.wikipedia.org/wiki?curid=76753765
767637
System F
Typed lambda calculus System F (also polymorphic lambda calculus or second-order lambda calculus) is a typed lambda calculus that introduces, to simply typed lambda calculus, a mechanism of universal quantification over types. System F formalizes parametric polymorphism in programming languages, thus forming a theoretical basis for languages such as Haskell and ML. It was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds. Whereas simply typed lambda calculus has variables ranging over terms, and binders for them, System F additionally has variables ranging over "types", and binders for them. As an example, the fact that the identity function can have any type of the form "A" → "A" would be formalized in System F as the judgement formula_0 where formula_1 is a type variable. The upper-case formula_2 is traditionally used to denote type-level functions, as opposed to the lower-case formula_3 which is used for value-level functions. (The superscripted formula_1 means that the bound "x" is of type formula_1; the expression after the colon is the type of the lambda expression preceding it.) As a term rewriting system, System F is strongly normalizing. However, type inference in System F (without explicit type annotations) is undecidable. Under the Curry–Howard isomorphism, System F corresponds to the fragment of second-order intuitionistic logic that uses only universal quantification. System F can be seen as part of the lambda cube, together with even more expressive typed lambda calculi, including those with dependent types. According to Girard, the "F" in "System F" was picked by chance. Typing rules. The typing rules of System F are those of simply typed lambda calculus with the addition of two rules: a rule of universal application, by which a term of type ∀"α"."σ" applied to a type "τ" receives the type obtained by substituting "τ" for "α" in "σ"; and a rule of universal abstraction, by which a term of type "σ", typed in a context in which the type variable "α" is bound, yields a formula_2-abstraction of type ∀"α"."σ". Here formula_4 are types, formula_1 is a type variable, and formula_5 in the context indicates that formula_1 is bound. The first rule is that of application, and the second is that of abstraction. Logic and predicates. The formula_6 type is defined as: formula_7, where formula_1 is a type variable. This means: formula_6 is the type of all functions which take as input a type α and two expressions of type α, and produce as output an expression of type α (note that we consider formula_8 to be right-associative.) The following two definitions for the boolean values formula_9 and formula_10 are used, extending the definition of Church booleans: formula_11 formula_12 Then, with these two formula_3-terms, we can define some logic operators (which are of type formula_15): formula_16 Note that in the definitions above, formula_6 is a type argument to formula_17, specifying that the other two parameters that are given to formula_17 are of type formula_6. As in Church encodings, there is no need for an IFTHENELSE function as one can just use raw formula_6-typed terms as decision functions. However, if one is requested: formula_18 will do. A "predicate" is a function which returns a formula_6-typed value. The most fundamental predicate is ISZERO which returns formula_9 if and only if its argument is the Church numeral 0: formula_19 System F structures. System F allows recursive constructions to be embedded in a natural manner, related to that in Martin-Löf's type theory. Abstract structures (S) are created using "constructors". These are functions typed as: formula_20. Recursivity is manifested when S itself appears within one of the types formula_21.
If you have m of these constructors, you can define the type of S as: formula_22 For instance, the natural numbers can be defined as an inductive datatype N with constructors formula_23 The System F type corresponding to this structure is formula_24. The terms of this type comprise a typed version of the Church numerals, the first few of which are: formula_25 If we reverse the order of the curried arguments ("i.e.," formula_26), then the Church numeral for n is a function that takes a function f as argument and returns the nth power of f. That is to say, a Church numeral is a higher-order function – it takes a single-argument function f, and returns another single-argument function. Use in programming languages. The version of System F used in this article is an explicitly typed, or Church-style, calculus. The typing information contained in λ-terms makes type-checking straightforward. Joe Wells (1994) settled an "embarrassing open problem" by proving that type checking is undecidable for a Curry-style variant of System F, that is, one that lacks explicit typing annotations. Wells's result implies that type inference for System F is also undecidable. A restriction of System F known as "Hindley–Milner", or simply "HM", does have an easy type inference algorithm and is used for many statically typed functional programming languages such as Haskell 98 and the ML family. Over time, as the restrictions of HM-style type systems have become apparent, languages have steadily moved to more expressive logics for their type systems. GHC, a Haskell compiler, goes beyond HM (as of 2008) and uses System F extended with non-syntactic type equality; non-HM features in OCaml's type system include GADT. The Girard-Reynolds Isomorphism. In second-order intuitionistic logic, the second-order polymorphic lambda calculus (F2) was discovered by Girard (1972) and independently by Reynolds (1974). Girard proved the "Representation Theorem": that in second-order intuitionistic predicate logic (P2), functions from the natural numbers to the natural numbers that can be proved total, form a projection from P2 into F2. Reynolds proved the "Abstraction Theorem": that every term in F2 satisfies a logical relation, which can be embedded into the logical relations of P2. Reynolds proved that a Girard projection followed by a Reynolds embedding form the identity, i.e., the Girard-Reynolds Isomorphism. System Fω. While System F corresponds to the first axis of Barendregt's lambda cube, System Fω or the higher-order polymorphic lambda calculus combines the first axis (polymorphism) with the second axis (type operators); it is a different, more complex system. System Fω can be defined inductively on a family of systems, where induction is based on the kinds permitted in each system: formula_27 permits the kind formula_28 (the kind of types) together with kinds of the form formula_29, where formula_30 and formula_31 (the kind of functions from types to types, where the argument type is of a lower order). In the limit, we can define system formula_32 to be formula_33 That is, Fω is the system which allows functions from types to types where the argument (and result) may be of any order. Note that although Fω places no restrictions on the "order" of the arguments in these mappings, it does restrict the "universe" of the arguments for these mappings: they must be types rather than values. System Fω does not permit mappings from values to types (dependent types), though it does permit mappings from values to values (formula_3 abstraction), mappings from types to values (formula_2 abstraction), and mappings from types to types (formula_3 abstraction at the level of types). System F&lt;:. System F&lt;:, pronounced "F-sub", is an extension of system F with subtyping.
System F&lt;: has been of central importance to programming language theory since the 1980s because the core of functional programming languages, like those in the ML family, supports both parametric polymorphism and record subtyping, which can be expressed in System F&lt;:. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
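As a small worked illustration of the encodings in the section on logic and predicates, the term AND formula_9 formula_10 reduces to formula_10; the steps below use the definitions given earlier (a routine check, not part of the original article):

```latex
\begin{aligned}
\mathrm{AND}\;\mathbf{T}\;\mathbf{F}
  &= \bigl(\lambda x^{\mathsf{Boolean}}\,\lambda y^{\mathsf{Boolean}}.\,
           x\,\mathsf{Boolean}\,y\,\mathbf{F}\bigr)\,\mathbf{T}\,\mathbf{F}
   \;\to\; \mathbf{T}\,\mathsf{Boolean}\,\mathbf{F}\,\mathbf{F} \\
  &= \bigl(\Lambda\alpha.\,\lambda x^{\alpha}\,\lambda y^{\alpha}.\,x\bigr)\,
     \mathsf{Boolean}\,\mathbf{F}\,\mathbf{F}
   \;\to\; \bigl(\lambda x^{\mathsf{Boolean}}\,\lambda y^{\mathsf{Boolean}}.\,x\bigr)\,
     \mathbf{F}\,\mathbf{F}
   \;\to\; \mathbf{F}
\end{aligned}
```

The first step is β-reduction on the two term arguments of AND, the second instantiates the type variable of formula_9 at formula_6, and the remaining β-reductions return the first of the two arguments, as expected of formula_9.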
[ { "math_id": 0, "text": "\\vdash \\Lambda\\alpha. \\lambda x^\\alpha.x: \\forall\\alpha.\\alpha \\to \\alpha" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\Lambda" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "\\sigma, \\tau" }, { "math_id": 5, "text": "\\alpha~\\text{type}" }, { "math_id": 6, "text": "\\mathsf{Boolean}" }, { "math_id": 7, "text": "\\forall\\alpha.\\alpha \\to \\alpha \\to \\alpha" }, { "math_id": 8, "text": "\\to" }, { "math_id": 9, "text": "\\mathbf{T}" }, { "math_id": 10, "text": "\\mathbf{F}" }, { "math_id": 11, "text": "\\mathbf{T} = \\Lambda\\alpha{.}\\lambda x^{\\alpha} \\lambda y^\\alpha{.}x" }, { "math_id": 12, "text": "\\mathbf{F} = \\Lambda\\alpha{.}\\lambda x^{\\alpha} \\lambda y^{\\alpha}{.}y" }, { "math_id": 13, "text": " \\mathbf{T}" }, { "math_id": 14, "text": " \\mathbf{F}" }, { "math_id": 15, "text": " \\mathsf{Boolean} \\rightarrow \\mathsf{Boolean} \\rightarrow \\mathsf{Boolean}" }, { "math_id": 16, "text": "\\begin{align}\n\\mathrm{AND} &= \\lambda x^{\\mathsf{Boolean}} \\lambda y^{\\mathsf{Boolean}}{.} x \\, \\mathsf{Boolean} \\, y\\, \\mathbf{F}\\\\\n\\mathrm{OR} &= \\lambda x^{\\mathsf{Boolean}} \\lambda y^{\\mathsf{Boolean}}{.} x \\, \\mathsf{Boolean} \\, \\mathbf{T}\\, y\\\\\n\\mathrm{NOT} &= \\lambda x^{\\mathsf{Boolean}}{.} x \\, \\mathsf{Boolean} \\, \\mathbf{F}\\, \\mathbf{T} \n\\end{align}" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "\\mathrm{IFTHENELSE} = \\Lambda \\alpha.\\lambda x^{\\mathsf{Boolean}}\\lambda y^{\\alpha}\\lambda z^{\\alpha}. x \\alpha y z " }, { "math_id": 19, "text": "\\mathrm{ISZERO} = \\lambda n^{\\forall \\alpha. (\\alpha \\rightarrow \\alpha) \\rightarrow \\alpha \\rightarrow \\alpha}{.}n \\, \\mathsf{Boolean} \\, (\\lambda x^{\\mathsf{Boolean}}{.}\\mathbf{F})\\, \\mathbf{T}" }, { "math_id": 20, "text": "K_1\\rightarrow K_2\\rightarrow\\dots\\rightarrow S" }, { "math_id": 21, "text": "K_i" }, { "math_id": 22, "text": "\\forall \\alpha.(K_1^1[\\alpha/S]\\rightarrow\\dots\\rightarrow \\alpha)\\dots\\rightarrow(K_1^m[\\alpha/S]\\rightarrow\\dots\\rightarrow \\alpha)\\rightarrow \\alpha" }, { "math_id": 23, "text": "\\begin{align}\n\\mathit{zero} &: \\mathrm{N}\\\\\n\\mathit{succ} &: \\mathrm{N} \\rightarrow \\mathrm{N}\n\\end{align}" }, { "math_id": 24, "text": "\\forall \\alpha. \\alpha \\to (\\alpha \\to \\alpha) \\to \\alpha" }, { "math_id": 25, "text": "\\begin{align}\n0 &:= \\Lambda \\alpha . \\lambda x^\\alpha . \\lambda f^{\\alpha\\to\\alpha} . x\\\\\n1 &:= \\Lambda \\alpha . \\lambda x^\\alpha . \\lambda f^{\\alpha\\to\\alpha} . f x\\\\\n2 &:= \\Lambda \\alpha . \\lambda x^\\alpha . \\lambda f^{\\alpha\\to\\alpha} . f (f x)\\\\\n3 &:= \\Lambda \\alpha . \\lambda x^\\alpha . \\lambda f^{\\alpha\\to\\alpha} . f (f (f x))\n\\end{align}" }, { "math_id": 26, "text": "\\forall \\alpha. (\\alpha \\rightarrow \\alpha) \\rightarrow \\alpha \\rightarrow \\alpha" }, { "math_id": 27, "text": "F_n" }, { "math_id": 28, "text": "\\star" }, { "math_id": 29, "text": "J\\Rightarrow K" }, { "math_id": 30, "text": "J\\in F_{n-1}" }, { "math_id": 31, "text": "K\\in F_n" }, { "math_id": 32, "text": "F_\\omega" }, { "math_id": 33, "text": "F_\\omega = \\underset{1 \\leq i}{\\bigcup} F_i" } ]
https://en.wikipedia.org/wiki?curid=767637
7676542
Dilatometer
Instrument measuring volume changes A dilatometer is a scientific instrument that measures volume changes caused by a physical or chemical process. A familiar application of a dilatometer is the mercury-in-glass thermometer, in which the change in volume of the liquid column is read from a graduated scale. Because mercury has a fairly constant rate of expansion over ambient temperature ranges, the volume changes are directly related to temperature. Applications. Dilatometers have been used in the fabrication of metallic alloys, study of martensite transformation, compressed and sintered refractory compounds, glasses, ceramic products, composite materials, and plastics. Dilatometry is also used to monitor the progress of chemical reactions, particularly those displaying a substantial molar volume change (e.g., polymerisation). A specific example is the rate of phase changes. In food science, dilatometers are used to measure the solid fat index of food oils and butter. Another common application of a dilatometer is the measurement of thermal expansion. Thermal expansivity is an important engineering parameter, and is defined as: formula_0 Types. There are a number of dilatometer types. A simple setup, for measurements in a temperature range from 0 to 100 °C, uses water that is heated up and flows through or over the sample. If the linear coefficient of expansion of a metal is to be measured, hot water will run through a pipe made from the metal. The pipe warms up to the temperature of the water and the relative expansion can be determined as a function of the water temperature. For the measurement of the volumetric expansion of liquids one takes a large glass container filled with water. An expansion tank (a glass container with an accurate volume scale) holds the sample liquid and is placed in the water. If one heats the water up, the sample liquid expands and the volume change is read. However, the expansion of the sample container must also be taken into consideration. The expansion and contraction of gases cannot be measured using a dilatometer, since pressure plays a role here. For such measurements a gas thermometer is more suitable. Dilatometers often include a mechanism for controlling temperature. This may be a furnace for measurements at elevated temperatures (up to 2000 °C), or a cryostat for measurements at temperatures below room temperature. Metallurgical applications often involve sophisticated temperature controls capable of applying precise temperature-time profiles for heating and quenching the sample.
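The defining relation formula_0 can be applied directly to measured data. The following sketch (illustrative; the numbers are invented sample readings, not data from any particular instrument) estimates a volumetric expansion coefficient from two volume–temperature readings by a finite-difference approximation:

```python
def volumetric_expansion_coefficient(V1, T1, V2, T2):
    """Finite-difference estimate of alpha = (1/V) dV/dT over the interval [T1, T2]."""
    dV_dT = (V2 - V1) / (T2 - T1)
    V_mean = 0.5 * (V1 + V2)
    return dV_dT / V_mean

# Invented example: a sample expanding from 100.00 cm^3 at 20 °C to 100.36 cm^3 at 80 °C.
alpha = volumetric_expansion_coefficient(100.00, 20.0, 100.36, 80.0)
print(f"alpha ≈ {alpha:.2e} per kelvin")   # ≈ 6.0e-05 per kelvin
```

For a rod measured the same way from length readings one obtains the linear coefficient; for an isotropic solid the volumetric coefficient is approximately three times the linear one.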
[ { "math_id": 0, "text": "\\alpha = \\frac{1}{V} \\biggl(\\frac{\\partial V}{\\partial T} \\biggr)_{p,N}\\ " } ]
https://en.wikipedia.org/wiki?curid=7676542
7676971
Local Langlands conjectures
In mathematics, the local Langlands conjectures, introduced by Robert Langlands (1967, 1970), are part of the Langlands program. They describe a correspondence between the complex representations of a reductive algebraic group "G" over a local field "F", and representations of the Langlands group of "F" into the "L"-group of "G". This correspondence is not a bijection in general. The conjectures can be thought of as a generalization of local class field theory from abelian Galois groups to non-abelian Galois groups. Local Langlands conjectures for GL1. The local Langlands conjectures for GL1("K") follow from (and are essentially equivalent to) local class field theory. More precisely the Artin map gives an isomorphism from the group GL1("K")= "K"* to the abelianization of the Weil group. In particular irreducible smooth representations of GL1("K") are 1-dimensional as the group is abelian, so can be identified with homomorphisms of the Weil group to GL1(C). This gives the Langlands correspondence between homomorphisms of the Weil group to GL1(C) and irreducible smooth representations of GL1("K"). Representations of the Weil group. Representations of the Weil group do not quite correspond to irreducible smooth representations of general linear groups. To get a bijection, one has to slightly modify the notion of a representation of the Weil group, to something called a Weil–Deligne representation. This consists of a representation of the Weil group on a vector space "V" together with a nilpotent endomorphism "N" of "V" such that "wNw"−1=||"w"||"N", or equivalently a representation of the Weil–Deligne group. In addition the representation of the Weil group should have an open kernel, and should be (Frobenius) semisimple. For every Frobenius semisimple complex "n"-dimensional Weil–Deligne representation ρ of the Weil group of "F" there is an L-function "L"("s",ρ) and a local ε-factor ε("s",ρ,ψ) (depending on a character ψ of "F"). Representations of GL"n"("F"). The representations of GL"n"("F") appearing in the local Langlands correspondence are smooth irreducible complex representations. Smooth irreducible complex representations are automatically admissible. The Bernstein–Zelevinsky classification reduces the classification of irreducible smooth representations to cuspidal representations. For every irreducible admissible complex representation π there is an L-function "L"("s",π) and a local ε-factor ε("s",π,ψ) (depending on a character ψ of "F"). More generally, if there are two irreducible admissible representations π and π' of general linear groups there are local Rankin–Selberg convolution L-functions "L"("s",π×π') and ε-factors ε("s",π×π',ψ). described the irreducible admissible representations of general linear groups over local fields. Local Langlands conjectures for GL2. The local Langlands conjecture for GL2 of a local field says that there is a (unique) bijection π from 2-dimensional semisimple Weil-Deligne representations of the Weil group to irreducible smooth representations of GL2("F") that preserves "L"-functions, ε-factors, and commutes with twisting by characters of "F"*. verified the local Langlands conjectures for GL2 in the case when the residue field does not have characteristic 2. In this case the representations of the Weil group are all of cyclic or dihedral type. 
classified the smooth irreducible representations of GL2("F") when "F" has odd residue characteristic (see also ), and claimed incorrectly that the classification for even residue characteristic differs only insignificantly from the odd residue characteristic case. pointed out that when the residue field has characteristic 2, there are some extra exceptional 2-dimensional representations of the Weil group whose image in PGL2(C) is of tetrahedral or octahedral type. (For global Langlands conjectures, 2-dimensional representations can also be of icosahedral type, but this cannot happen in the local case as the Galois groups are solvable.) proved the local Langlands conjectures for the general linear group GL2("K") over the 2-adic numbers, and over local fields containing a cube root of unity. Kutzko (1980, 1980b) proved the local Langlands conjectures for the general linear group GL2("K") over all local fields. and gave expositions of the proof. Local Langlands conjectures for GL"n". The local Langlands conjectures for general linear groups state that there are unique bijections π ↔ ρπ from equivalence classes of irreducible admissible representations π of GL"n"("F") to equivalence classes of continuous Frobenius semisimple complex "n"-dimensional Weil–Deligne representations ρπ of the Weil group of "F", that preserve "L"-functions and ε-factors of pairs of representations, and coincide with the Artin map for 1-dimensional representations. In other words, proved the local Langlands conjectures for the general linear group GL"n"("K") for positive characteristic local fields "K". gave an exposition of their work. proved the local Langlands conjectures for the general linear group GL"n"("K") for characteristic 0 local fields "K". gave another proof. and gave expositions of their work. Local Langlands conjectures for other groups. and discuss the Langlands conjectures for more general groups. The Langlands conjectures for arbitrary reductive groups "G" are more complicated to state than the ones for general linear groups, and it is unclear what the best way of stating them should be. Roughly speaking, admissible representations of a reductive group are grouped into disjoint finite sets called "L"-packets, which should correspond to some classes of homomorphisms, called "L"-parameters, from the local Langlands group to the "L"-group of "G". Some earlier versions used the Weil−Deligne group or the Weil group instead of the local Langlands group, which gives a slightly weaker form of the conjecture. proved the Langlands conjectures for groups over the archimedean local fields R and C by giving the Langlands classification of their irreducible admissible representations (up to infinitesimal equivalence), or, equivalently, of their irreducible formula_0-modules. proved the local Langlands conjectures for the symplectic similitude group GSp(4) and used that in to deduce it for the symplectic group Sp(4).
[ { "math_id": 0, "text": "(\\mathfrak{g},K)" } ]
https://en.wikipedia.org/wiki?curid=7676971
76769820
Genetic map function
In genetics, mapping functions are used to model the relationship between the map distance between markers (measured in map units or centimorgans) and the recombination frequency between them. One utility of this is that it allows values to be obtained for genetic distances, which are typically not directly estimable, from recombination fractions, which typically are. The simplest mapping function is the Morgan Mapping Function, eponymously devised by Thomas Hunt Morgan. Other well-known mapping functions include the Haldane Mapping Function introduced by J. B. S. Haldane in 1919, and the Kosambi Mapping Function introduced by Damodar Dharmananda Kosambi in 1944. Few mapping functions are used in practice other than Haldane and Kosambi. The main difference between them is in how crossover interference is incorporated. Morgan Mapping Function. Where "d" is the distance in map units, the Morgan Mapping Function states that the recombination frequency "r" can be expressed as formula_0. This assumes that at most one crossover occurs in an interval between two loci, and that the probability of the occurrence of this crossover is proportional to the map length of the interval. Where "d" is the distance in map units, the recombination frequency "r" can be expressed as: formula_1 The equation only holds when formula_2 as, otherwise, the recombination frequency would exceed 50%. Therefore, the function cannot approximate recombination frequencies beyond short distances. Haldane Mapping Function. Overview. Two properties of the Haldane Mapping Function are that it limits the recombination frequency up to, but not beyond, 50%, and that it represents a linear relationship between the frequency of recombination and map distance up to recombination frequencies of 10%. It also assumes that crossovers occur at random positions and that they do so independently of one another. It therefore assumes that no crossover interference takes place; using this assumption allows Haldane to model the mapping function using a Poisson distribution. Formula. formula_3 Inverse. formula_4 Kosambi Mapping Function. Overview. The Kosambi mapping function was introduced to account for the impact of crossover interference on recombination frequency. It introduces a parameter C, representing the coefficient of coincidence, and sets it equal to 2r. For loci which are strongly linked, interference is strong; otherwise, interference decreases towards zero. Interference declines according to the linear function i = 1 - 2r. Formula. formula_5 Inverse. formula_6 Comparison and application. Below 10% recombination frequency, there is little mathematical difference between the different mapping functions, and the relationship between map distance and recombination frequency is linear (that is, 1 map unit = 1% recombination frequency). When genome-wide SNP sampling and mapping data are present, the difference between the functions is negligible outside of regions of high recombination, such as recombination hotspots or ends of chromosomes. While many mapping functions now exist, in practice functions other than Haldane and Kosambi are rarely used. More specifically, the Haldane function is preferred when the distance between markers is relatively small, whereas the Kosambi function is preferred when the distances between markers are larger and crossover interference needs to be accounted for. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
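A short illustrative sketch (not from the source) of the Haldane and Kosambi mapping functions and their inverses, following the formulas quoted above; here distances are taken in Morgans (1 Morgan = 100 map units), which is an interpretive assumption about the units in those formulas.

```python
import math

# Haldane and Kosambi mapping functions: d is map distance (Morgans),
# r is recombination frequency (0 <= r < 0.5).

def haldane_r(d):
    return 0.5 * (1.0 - math.exp(-2.0 * d))

def haldane_d(r):
    return -0.5 * math.log(1.0 - 2.0 * r)

def kosambi_r(d):
    return 0.5 * math.tanh(2.0 * d)

def kosambi_d(r):
    return 0.25 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

# The inverses undo the forward functions (round-trip check).
assert abs(haldane_d(haldane_r(0.2)) - 0.2) < 1e-12
assert abs(kosambi_d(kosambi_r(0.2)) - 0.2) < 1e-12

for d in (0.05, 0.10, 0.25, 0.50):   # distances in Morgans
    print(f"d = {d:.2f} M: Haldane r = {haldane_r(d):.4f}, "
          f"Kosambi r = {kosambi_r(d):.4f}")
# At small distances both functions are close to r = d; at larger distances
# they diverge, with Kosambi giving the higher recombination frequency.
```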
[ { "math_id": 0, "text": "\\ r=d" }, { "math_id": 1, "text": "\\ r = \\frac{1}{2} [1-(1-2d)] = d" }, { "math_id": 2, "text": "\\frac{1}{2} \\geq d \\geq 0" }, { "math_id": 3, "text": "\\ r = \\frac{1}{2} (1-e^{-2d})" }, { "math_id": 4, "text": "\\ d = -\\frac{1}{2} ln (1-2r)" }, { "math_id": 5, "text": "\\ r = \\frac{1}{2}\\tanh(2d) = \\frac{1}{2}\\frac{e^{4d}-1}{e^{4d}+1}" }, { "math_id": 6, "text": "\\ d = \\frac{1}{2} \\tanh^{-1} (2r) = \\frac{1}{4}\\ln(\\frac{1+2r}{1-2r})" } ]
https://en.wikipedia.org/wiki?curid=76769820
76773020
Online matrix-vector multiplication problem
Problem in computational complexity theory &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: Is there an algorithm for solving the OMv problem in time formula_0, for some constant formula_1? In computational complexity theory, the online matrix-vector multiplication problem (OMv) asks an online algorithm to return, at each round, the product of an formula_2 matrix and a newly-arrived formula_3-dimensional vector. OMv is conjectured to require roughly cubic time. This conjectured hardness implies lower bounds on the time needed to solve various dynamic problems and is of particular interest in fine-grained complexity. Definition. In OMv, an algorithm is given an integer formula_3 and an formula_2 Boolean matrix formula_4. The algorithm then runs for formula_3 rounds, and at each round formula_5 receives an formula_3-dimensional Boolean vector formula_6 and must return the product formula_7 (before continuing to round formula_8). An algorithm is said to solve OMv if, with probability at least formula_9 over the randomness of the algorithm, it returns the product formula_7 at every round formula_5. Variants of OMv. The online vector-matrix-vector problem (OuMv) is a variant of OMv where the algorithm receives, at each round formula_5, two Boolean vectors formula_10 and formula_6, and returns the product formula_11. This version has the benefit of returning a Boolean value at each round instead of an formula_3-dimensional Boolean vector. The hardness of OuMv is implied by the hardness of OMv. More heavily parameterized variants of OMv are also used, where the matrix formula_4 is not necessarily square and where the dimension of each vector formula_6 is not necessarily equal to the number of rounds. Conjectured hardness. In 2015, Henzinger, Krinninger, Nanongkai, and Saranurak conjectured that OMv cannot be solved in "truly subcubic" time. Formally, they presented the following conjecture: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;For any constant formula_1, there is no formula_0-time algorithm that solves OMv with probability at least formula_9. Algorithms for solving OMv. OMv can be solved in formula_12 time by a naive algorithm that, in each of the formula_3 rounds, multiplies the matrix formula_4 and the new vector formula_6 in formula_13 time. The fastest known algorithm for OMv is implied by a result of Williams and runs in time formula_14. Implications of conjectured hardness. The OMv conjecture implies lower bounds on the time needed to solve a large class of dynamic graph problems, including reachability and connectivity, shortest path, and subgraph detection. For many of these problems, the implied lower bounds have matching upper bounds. While some of these lower bounds also followed from previous conjectures (e.g., 3SUM), many of the lower bounds that follow from OMv are stronger or new. Later work showed that the OMv conjecture implies lower bounds on the time needed for subgraph counting in average-case graphs. Lower bounds from OMv. Several lower bounds for dynamic problems follow from the OMv conjecture. Examples of tight lower bounds include the following. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
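For concreteness, here is a small sketch (my own, following the description above) of the naive algorithm: the fixed Boolean matrix is multiplied with each arriving vector in O(n²) time per round, O(n³) overall.

```python
# Naive online matrix-vector multiplication over the Boolean semiring:
# the matrix M is fixed, and each arriving vector is answered before the
# next one is revealed.

def omv_naive(M, vectors):
    """M: n x n list of 0/1 lists; vectors: iterable of n-dimensional 0/1 lists.
    Yields the Boolean product M v_i at each round."""
    n = len(M)
    for v in vectors:
        yield [int(any(M[row][k] and v[k] for k in range(n))) for row in range(n)]

M = [[1, 0, 1],
     [0, 1, 0],
     [1, 1, 0]]
rounds = [[1, 0, 0], [0, 1, 1], [1, 1, 1]]   # vectors arriving one per round
for i, result in enumerate(omv_naive(M, rounds), start=1):
    print(f"round {i}: M v = {result}")
```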
[ { "math_id": 0, "text": "O(n^{3-\\varepsilon})" }, { "math_id": 1, "text": "\\varepsilon>0" }, { "math_id": 2, "text": "n\\times n" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "v_i" }, { "math_id": 7, "text": "Mv_i" }, { "math_id": 8, "text": "i+1" }, { "math_id": 9, "text": "2/3" }, { "math_id": 10, "text": "u_i" }, { "math_id": 11, "text": "u_i M v_i" }, { "math_id": 12, "text": "O(n^3)" }, { "math_id": 13, "text": "O(n^2)" }, { "math_id": 14, "text": "O(n^3/\\log^2 n)" }, { "math_id": 15, "text": "k" }, { "math_id": 16, "text": "m\\leq n^2" }, { "math_id": 17, "text": "\\widetilde{\\Omega}(m)" }, { "math_id": 18, "text": "\\widetilde{\\Omega}(n^2)" }, { "math_id": 19, "text": "\\widetilde{\\Omega}(n^3)" } ]
https://en.wikipedia.org/wiki?curid=76773020
7677352
Scleronomous
Mechanical system whose constraints are independent of time A mechanical system is scleronomous if the equations of constraints do not contain the time as an explicit variable and the equation of constraints can be described by generalized coordinates. Such constraints are called scleronomic constraints. The opposite of scleronomous is rheonomous. Application. In 3-D space, a particle with mass formula_0, velocity formula_1 has kinetic energy formula_2 formula_3 Velocity is the derivative of position formula_4 with respect to time formula_5. Using the chain rule for several variables: formula_6 where formula_7 are generalized coordinates. Therefore, formula_8 Rearranging the terms carefully, formula_9 formula_10 formula_11 formula_12 where formula_13, formula_14, formula_15 are respectively homogeneous functions of degree 0, 1, and 2 in the generalized velocities. If this system is scleronomous, then the position does not depend explicitly on time: formula_16 Therefore, only the term formula_15 does not vanish: formula_17 The kinetic energy is a homogeneous function of degree 2 in the generalized velocities. Example: pendulum. As shown at right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string’s length is a constant. Therefore, this system is scleronomous; it obeys the scleronomic constraint formula_18 where formula_19 is the position of the weight and formula_20 is the length of the string. Take a more complicated example. Refer to the next figure at right, and assume the top end of the string is attached to a pivot point undergoing a simple harmonic motion formula_21 where formula_22 is the amplitude, formula_23 is the angular frequency, and formula_24 is time. Although the top end of the string is not fixed, the length of this inextensible string is still a constant. The distance between the top end and the weight must stay the same. Therefore, this system is rheonomous, as it obeys a constraint that depends explicitly on time: formula_25
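A small symbolic check of the statement above, under the assumption (mine, for illustration) that the fixed-pivot pendulum is parametrized by its angle as the single generalized coordinate: the kinetic energy then contains only the degree-2 term and is homogeneous of degree 2 in the generalized velocity.

```python
import sympy as sp

# Fixed-pivot pendulum with generalized coordinate theta(t); the Cartesian
# position depends on theta only, not explicitly on t.
t, m, L = sp.symbols('t m L', positive=True)
theta = sp.Function('theta')(t)

x = L * sp.sin(theta)
y = -L * sp.cos(theta)
v2 = sp.diff(x, t)**2 + sp.diff(y, t)**2
T = sp.simplify(sp.Rational(1, 2) * m * v2)
print(T)   # expected: m*L**2*thetadot**2/2, i.e. only the T_2 term survives

# Homogeneity check: scaling the generalized velocity by s scales T by s**2.
s = sp.symbols('s')
thetadot = sp.Derivative(theta, t)
print(sp.simplify(T.subs(thetadot, s * thetadot) - s**2 * T))   # expected: 0
```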
[ { "math_id": 0, "text": "m\\,\\!" }, { "math_id": 1, "text": "\\mathbf{v}\\,\\!" }, { "math_id": 2, "text": "T\\,\\!" }, { "math_id": 3, "text": "T =\\frac{1}{2}m v^2 \\,\\!." }, { "math_id": 4, "text": " r\\,\\!" }, { "math_id": 5, "text": " t\\,\\!" }, { "math_id": 6, "text": "\\mathbf{v}=\\frac{d\\mathbf{r}}{dt}=\\sum_i\\ \\frac{\\partial \\mathbf{r}}{\\partial q_i}\\dot{q}_i+\\frac{\\partial \\mathbf{r}}{\\partial t}\\,\\!." }, { "math_id": 7, "text": " q_i\\,\\!" }, { "math_id": 8, "text": "T =\\frac{1}{2}m \\left(\\sum_i\\ \\frac{\\partial \\mathbf{r}}{\\partial q_i}\\dot{q}_i+\\frac{\\partial \\mathbf{r}}{\\partial t}\\right)^2\\,\\!." }, { "math_id": 9, "text": "T =T_0+T_1+T_2\\,\\!:" }, { "math_id": 10, "text": "T_0=\\frac{1}{2}m\\left(\\frac{\\partial \\mathbf{r}}{\\partial t}\\right)^2\\,\\!," }, { "math_id": 11, "text": "T_1=\\sum_i\\ m\\frac{\\partial \\mathbf{r}}{\\partial t}\\cdot \\frac{\\partial \\mathbf{r}}{\\partial q_i}\\dot{q}_i\\,\\!," }, { "math_id": 12, "text": "T_2=\\sum_{i,j}\\ \\frac{1}{2}m\\frac{\\partial \\mathbf{r}}{\\partial q_i}\\cdot \\frac{\\partial \\mathbf{r}}{\\partial q_j}\\dot{q}_i\\dot{q}_j\\,\\!," }, { "math_id": 13, "text": "T_0\\,\\!" }, { "math_id": 14, "text": "T_1\\,\\!" }, { "math_id": 15, "text": "T_2\\,\\!" }, { "math_id": 16, "text": "\\frac{\\partial \\mathbf{r}}{\\partial t}=0\\,\\!." }, { "math_id": 17, "text": "T = T_2\\,\\!." }, { "math_id": 18, "text": " \\sqrt{x^2+y^2} - L=0\\,\\!," }, { "math_id": 19, "text": "(x,y)\\,\\!" }, { "math_id": 20, "text": "L\\,\\!" }, { "math_id": 21, "text": "x_t=x_0\\cos\\omega t\\,\\!," }, { "math_id": 22, "text": "x_0\\,\\!" }, { "math_id": 23, "text": "\\omega\\,\\!" }, { "math_id": 24, "text": "t\\,\\!" }, { "math_id": 25, "text": " \\sqrt{(x - x_0\\cos\\omega t)^2+y^2} - L=0\\,\\!." } ]
https://en.wikipedia.org/wiki?curid=7677352
7677370
Rheonomous
Mechanical system whose constraints are dependent on time A mechanical system is rheonomous if its equations of constraints contain the time as an explicit variable. Such constraints are called rheonomic constraints. The opposite of rheonomous is scleronomous. Example: simple 2D pendulum. As shown at right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string has a constant length. Therefore, this system is scleronomous; it obeys the scleronomic constraint formula_0, where formula_1 is the position of the weight and formula_2 the length of the string. The situation changes if the pivot point is moving, e.g. undergoing a simple harmonic motion formula_3, where formula_4 is the amplitude, formula_5 the angular frequency, and formula_6 time. Although the top end of the string is not fixed, the length of this inextensible string is still a constant. The distance between the top end and the weight must stay the same. Therefore, this system is rheonomous; it obeys the rheonomic constraint formula_7. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\sqrt{x^2+y^2} - L=0\\,\\!" }, { "math_id": 1, "text": "(x,\\ y)\\,\\!" }, { "math_id": 2, "text": "L\\,\\!" }, { "math_id": 3, "text": "x_t=x_0\\cos\\omega t\\,\\!" }, { "math_id": 4, "text": "x_0\\,\\!" }, { "math_id": 5, "text": "\\omega\\,\\!" }, { "math_id": 6, "text": "t\\,\\!" }, { "math_id": 7, "text": " \\sqrt{(x - x_0\\cos\\omega t)^2+y^2} - L=0\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=7677370
76786946
Frei-Chen operator
Edge detection algorithm The Frei-Chen operator, sometimes called the Frei and Chen operator, is used in image processing for edge detection. It was proposed by Werner Frei and Chung-Ching Chen, researchers at USC's Image Processing Institute, in 1977. The idea is to use a set of orthogonal basis vectors related to distinctive image features, which enable the algorithm to extract boundary elements effectively. Formulation. The operator uses nine 3x3 kernels which are convolved with the original image to calculate the gradient. We define the nine kernels formula_0 as: formula_1 formula_2 formula_3 formula_4 formula_5 Let formula_13 be the image sub-area, formula_14 be the angle (in formula_15 space), and formula_16 be the number of orthogonal edge basis vectors formula_17 spanning the edge subspace. formula_18 The larger formula_19, the poorer the fit between B and an element of the edge subspace. The strategy is to classify an image sub-area as containing an edge element only if formula_19 is small, which is done by thresholding. Simple description. The image is convolved with each of the kernels. Thus, 9 results are obtained. Vectors formula_20 are used for edge subspace identification. Hence the numerator in the formula will be formula_21. Similarly, for line subspace identification, the numerator will be formula_22. Using the formula, we compute formula_19; if it is below a certain threshold formula_23 (that is, if formula_19 is small), we say that an edge is detected in the image sub-area. Example comparisons. Here, the Frei-Chen operator, along with three different gradient operators, is used to detect edges in the test image. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
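The following sketch (my own; the sample patch is made up and no threshold from the original paper is assumed) applies the formula above to a single image sub-area: the patch is projected onto the nine kernels and the angle is computed from the share of the projection energy that falls in the edge subspace W1–W4.

```python
import numpy as np

s2 = np.sqrt(2.0)
W = [
    np.array([[1, s2, 1], [0, 0, 0], [-1, -s2, -1]]),   # W1
    np.array([[1, 0, -1], [s2, 0, -s2], [1, 0, -1]]),   # W2
    np.array([[0, -1, s2], [1, 0, -1], [-s2, 1, 0]]),   # W3
    np.array([[s2, -1, 0], [-1, 0, 1], [0, 1, -s2]]),   # W4
    np.array([[0, 1, 0], [-1, 0, -1], [0, 1, 0]]),      # W5
    np.array([[-1, 0, 1], [0, 0, 0], [1, 0, -1]]),      # W6
    np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]]),    # W7
    np.array([[-2, 1, -2], [1, 4, 1], [-2, 1, -2]]),    # W8
    np.ones((3, 3)),                                    # W9
]

def frei_chen_theta(patch):
    """Angle between the patch and the edge subspace spanned by W1..W4."""
    proj = np.array([np.sum(patch * Wk) ** 2 for Wk in W])
    edge = proj[:4].sum()      # edge-subspace projection energy
    total = proj.sum()         # projection energy onto all nine kernels
    return np.arccos(np.sqrt(edge / total))

patch = np.array([[10, 10, 10],    # patch containing a horizontal step edge
                  [10, 10, 10],
                  [90, 90, 90]], dtype=float)
print(f"theta = {frei_chen_theta(patch):.3f} rad")
# A smaller theta means a better fit to the edge subspace; edge detection
# compares theta against a chosen threshold.
```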
[ { "math_id": 0, "text": "{W1,...,W9}" }, { "math_id": 1, "text": "W1 = \\begin{bmatrix} 1 & \\sqrt{2} & 1 \\\\ 0 & 0 & 0 \\\\ -1 & -\\sqrt{2} & -1 \\end{bmatrix}\n\\quad\nW2 = \\begin{bmatrix} 1 & 0 & -1 \\\\ \\sqrt{2} & 0 & -\\sqrt{2} \\\\ 1 & 0 & -1 \\end{bmatrix}\n" }, { "math_id": 2, "text": "W3 = \\begin{bmatrix} 0 & -1 & \\sqrt{2} \\\\ 1 & 0 & -1 \\\\ -\\sqrt{2} & 1 & 0 \\end{bmatrix}\n\\quad\nW4 = \\begin{bmatrix} \\sqrt{2} & -1 & 0 \\\\ -1 & 0 & 1 \\\\ 0 & 1 & -\\sqrt{2} \\end{bmatrix}" }, { "math_id": 3, "text": "W5 = \\begin{bmatrix} 0 & 1 & 0 \\\\ -1 & 0 & -1 \\\\ 0 & 1 & 0 \\end{bmatrix}\n\\quad\nW6 = \\begin{bmatrix} -1 & 0 & 1 \\\\ 0 & 0 & 0 \\\\ 1 & 0 & -1 \\end{bmatrix}" }, { "math_id": 4, "text": "W7 = \\begin{bmatrix} 1 & -2 & 1 \\\\ -2 & 4 & -2 \\\\ 1 & -2 & 1 \\end{bmatrix}\n\\quad\nW8 = \\begin{bmatrix} -2 & 1 & -2 \\\\ 1 & 4 & 1 \\\\ -2 & 1 & -2 \\end{bmatrix}" }, { "math_id": 5, "text": "W9 = \\begin{bmatrix} 1 & 1 & 1 \\\\ 1 & 1 & 1 \\\\ 1 & 1 & 1 \\end{bmatrix}" }, { "math_id": 6, "text": "(W1, W2)" }, { "math_id": 7, "text": "(W3, W4)" }, { "math_id": 8, "text": "(W5,W6)" }, { "math_id": 9, "text": "(W7,W8)" }, { "math_id": 10, "text": "W9" }, { "math_id": 11, "text": "W1,...W4" }, { "math_id": 12, "text": "W5,...W8" }, { "math_id": 13, "text": "B" }, { "math_id": 14, "text": "\\theta" }, { "math_id": 15, "text": "n^2" }, { "math_id": 16, "text": "e\n" }, { "math_id": 17, "text": "W_1,...W_e\n" }, { "math_id": 18, "text": "\\theta = \\arccos \\left [ \\textstyle \\sum_{i=1}^e \\displaystyle (B * W_i)^2 / \\textstyle \\sum_{i=1}^9 \\displaystyle (B * W_i)^2 \\right ]^{\\frac{1}{2}}\n\n" }, { "math_id": 19, "text": "\\theta\n" }, { "math_id": 20, "text": "W_1, ... W_4\n" }, { "math_id": 21, "text": "\\textstyle \\sum_{i=1}^4 \\displaystyle (B * W_i)^2\n" }, { "math_id": 22, "text": "\\textstyle \\sum_{i=5}^8 \\displaystyle (B * W_i)^2\n" }, { "math_id": 23, "text": "r\n" } ]
https://en.wikipedia.org/wiki?curid=76786946
7679433
Moving equilibrium theorem
Consider a dynamical system (1)...formula_0 (2)...formula_1 with the state variables formula_2 and formula_3. Assume that formula_2 is "fast" and formula_3 is "slow". Assume that the system (1) gives, for any fixed formula_3, an asymptotically stable solution formula_4. Substituting this for formula_2 in (2) yields (3)...formula_5 Here formula_3 has been replaced by formula_6 to indicate that the solution formula_6 to (3) differs from the solution for formula_3 obtainable from the system (1), (2). The Moving Equilibrium Theorem suggested by Lotka states that the solutions formula_6 obtainable from (3) approximate the solutions formula_3 obtainable from (1), (2) provided the partial system (1) is asymptotically stable in formula_2 for any given formula_3 and heavily damped ("fast"). The theorem has been proved for linear systems comprising real vectors formula_2 and formula_3. It permits reducing high-dimensional dynamical problems to lower dimensions and underlies Alfred Marshall's temporary equilibrium method.
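A numerical illustration on a toy linear fast–slow system (the system and parameter values below are chosen here for illustration and are not taken from the text): when the fast variable is heavily damped, the slow variable of the full system closely tracks the solution of the reduced equation (3).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy system:  fast  x' = (-x + y) / eps   -> quasi-equilibrium xbar(y) = y
#              slow  y' = -x
# Substituting xbar into the slow equation gives the reduced system Y' = -Y.
eps = 1e-3
x0, y0 = 0.0, 1.0
t_eval = np.linspace(0.0, 3.0, 7)

def full(t, state):
    x, y = state
    return [(-x + y) / eps, -x]

sol = solve_ivp(full, (0.0, 3.0), [x0, y0], method="Radau",
                t_eval=t_eval, rtol=1e-8, atol=1e-10)
Y = y0 * np.exp(-t_eval)          # solution of the reduced equation Y' = -Y

for t, y_full, y_red in zip(t_eval, sol.y[1], Y):
    print(f"t = {t:4.1f}   y(full) = {y_full:.5f}   Y(reduced) = {y_red:.5f}")
# For small eps the two columns agree up to an O(eps) error, which is the
# content of the moving equilibrium approximation.
```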
[ { "math_id": 0, "text": "\\dot{x}=f(x,y)" }, { "math_id": 1, "text": "\\qquad \\dot{y}=g(x,y)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "\\bar{x}(y)" }, { "math_id": 5, "text": "\\qquad \\dot{Y}=g(\\bar{x}(Y),Y)=:G(Y)." }, { "math_id": 6, "text": "Y" } ]
https://en.wikipedia.org/wiki?curid=7679433
767959
Anders Johan Lexell
Russian mathematician (1740–1784) Anders Johan Lexell (24 December 1740 – 11 December [O.S. 30 November] 1784) was a Finnish-Swedish astronomer, mathematician, and physicist who spent most of his life in Imperial Russia, where he was known as Andrei Ivanovich Leksel (Андрей Иванович Лексель). Lexell made important discoveries in polygonometry and celestial mechanics; the latter led to a comet named in his honour. La Grande Encyclopédie states that he was the prominent mathematician of his time who contributed to spherical trigonometry with new and interesting solutions, which he took as a basis for his research of comet and planet motion. His name was given to a theorem of spherical triangles. Lexell was one of the most prolific members of the Russian Academy of Sciences at that time, having published 66 papers in 16 years of his work there. A statement attributed to Leonhard Euler expresses high approval of Lexell's works: "Besides Lexell, such a paper could only be written by D'Alambert or me". Daniel Bernoulli also praised his work, writing in a letter to Johann Euler "I like Lexell's works, they are profound and interesting, and the value of them is increased even more because of his modesty, which adorns great men". Lexell was unmarried, and kept up a close friendship with Leonhard Euler and his family. He witnessed Euler's death at his house and succeeded Euler to the chair of the mathematics department at the Russian Academy of Sciences, but died the following year. The asteroid 2004 Lexell is named in his honour, as is the lunar crater Lexell. Life. Early years. Anders Johan Lexell was born in Turku to Johan Lexell, a goldsmith and local administrative officer, and Madeleine-Catherine née Björkegren. At the age of fourteen he enrolled at the Academy of Åbo and in 1760 received his Doctor of Philosophy degree with a dissertation "Aphorismi mathematico-physici" (academic advisor Jakob Gadolin). In 1763 Lexell moved to Uppsala and worked at Uppsala University as a mathematics lecturer. From 1766 he was a professor of mathematics at the Uppsala Nautical School. St. Petersburg. In 1762, Catherine the Great ascended to the Russian throne and started the politics of enlightened absolutism. She was aware of the importance of science and ordered to offer Leonhard Euler to "state his conditions, as soon as he moves to St. Petersburg without delay". Soon after his return to Russia, Euler suggested that the director of the Russian Academy of Science should invite Lexell to study mathematics and its application to astronomy, especially spherical geometry. The invitation by Euler and the preparations that were made at that time to observe the 1769 transit of Venus from eight locations in the vast Russian Empire made Lexell seek the opportunity to become a member of the St. Petersburg scientific community. To be admitted to the Russian Academy of Sciences, Lexell in 1768 wrote a paper on integral calculus called "Methodus integrandi nonnulis aequationum exemplis illustrata". Euler was appointed to evaluate the paper and highly praised it, and Count , director of the Russian Academy of Sciences, invited Lexell to the position of mathematics adjunct, which Lexell accepted. In the same year he received permission from the Swedish king to leave Sweden, and moved to St. Petersburg. His first task was to become familiar with the astronomical instruments that would be used in the observations of the transit of Venus. He participated in observing the 1769 transit at St. 
Petersburg together with Christian Mayer, who was hired by the Academy to work at the observatory while the Russian astronomers went to other locations. Lexell made a large contribution to Lunar theory and especially to determining the parallax of the Sun from the results of observations of the transit of Venus. He earned universal recognition and, in 1771, when the Russian Academy of Sciences affiliated new members, Lexell was admitted as an Astronomy academician; he also became a member of the Academy of Stockholm and the Academy of Uppsala in 1773 and 1774, and a corresponding member of the Paris Royal Academy of Sciences. Foreign trip. In 1775, the Swedish King appointed Lexell to a chair of the mathematics department at the University of Åbo with permission to stay at St. Petersburg for another three years to finish his work there; this permission was later prolonged for two more years. Hence, in 1780, Lexell was supposed to leave St. Petersburg and return to Sweden, which would have been a great loss for the Russian Academy of Sciences. Therefore, Director proposed that Lexell travel to Germany, England, and France and then to return to St. Petersburg via Sweden. Lexell made the trip and, to the Academy's pleasure, got a discharge from the Swedish King and returned to St. Petersburg in 1781, after more than a year of absence, very satisfied with his trip. Sending academicians abroad was quite rare at that time (as opposed to the early years of the Russian Academy of Sciences), so Lexell willingly agreed to make the trip. He was instructed to write his itinerary, which without changes was signed by . The aims were as follows: since Lexell would visit major observatories on his way, he should learn how they were built, note the number and types of scientific instruments used, and if he found something new and interesting he should buy the plans and design drawings. He should also learn everything about cartography and try to get new geographic, hydrographic, military, and mineralogic maps. He should also write letters to the Academy regularly to report interesting news on science, arts, and literature. Lexell departed St. Petersburg in late July 1780 on a sailing ship and via Swinemünde arrived in Berlin, where he stayed for a month and travelled to Potsdam, seeking in vain an audience with King Frederick II. In September he left for Bavaria, visiting Leipzig, Göttingen, and Mannheim. In October he traveled to Straßbourg and then to Paris, where he spent the winter. In March 1781 he moved to London. In August he left London for Belgium, where he visited Flanders and Brabant, then moved to the Netherlands, visited The Hague, Amsterdam, and Saardam, and then returned to Germany in September. He visited Hamburg and then boarded a ship in Kiel to sail to Sweden; he spent three days in Copenhagen on the way. In Sweden he spent time in his native city Åbo, and also visited Stockholm, Uppsala, and Åland. In early December 1781 Lexell returned to St. Petersburg, after having travelled for almost a year and a half. There are 28 letters in the archive of the academy that Lexell wrote during the trip to Johann Euler, while the official reports that Euler wrote to the Director of the academy, , were lost. However, unofficial letters to Johann Euler often contain detailed descriptions of places and people whom Lexell had met, and his impressions. Last years.
Lexell became very attached to Leonhard Euler, who lost his sight in his last years but continued working, with his elder son Johann Euler reading to him. Lexell helped Leonhard Euler greatly, especially in applying mathematics to physics and astronomy. He helped Euler to write calculations and prepare papers. On 18 September 1783, after a lunch with his family, during a conversation with Lexell about the newly discovered Uranus and its orbit, Euler felt sick. He died a few hours later. After Euler's passing, the Academy Director, Princess Dashkova, appointed Lexell as Euler's successor in 1783. Lexell became a corresponding member of the Turin Royal Academy, and the London Board of Longitude put him on the list of scientists receiving its proceedings. Lexell did not enjoy his position for long: he died on 30 November 1784. Contribution to science. Lexell is mainly known for his works in astronomy and celestial mechanics, but he also worked in almost all areas of mathematics: algebra, differential calculus, integral calculus, geometry, analytic geometry, trigonometry, and continuum mechanics. Being a mathematician and working on the main problems of mathematics, he never missed the opportunity to look into specific problems in applied science, allowing for experimental proof of the theory underlying the physical phenomenon. In 16 years of his work at the Russian Academy of Sciences, he published 62 works, and 4 more with coauthors, among whom were Leonhard Euler, Johann Euler, Wolfgang Ludwig Krafft, , and Christian Mayer. Differential equations. When applying for a position at the Russian Academy of Sciences, Lexell submitted a paper called "Method of analysing some differential equations, illustrated with examples", which was highly praised by Leonhard Euler in 1768. Lexell's method is as follows: for a given nonlinear differential equation (e.g. second order) we pick an intermediate integral—a first-order differential equation with undefined coefficients and exponents. After differentiating this intermediate integral we compare it with the original equation and get the equations for the coefficients and exponents of the intermediate integral. After we express the undetermined coefficients via the known coefficients we substitute them in the intermediate integral and get two particular solutions of the original equation. Subtracting one particular solution from the other we get rid of the differentials and get a general solution, which we analyse at various values of the constants. The method of reducing the order of the differential equation was known at that time, but in another form. Lexell's method was significant because it was applicable to a broad range of linear differential equations with constant coefficients that were important for physics applications. In the same year, Lexell published another article, "On integrating the differential equation aⁿdⁿy + baⁿ⁻¹dᵐ⁻¹ydx + caⁿ⁻²dᵐ⁻²ydx² + ... + rydxⁿ = Xdxⁿ", presenting a general, highly algorithmic method of solving higher order linear differential equations with constant coefficients. Lexell also looked for criteria of integrability of differential equations. He tried to find criteria both for whole differential equations and for separate differentials. In 1770 he derived a criterion for integrating a differential function, proved it for any number of terms, and found the integrability criteria for formula_0, formula_1, formula_2.
His results agreed with those of Leonhard Euler but were more general and were derived without the means of the calculus of variations. At Euler's request, in 1772 Lexell communicated these results to Lagrange and Lambert. Concurrently with Euler, Lexell worked on expanding the integrating factor method to higher order differential equations. He developed the method of integrating differential equations with two or three variables by means of the integrating factor. He stated that his method could be expanded for the case of four variables: "The formulas will be more complicated, while the problems leading to such equations are rare in analysis". Also of interest is the integration of differential equations in Lexell's paper "On reducing integral formulas to rectification of ellipses and hyperbolae", which discusses elliptic integrals and their classification, and in his paper "Integrating one differential formula with logarithms and circular functions", which was reprinted in the transactions of the Swedish Academy of Sciences. He also integrated a few complicated differential equations in his papers on continuum mechanics, including a fourth-order partial differential equation in a paper about coiling a flexible plate into a circular ring. There is an unpublished Lexell paper in the archive of the Russian Academy of Sciences with the title "Methods of integration of some differential equations", in which a complete solution of the equation formula_3, now known as the Lagrange–d'Alembert equation, is presented. Polygonometry. Polygonometry was a significant part of Lexell's work. He used the trigonometric approach, building on the advances in trigonometry made mainly by Euler, and presented a general method of solving simple polygons in two articles "On solving rectilinear polygons". Lexell discussed two separate groups of problems: in the first, the polygon is defined by its sides and angles; in the second, by its diagonals and the angles between diagonals and sides. For the problems of the first group Lexell derived two general formulas giving formula_4 equations that allow solving a polygon with formula_4 sides. Using these theorems he derived explicit formulas for triangles and tetragons and also gave formulas for pentagons, hexagons, and heptagons. He also presented a classification of problems for tetragons, pentagons, and hexagons. For the second group of problems, Lexell showed that their solutions can be reduced to a few general rules and presented a classification of these problems, solving the corresponding combinatorial problems. In the second article he applied his general method to specific tetragons and showed how to apply his method to a polygon with any number of sides, taking a pentagon as an example. The successor of Lexell's trigonometric approach (as opposed to a coordinate approach) was the Swiss mathematician L'Huilier. Both L'Huilier and Lexell emphasized the importance of polygonometry for theoretical and practical applications. Celestial mechanics and astronomy. Lexell's first work at the Russian Academy of Sciences was to analyse data collected from the observation of the 1769 transit of Venus. He published four papers in "Novi Commentarii Academia Petropolitanae" and ended his work with a monograph on determining the parallax of the Sun, published in 1772. Lexell aided Euler in finishing his Lunar theory, and was credited as a co-author in Euler's 1772 "Theoria motuum Lunae".
After that, Lexell spent most of his effort on comet astronomy (though his first paper on calculating the orbit of a comet is dated 1770). In the next ten years he calculated the orbits of all the newly discovered comets, among them the comet which Charles Messier discovered in 1770. Lexell calculated its orbit, showed that the comet had had a much larger perihelion before the encounter with Jupiter in 1767, and predicted that after encountering Jupiter again in 1779 it would be altogether expelled from the inner Solar System. This comet was later named Lexell's Comet. Lexell was also the first to calculate the orbit of Uranus and to actually prove that it was a planet rather than a comet. He made preliminary calculations while travelling in Europe in 1781 based on Herschel's and Maskelyne's observations. Having returned to Russia, he estimated the orbit more precisely based on new observations, but due to the long orbital period there was still not enough data to prove that the orbit was not parabolic. Lexell then found the record of a star observed in 1759 by Christian Mayer in Pisces that was neither in the Flamsteed catalogues nor in the sky by the time Bode sought it. Lexell presumed that it was an earlier sighting of the same astronomical object and using this data he calculated the exact orbit, which proved to be elliptical, and proved that the new object was actually a planet. In addition to calculating the parameters of the orbit Lexell also estimated the planet's size more precisely than his contemporaries, using Mars, which was in the vicinity of the new planet at that time. Lexell also noticed that the orbit of Uranus was being perturbed. He then stated that, based on his data on various comets, the size of the Solar System could be 100 AU or even more, and that there could be other planets that perturb the orbit of Uranus (although the position of the eventual Neptune was not calculated until much later by Urbain Le Verrier). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "dx\\int{Vdx}" }, { "math_id": 1, "text": "dx\\int{dx\\int{Vdx}}" }, { "math_id": 2, "text": "dx\\int{dx\\int{dx\\int{Vdx}}}" }, { "math_id": 3, "text": "x=y\\phi(x')+\\psi(x')" }, { "math_id": 4, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=767959
7679762
Normal coordinates
Special coordinate system in differential geometry In differential geometry, normal coordinates at a point "p" in a differentiable manifold equipped with a symmetric affine connection are a local coordinate system in a neighborhood of "p" obtained by applying the exponential map to the tangent space at "p". In a normal coordinate system, the Christoffel symbols of the connection vanish at the point "p", thus often simplifying local calculations. In normal coordinates associated to the Levi-Civita connection of a Riemannian manifold, one can additionally arrange that the metric tensor is the Kronecker delta at the point "p", and that the first partial derivatives of the metric at "p" vanish. A basic result of differential geometry states that normal coordinates at a point always exist on a manifold with a symmetric affine connection. In such coordinates the covariant derivative reduces to a partial derivative (at "p" only), and the geodesics through "p" are locally linear functions of "t" (the affine parameter). This idea was implemented in a fundamental way by Albert Einstein in the general theory of relativity: the equivalence principle uses normal coordinates via inertial frames. Normal coordinates always exist for the Levi-Civita connection of a Riemannian or pseudo-Riemannian manifold. By contrast, in general there is no way to define normal coordinates for Finsler manifolds in a way that the exponential map is twice-differentiable. Geodesic normal coordinates. Geodesic normal coordinates are local coordinates on a manifold with an affine connection defined by means of the exponential map formula_0 with formula_1 an open neighborhood of 0 in formula_2, and an isomorphism formula_3 given by any basis of the tangent space at the fixed basepoint formula_4. If the additional structure of a Riemannian metric is imposed, then the basis defined by "E" may be required in addition to be orthonormal, and the resulting coordinate system is then known as a Riemannian normal coordinate system. Normal coordinates exist on a normal neighborhood of a point "p" in "M". A normal neighborhood "U" is an open subset of "M" such that there is a proper neighborhood "V" of the origin in the tangent space "TpM", and exp"p" acts as a diffeomorphism between "U" and "V". On a normal neighborhood "U" of "p" in "M", the chart is given by: formula_5 The isomorphism "E," and therefore the chart, is in no way unique. A convex normal neighborhood "U" is a normal neighborhood of every "p" in "U". The existence of such open neighborhoods (they form a topological basis) has been established by J.H.C. Whitehead for symmetric affine connections. Properties. The properties of normal coordinates often simplify computations. In the following, assume that formula_6 is a normal neighborhood centered at a point formula_7 in formula_8 and formula_9 are normal coordinates on formula_6. Explicit formulae. In the neighbourhood of any point formula_23 equipped with a locally orthonormal coordinate system in which formula_24 and the Riemann tensor at formula_7 takes the value formula_25 we can adjust the coordinates formula_26 so that the components of the metric tensor away from formula_7 become formula_27 The corresponding Levi-Civita connection Christoffel symbols are formula_28 Similarly we can construct local coframes in which formula_29 and the spin-connection coefficients take the values formula_30 Polar coordinates.
On a Riemannian manifold, a normal coordinate system at "p" facilitates the introduction of a system of spherical coordinates, known as polar coordinates. These are the coordinates on "M" obtained by introducing the standard spherical coordinate system on the Euclidean space "T""p""M". That is, one introduces on "T""p""M" the standard spherical coordinate system ("r",φ) where "r" ≥ 0 is the radial parameter and φ = (φ1...,φ"n"−1) is a parameterization of the ("n"−1)-sphere. Composition of ("r",φ) with the inverse of the exponential map at "p" is a polar coordinate system. Polar coordinates provide a number of fundamental tools in Riemannian geometry. The radial coordinate is the most significant: geometrically it represents the geodesic distance to "p" of nearby points. Gauss's lemma asserts that the gradient of "r" is simply the partial derivative formula_31. That is, formula_32 for any smooth function "ƒ". As a result, the metric in polar coordinates assumes a block diagonal form formula_33
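As a consistency check of the explicit formulae quoted above, the following sketch (a two-dimensional toy case set up here, with a single curvature component R_1212 = K at the origin) builds the quadratic normal-coordinate metric and verifies symbolically that its Christoffel symbols match the stated first-order expression.

```python
import sympy as sp

x1, x2, K = sp.symbols('x1 x2 K')
x = [x1, x2]
n = 2

# Riemann tensor at the origin with its usual symmetries (2D: one component).
R = [[[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)] for _ in range(n)]
R[0][1][0][1] = K;  R[1][0][1][0] = K
R[0][1][1][0] = -K; R[1][0][0][1] = -K

# Metric in normal coordinates, truncated at quadratic order:
# g_{mu nu} = delta_{mu nu} - (1/3) R_{mu sigma nu tau} x^sigma x^tau.
g = sp.eye(n) + sp.Matrix(n, n, lambda mu, nu: -sp.Rational(1, 3) *
        sum(R[mu][s][nu][t] * x[s] * x[t] for s in range(n) for t in range(n)))
ginv = g.inv()

def christoffel(l, m, v):
    """Exact Christoffel symbols of the truncated metric."""
    return sp.Rational(1, 2) * sum(
        ginv[l, r] * (sp.diff(g[r, v], x[m]) + sp.diff(g[r, m], x[v])
                      - sp.diff(g[m, v], x[r]))
        for r in range(n))

all_match = True
for l in range(n):
    for m in range(n):
        for v in range(n):
            exact = christoffel(l, m, v)
            # Linear (first-order) part of the exact symbol around the origin.
            linear = sum(sp.diff(exact, xi).subs({x1: 0, x2: 0}) * xi for xi in x)
            claimed = -sp.Rational(1, 3) * sum(
                (R[l][v][m][t] + R[l][m][v][t]) * x[t] for t in range(n))
            all_match = all_match and sp.simplify(linear - claimed) == 0
print("first-order Christoffel formula verified:", all_match)
```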
[ { "math_id": 0, "text": "\\exp_p : T_{p}M \\supset V \\rightarrow M" }, { "math_id": 1, "text": " V " }, { "math_id": 2, "text": " T_{p}M " }, { "math_id": 3, "text": "E: \\mathbb{R}^n \\rightarrow T_{p}M" }, { "math_id": 4, "text": "p\\in M" }, { "math_id": 5, "text": "\\varphi := E^{-1} \\circ \\exp_p^{-1}: U \\rightarrow \\mathbb{R}^n" }, { "math_id": 6, "text": "U" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "x^i" }, { "math_id": 10, "text": "V" }, { "math_id": 11, "text": "T_p M" }, { "math_id": 12, "text": "V^i" }, { "math_id": 13, "text": "\\gamma_V" }, { "math_id": 14, "text": "\\gamma_V(0) = p" }, { "math_id": 15, "text": "\\gamma_V'(0) = V" }, { "math_id": 16, "text": "\\gamma_V(t) = (tV^1, ... , tV^n)" }, { "math_id": 17, "text": "(0, ..., 0)" }, { "math_id": 18, "text": "g_{ij}" }, { "math_id": 19, "text": "\\delta_{ij}" }, { "math_id": 20, "text": "g_{ij}(p)=\\delta_{ij}" }, { "math_id": 21, "text": " \\Gamma_{ij}^k(p)=0 " }, { "math_id": 22, "text": "\\frac{\\partial g_{ij}}{\\partial x^k}(p) = 0,\\,\\forall i,j,k" }, { "math_id": 23, "text": "p=(0,\\ldots 0)" }, { "math_id": 24, "text": "g_{\\mu\\nu}(0)= \\delta_{\\mu\\nu}" }, { "math_id": 25, "text": " R_{\\mu\\sigma \\nu\\tau}(0) " }, { "math_id": 26, "text": "x^\\mu " }, { "math_id": 27, "text": "g_{\\mu\\nu}(x)= \\delta_{\\mu\\nu} - \\tfrac{1}{3} R_{\\mu\\sigma \\nu\\tau}(0) x^\\sigma x^\\tau + O(|x|^3)." }, { "math_id": 28, "text": "{\\Gamma^{\\lambda}}_{\\mu\\nu}(x) = -\\tfrac{1}{3} \\bigl[ R_{\\lambda\\nu\\mu\\tau}(0)+R_{\\lambda\\mu\\nu\\tau}(0) \\bigr] x^\\tau+ O(|x|^2)." }, { "math_id": 29, "text": "e^{*a}_\\mu(x)= \\delta_{a \\mu} - \\tfrac{1}{6} R_{a \\sigma \\mu\\tau}(0) x^\\sigma x^\\tau +O(x^2)," }, { "math_id": 30, "text": "{\\omega^a}_{b\\mu}(x)= - \\tfrac{1}{2} {R^a}_{b\\mu\\tau}(0)x^\\tau+O(|x|^2)." }, { "math_id": 31, "text": "\\partial/\\partial r" }, { "math_id": 32, "text": "\\langle df, dr\\rangle = \\frac{\\partial f}{\\partial r}" }, { "math_id": 33, "text": "g = \\begin{bmatrix}\n1&0&\\cdots\\ 0\\\\\n0&&\\\\\n\\vdots &&g_{\\phi\\phi}(r,\\phi)\\\\\n0&&\n\\end{bmatrix}." } ]
https://en.wikipedia.org/wiki?curid=7679762
7679889
Coordinate-induced basis
In mathematics, a coordinate-induced basis is a basis for the tangent space or cotangent space of a manifold that is induced by a certain coordinate system. Given the coordinate system formula_0, the coordinate-induced basis formula_1 of the tangent space is given by formula_2 and the dual basis formula_3 of the cotangent space is formula_4
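A small worked example (mine, not from the article) for polar coordinates on the plane: the columns of the Jacobian hold the Cartesian components of the coordinate-induced tangent basis, the rows of its inverse hold the dual one-forms, and the basis vectors act on functions via the chain rule.

```python
import sympy as sp

r, phi, X, Y = sp.symbols('r phi X Y')
cart = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])   # (x, y) in terms of (r, phi)

J = cart.jacobian([r, phi])   # column a: Cartesian components of e_a = d/dx^a
omega = J.inv()               # row a: Cartesian components of omega^a = dx^a
print(sp.simplify(omega * J)) # identity matrix: <omega^a, e_b> = delta^a_b

# e_a acting on f(x, y) = x**2 + y**2 = r**2: e_r(f) = 2r, e_phi(f) = 0.
f = X**2 + Y**2
grad_f = sp.Matrix([sp.diff(f, X), sp.diff(f, Y)]).subs({X: cart[0], Y: cart[1]})
print(sp.simplify((J.T * grad_f)[0]))   # e_r(f)   -> 2*r
print(sp.simplify((J.T * grad_f)[1]))   # e_phi(f) -> 0
```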
[ { "math_id": 0, "text": " x^a " }, { "math_id": 1, "text": "e_a" }, { "math_id": 2, "text": " e_a = \\frac{\\partial}{\\partial x^a} " }, { "math_id": 3, "text": " \\omega^a " }, { "math_id": 4, "text": " \\omega^a=dx^a. \\, " } ]
https://en.wikipedia.org/wiki?curid=7679889
76807453
Mass injection flow
Mass injection flow ( Limbach Flow) refers to inviscid, adiabatic flow through a constant area duct where the effect of mass addition is considered. For this model, the duct area remains constant, the flow is assumed to be steady and one-dimensional, and mass is added within the duct. Because the flow is adiabatic, unlike in Rayleigh flow, the stagnation temperature is a constant. Compressibility effects often come into consideration, though this flow model also applies to incompressible flow. For supersonic flow (an upstream Mach number greater than 1), deceleration occurs with mass addition to the duct and the flow can become choked. Conversely, for subsonic flow (an upstream Mach number less than 1), acceleration occurs and the flow can become choked given sufficient mass addition. Therefore, mass addition will cause both supersonic and subsonic Mach numbers to approach Mach 1, resulting in choked flow. Theory. The 1D mass injection flow model begins with a mass-velocity relation derived for mass injection into a steady, adiabatic, frictionless, constant area flow of calorically perfect gas: formula_0 where formula_1 represents a mass flux, formula_2. This expression describes how velocity will change with a change in mass flux (i.e. how a change in mass flux formula_3 drives a change in velocity formula_4). From this relation, two distinct modes of behavior are seen: From the mass-velocity relation, an explicit mass-Mach relation may be derived: formula_8 Derivations. Although Fanno flow and Rayleigh flow are covered in detail in many textbooks, mass injection flow is not. For this reason, derivations of fundamental mass flow properties are given here. In the following derivations, the constant formula_9 is used to denote the specific gas constant (i.e. formula_10). Mass-Velocity Relation. We begin by establishing a relationship between the differential enthalpy, pressure, and density of a calorically perfect gas: From the adiabatic energy equation (formula_11) we find: Substituting the enthalpy-pressure-density relation (1) into the adiabatic energy relation (2) yields Next, we find a relationship between differential density, mass flux (formula_2), and velocity: Substituting the density-mass-velocity relation (4) into the modified energy relation (3) yields Substituting the 1D steady flow momentum conservation equation (see also the Euler equations) of the form formula_12 into (5) yields From the ideal gas law we find, and from the definition of a calorically perfect gas we find, Substituting expressions (7) and (8) into the combined equation (6) yields Using the speed of sound in an ideal gas (formula_13) and the definition of the Mach number (formula_14) yields Mass-Velocity Relation formula_15 This is the mass-velocity relationship for mass injection into a steady, adiabatic, frictionless, constant area flow of calorically perfect gas. Mass-Mach Relation. To find a relationship between differential mass and Mach number, we will find an expression for formula_16 solely in terms of the Mach number, formula_17. We can then substitute this expression into the mass-velocity relation to yield a mass-Mach relation. We begin by relating differential velocity, mach number, and speed of sound: We can now re-express formula_18 in terms of formula_19: Substituting (12) into (11) yields, We can now re-express formula_19 in terms of formula_4: By substituting (14) into (13), we can create an expression completely in terms of formula_4 and formula_20. 
Performing this substitution and solving for formula_16 yields, Finally, expression (15) for formula_16 in terms of formula_20 may be substituted directly into the mass-velocity relation (10): Mass-Mach Relation formula_8 This is the mass-Mach relationship for mass injection into a steady, adiabatic, frictionless, constant area flow of calorically perfect gas. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
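As a numerical illustration of the mass–Mach relation derived above (my own sketch; the value γ = 1.4 is an assumption), one can integrate dm/m from an inlet Mach number to M = 1 to estimate the relative mass addition that chokes the flow, for both subsonic and supersonic inlets.

```python
import math
from scipy.integrate import quad

# dm/m = (1 - M^2) / (M + (gamma - 1) M^3 / 2) dM, integrated from M0 to 1.
gamma = 1.4

def integrand(M):
    return (1.0 - M**2) / (M + 0.5 * (gamma - 1.0) * M**3)

for M0 in (0.3, 0.6, 0.9, 1.5, 2.0):
    val, _ = quad(integrand, M0, 1.0)
    ratio = math.exp(val)             # m_choked / m_inlet
    regime = "subsonic" if M0 < 1 else "supersonic"
    print(f"M0 = {M0:3.1f}:  m*/m0 = {ratio:.4f}  ({regime} inlet)")
# In both regimes the ratio exceeds 1: adding mass drives the Mach number
# toward 1, consistent with the choking behaviour described above.
```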
[ { "math_id": 0, "text": "\\ \\frac{dm}{m}=-\\frac{du}{u}\\left(M^2-1\\right)" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "m=\\dot{m}/A" }, { "math_id": 3, "text": "dm" }, { "math_id": 4, "text": "du" }, { "math_id": 5, "text": "M<1" }, { "math_id": 6, "text": "[M^2 - 1]" }, { "math_id": 7, "text": "M>1" }, { "math_id": 8, "text": "\\frac{dm}{m} = \\frac{1-M^2}{M+\\frac{1}{2}M^3(\\gamma - 1)}dM" }, { "math_id": 9, "text": "R" }, { "math_id": 10, "text": "R=\\bar{R}/M" }, { "math_id": 11, "text": "dh_0=0" }, { "math_id": 12, "text": "dp=-\\rho udu" }, { "math_id": 13, "text": "a^2=\\gamma RT" }, { "math_id": 14, "text": "M = u / a" }, { "math_id": 15, "text": "\\frac{dm}{m} = -\\frac{du}{u}[M^2 -1]" }, { "math_id": 16, "text": "du/u" }, { "math_id": 17, "text": "M" }, { "math_id": 18, "text": "da" }, { "math_id": 19, "text": "dT" }, { "math_id": 20, "text": "dM" } ]
https://en.wikipedia.org/wiki?curid=76807453
76808462
Supersilver ratio
Algebraic integer, approximately 2.20557 In mathematics, the supersilver ratio is a geometrical proportion close to 75/34. Its true value is the real solution of the equation "x"3 = 2"x"2 + 1. The name "supersilver ratio" results from analogy with the silver ratio, the positive solution of the equation "x"2 = 2"x" + 1, and the supergolden ratio. Definition. Two quantities a &gt; b &gt; 0 are in the supersilver ratio-squared if formula_0. The ratio formula_1 is here denoted &amp;NoBreak;&amp;NoBreak; Based on this definition, one has formula_2 It follows that the supersilver ratio is found as the unique real solution of the cubic equation formula_3 The decimal expansion of the root begins as formula_4 (sequence in the OEIS). The minimal polynomial for the reciprocal root is the depressed cubic formula_5 thus the simplest solution with Cardano's formula, formula_6 formula_7 or, using the hyperbolic sine, formula_8 &amp;NoBreak;&amp;NoBreak; is the superstable fixed point of the iteration formula_9 Rewrite the minimal polynomial as formula_10, then the iteration formula_11 results in the continued radical formula_12 Dividing the defining trinomial formula_13 by &amp;NoBreak;&amp;NoBreak; one obtains formula_14, and the conjugate elements of &amp;NoBreak;&amp;NoBreak; are formula_15 with formula_16 and formula_17 Properties. The growth rate of the average value of the n-th term of a random Fibonacci sequence is &amp;NoBreak;&amp;NoBreak;. The supersilver ratio can be expressed in terms of itself as the infinite geometric series formula_18 and formula_19 in comparison to the silver ratio identities formula_20 and formula_21 For every integer formula_22 one has formula_23 Continued fraction pattern of a few low powers formula_24 (5/24) formula_25 (5/11) formula_26 formula_27 (53/24) formula_28 (73/15) formula_29 (118/11) The supersilver ratio is a Pisot number. Because the absolute value formula_30 of the algebraic conjugates is smaller than 1, powers of &amp;NoBreak;&amp;NoBreak; generate almost integers. For example: formula_31 After ten rotation steps the phases of the inward spiraling conjugate pair – initially close to &amp;NoBreak;&amp;NoBreak; – nearly align with the imaginary axis. The minimal polynomial of the supersilver ratio formula_32 has discriminant formula_33 and factors into formula_34 the imaginary quadratic field formula_35 has class number &amp;NoBreak;&amp;NoBreak; Thus, the Hilbert class field of &amp;NoBreak;&amp;NoBreak; can be formed by adjoining &amp;NoBreak;&amp;NoBreak; With argument formula_36 a generator for the ring of integers of &amp;NoBreak;&amp;NoBreak;, the real root  "j"("τ") of the Hilbert class polynomial is given by formula_37 The Weber-Ramanujan class invariant is approximated with error &lt; 3.5 ∙ 10−20 by formula_38 while its true value is the single real root of the polynomial formula_39 The elliptic integral singular value formula_40 for &amp;NoBreak;&amp;NoBreak; has closed form expression formula_41 (which is less than 1/294 the eccentricity of the orbit of Venus). Third-order Pell sequences. These numbers are related to the supersilver ratio as the Pell numbers and Pell-Lucas numbers are to the silver ratio. The fundamental sequence is defined by the third-order recurrence relation formula_42 for "n" &gt; 2, with initial values formula_43 The first few terms are 1, 2, 4, 9, 20, 44, 97, 214, 472, 1041, 2296, 5064... (sequence in the OEIS). The limit ratio between consecutive terms is the supersilver ratio. 
The first 8 indices n for which formula_44 is prime are n = 1, 6, 21, 114, 117, 849, 2418, 6144. The last number has 2111 decimal digits. The sequence can be extended to negative indices using formula_45. The generating function of the sequence is given by formula_46 for formula_47 The third-order Pell numbers are related to sums of binomial coefficients by formula_48. The characteristic equation of the recurrence is formula_49 If the three solutions are real root &NoBreak;&NoBreak; and conjugate pair &NoBreak;&NoBreak; and &NoBreak;&NoBreak;, the supersilver numbers can be computed with the Binet formula formula_50 with real &NoBreak;&NoBreak; and conjugates &NoBreak;&NoBreak; and &NoBreak;&NoBreak; the roots of formula_51 Since formula_52 and formula_53 the number &NoBreak;}&NoBreak; is the nearest integer to formula_54 with "n" ≥ 0 and formula_55 Coefficients formula_56 result in the Binet formula for the related sequence formula_57 The first few terms are 3, 2, 4, 11, 24, 52, 115, 254, 560, 1235, 2724, 6008... (sequence in the OEIS). This third-order Pell-Lucas sequence has the Fermat property: if p is prime, formula_58 The converse does not hold, but the small number of odd pseudoprimes formula_59 makes the sequence special. The 14 odd composite numbers below 10⁸ to pass the test are n = 3², 5², 5³, 315, 99297, 222443, 418625, 9122185, 3257², 11889745, 20909625, 24299681, 64036831, 76917325. The third-order Pell numbers are obtained as integral powers "n" &gt; 3 of a matrix with real eigenvalue &NoBreak;&NoBreak; formula_60 formula_61 The trace of &NoBreak;}&NoBreak; gives the above &NoBreak;&NoBreak; Alternatively, &NoBreak;&NoBreak; can be interpreted as the incidence matrix for a D0L Lindenmayer system on the alphabet &NoBreak;}&NoBreak; with corresponding substitution rule formula_62 and initiator &NoBreak;&NoBreak;. The series of words &NoBreak;&NoBreak; produced by iterating the substitution has the property that the numbers of c's, b's and a's are equal to successive third-order Pell numbers. The lengths of these words are given by formula_63 Associated to this string rewriting process is a compact set composed of self-similar tiles called the Rauzy fractal, which visualizes the combinatorial information contained in a multiple-generation three-letter sequence. Supersilver rectangle. A supersilver rectangle is a rectangle whose side lengths are in a &NoBreak;&NoBreak; ratio. Compared to the silver rectangle, containing a single scaled copy of itself, the supersilver rectangle has one more degree of self-similarity. Given a rectangle of height 1, length &NoBreak;&NoBreak; and diagonal length formula_64 The triangles on the diagonal have altitudes formula_65 each perpendicular foot divides the diagonal in ratio &NoBreak;&NoBreak;. On the right-hand side, cut off a square of side length 1 and mark the intersection with the falling diagonal. The remaining rectangle now has aspect ratio formula_66 (according to formula_67). Divide the original rectangle into four parts by a second, horizontal cut passing through the intersection point. Along the diagonal are two supersilver rectangles. The original rectangle and the scaled copies have diagonal lengths in the ratios formula_68 The areas of the rectangles opposite the diagonal are both equal to formula_69 with aspect ratios formula_70 (below) and formula_71 (above).
If the diagram is further subdivided by perpendicular lines through the feet of the altitudes, the lengths of the diagonal and its seven distinct subsections are in ratios formula_72 formula_73 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\left( \\frac{2a+b}{a} \\right)^{2} = \\frac{a}{b} " }, { "math_id": 1, "text": " \\frac{2a+b}{a} " }, { "math_id": 2, "text": " \\begin{align}\n1&=\\left( \\frac{2a+b}{a} \\right)^{2} \\frac{b}{a}\\\\\n&=\\left( \\frac{2a+b}{a} \\right)^{2} \\left( \\frac{2a+b}{a} - 2 \\right)\\\\\n&\\implies \\varsigma^{2} \\left( \\varsigma - 2 \\right) = 1 \\end{align} " }, { "math_id": 3, "text": "\\varsigma^{3} -2\\varsigma^{2} -1 =0." }, { "math_id": 4, "text": "2.205\\,569\\,430\\,400\\,590..." }, { "math_id": 5, "text": "x^{3} +2x -1," }, { "math_id": 6, "text": " w_{1,2} = \\left( 1 \\pm \\frac{1}{3} \\sqrt{ \\frac{59}{3}} \\right) /2 " }, { "math_id": 7, "text": " 1 /\\varsigma =\\sqrt[3]{w_1} +\\sqrt[3]{w_2} " }, { "math_id": 8, "text": " 1 /\\varsigma =-2 \\sqrt\\frac{2}{3} \\sinh \\left( \\frac{1}{3} \\operatorname{arsinh} \\left( -\\frac{3}{4} \\sqrt\\frac{3}{2} \\right) \\right)." }, { "math_id": 9, "text": " x \\gets (2x^{3}+1) /(3x^{2}+2)." }, { "math_id": 10, "text": "(x^2+1)^2 =1+x" }, { "math_id": 11, "text": " x \\gets \\sqrt{-1 +\\sqrt{1+x}} " }, { "math_id": 12, "text": " 1/\\varsigma =\\sqrt{-1 +\\sqrt{1 +\\sqrt{-1 +\\sqrt{1 +\\cdots}}}} \\;" }, { "math_id": 13, "text": "x^{3} -2x^{2} -1" }, { "math_id": 14, "text": " x^{2} +x /\\varsigma^2 +1 /\\varsigma " }, { "math_id": 15, "text": " x_{1,2} = \\left( -1 \\pm i \\sqrt{8\\varsigma^2 +3} \\right) /2 \\varsigma^2," }, { "math_id": 16, "text": "x_1 +x_2 = 2 -\\varsigma \\;" }, { "math_id": 17, "text": "\\; x_1x_2 =1 /\\varsigma." }, { "math_id": 18, "text": " \\varsigma = 2\\sum_{k=0}^{\\infty} \\varsigma^{-3k}" }, { "math_id": 19, "text": " \\,\\varsigma^2 =-1 +\\sum_{k=0}^{\\infty} (\\varsigma -1)^{-k}," }, { "math_id": 20, "text": " \\sigma = 2\\sum_{k=0}^{\\infty} \\sigma^{-2k}" }, { "math_id": 21, "text": " \\,\\sigma^2 =-1 +2\\sum_{k=0}^{\\infty} (\\sigma -1)^{-k}." }, { "math_id": 22, "text": "n" }, { "math_id": 23, "text": "\\begin{align}\n\\varsigma^{n} &=2\\varsigma^{n-1} +\\varsigma^{n-3}\\\\\n&=4\\varsigma^{n-2} +\\varsigma^{n-3} +2\\varsigma^{n-4}\\\\\n&=\\varsigma^{n-1} +2\\varsigma^{n-2} +\\varsigma^{n-3} +\\varsigma^{n-4}.\\end{align}" }, { "math_id": 24, "text": " \\varsigma^{-2} = [0;4,1,6,2,1,1,1,1,1,1,...] \\approx 0.2056 " }, { "math_id": 25, "text": " \\varsigma^{-1} = [0;2,4,1,6,2,1,1,1,1,1,...] \\approx 0.4534 " }, { "math_id": 26, "text": "\\ \\varsigma^{0} = [1] " }, { "math_id": 27, "text": " \\varsigma^{1} = [2;4,1,6,2,1,1,1,1,1,1,...] \\approx 2.2056 " }, { "math_id": 28, "text": " \\varsigma^{2} = [4;1,6,2,1,1,1,1,1,1,2,...] \\approx 4.8645 " }, { "math_id": 29, "text": " \\varsigma^{3} = [10;1,2,1,2,4,4,2,2,6,2,...] \\approx 10.729 " }, { "math_id": 30, "text": "1 /\\sqrt{\\varsigma}" }, { "math_id": 31, "text": " \\varsigma^{10} =2724.00146856... \\approx 2724 +1/681." }, { "math_id": 32, "text": " m(x) = x^{3}-2x^{2}-1 " }, { "math_id": 33, "text": "\\Delta=-59" }, { "math_id": 34, "text": "(x -21)^{2}(x -19) \\pmod{59};\\;" }, { "math_id": 35, "text": " K = \\mathbb{Q}( \\sqrt{\\Delta}) " }, { "math_id": 36, "text": " \\tau=(1 +\\sqrt{\\Delta})/2\\, " }, { "math_id": 37, "text": "(\\varsigma^{-6} -27\\varsigma^{6} -6)^{3}." }, { "math_id": 38, "text": "\\sqrt{2}\\,\\mathfrak{f}( \\sqrt{ \\Delta} ) = \\sqrt[4]{2}\\,G_{59} \\approx (e^{\\pi \\sqrt{- \\Delta}} + 24)^{1/24}," }, { "math_id": 39, "text": "W_{59}(x) = x^9 -4x^8 +4x^7 -2x^6 +4x^5 -8x^4 +4x^3 -8x^2 +16x -8." 
}, { "math_id": 40, "text": " k_{r} =\\lambda^{*}(r) " }, { "math_id": 41, "text": " \\lambda^{*}(59) =\\sin ( \\arcsin \\left( G_{59}^{-12} \\right) /2) " }, { "math_id": 42, "text": " S_{n} =2S_{n-1} +S_{n-3} " }, { "math_id": 43, "text": " S_{0} =1, S_{1} =2, S_{2} =4." }, { "math_id": 44, "text": "S_{n}" }, { "math_id": 45, "text": " S_{n} =S_{n+3} -2S_{n+2}" }, { "math_id": 46, "text": " \\frac{1}{1 - 2x - x^{3}} = \\sum_{n=0}^{\\infty} S_{n}x^{n} " }, { "math_id": 47, "text": "x <1 /\\varsigma \\;." }, { "math_id": 48, "text": " S_{n} =\\sum_{k =0}^{\\lfloor n /3 \\rfloor}{n -2k \\choose k} \\cdot 2^{n -3k} \\; " }, { "math_id": 49, "text": "x^{3} -2x^{2} -1 =0." }, { "math_id": 50, "text": " S_{n-2} =a \\alpha^{n} +b \\beta^{n} +c \\gamma^{n}," }, { "math_id": 51, "text": "59x^{3} +4x -1 =0." }, { "math_id": 52, "text": " \\left\\vert b \\beta^{n} +c \\gamma^{n} \\right\\vert < 1 /\\sqrt{ \\alpha^{n}} " }, { "math_id": 53, "text": " \\alpha = \\varsigma," }, { "math_id": 54, "text": " a\\,\\varsigma^{n+2}," }, { "math_id": 55, "text": " a =\\varsigma /( 2\\varsigma^{2} +3) =" }, { "math_id": 56, "text": " a =b =c =1 " }, { "math_id": 57, "text": " A_{n} =S_{n} +2S_{n-3}." }, { "math_id": 58, "text": " A_{p} \\equiv A_{1} \\bmod p." }, { "math_id": 59, "text": "\\,n \\mid (A_{n} -2) " }, { "math_id": 60, "text": " Q = \\begin{pmatrix} 2 & 0 & 1 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0 \\end{pmatrix} ," }, { "math_id": 61, "text": " Q^{n} = \\begin{pmatrix} S_{n} & S_{n-2} & S_{n-1} \\\\ S_{n-1} & S_{n-3} & S_{n-2} \\\\ S_{n-2} & S_{n-4} & S_{n-3} \\end{pmatrix} " }, { "math_id": 62, "text": "\\begin{cases}\na \\;\\mapsto \\;aab\\\\\nb \\;\\mapsto \\;c\\\\\nc \\;\\mapsto \\;a\n\\end{cases}" }, { "math_id": 63, "text": "l(w_n) =S_{n-2} +S_{n-3} +S_{n-4}." }, { "math_id": 64, "text": "\\varsigma \\sqrt{\\varsigma -1}." }, { "math_id": 65, "text": "1 /\\sqrt{\\varsigma -1}\\,;" }, { "math_id": 66, "text": "1 +1/ \\varsigma^2:1" }, { "math_id": 67, "text": "\\varsigma =2 +1/ \\varsigma^2" }, { "math_id": 68, "text": "\\varsigma:\\varsigma -1:1." }, { "math_id": 69, "text": "(\\varsigma -1)/ \\varsigma," }, { "math_id": 70, "text": "\\varsigma(\\varsigma -1)" }, { "math_id": 71, "text": "\\varsigma /(\\varsigma -1)" }, { "math_id": 72, "text": "\\varsigma^2 +1:\\varsigma^2:\\varsigma^2 -1:\\varsigma +1:" }, { "math_id": 73, "text": "\\, \\varsigma(\\varsigma -1):\\varsigma:2/(\\varsigma -1):1." }, { "math_id": 74, "text": "x^{3} =2x^{2} +1" }, { "math_id": 75, "text": "x^{2}=2x+1" }, { "math_id": 76, "text": "x^{2}=x+1" }, { "math_id": 77, "text": "x^{3}=x^{2}+1" } ]
https://en.wikipedia.org/wiki?curid=76808462
768124
Tax bracket
Division at which a tax rate changes Tax brackets are the divisions at which tax rates change in a progressive tax system (or an explicitly regressive tax system, though that is rarer). Essentially, tax brackets are the cutoff values for taxable income—income past a certain point is taxed at a higher rate. Example. Imagine that there are three tax brackets: 10%, 20%, and 30%. The 10% rate applies to income from $1 to $10,000; the 20% rate applies to income from $10,001 to $20,000; and the 30% rate applies to all income above $20,000. Under this system, someone earning $10,000 is taxed at 10%, paying a total of $1,000. Someone earning $5,000 pays $500, and so on. Meanwhile, someone who earns $25,000 faces a more complicated calculation. The rate on the first $10,000 is 10%, from $10,001 to $20,000 is 20%, and above that is 30%. Thus, they pay $1,000 for the first $10,000 of income (10%), $2,000 for the second $10,000 of income (20%), and $1,500 for the last $5,000 of income (30%), In total, they pay $4,500, or an 18% average tax rate. In practice the computation is simplified by using point–slope form or slope–intercept form of the linear equation for the tax on a specific bracket, either as tax on the bottom amount of the bracket "plus" the tax on the marginal amount "within" the bracket: formula_0 or the tax on the entire amount ("at" the marginal rate), "minus" the amount that this overstates tax on the bottom end of the bracket. formula_1 See Progressive tax#Computation for details. Tax brackets in Australia. Individual income tax rates (residents). Financial years 2018–19, 2019–20 The above rates do not include the Medicare levy of 2.0%. Tax brackets in Canada. Canada's federal government has the following tax brackets for the 2012 tax year (all in Canadian dollars). The "basic personal amount" of $15,527 effectively means that income up to this amount is not subject to tax, although it is included in the calculation of taxable income. Each province except Québec adds their own tax on top of the federal tax. Québec has a completely separate income tax. Provincial / Territorial Tax Rates for 2012: Tax brackets in India. Income tax slabs applicable for financial year 2015–16 (Assessment Year- 2016–17)is summarized below: Tax brackets in Malaysia. Malaysia has the following income tax brackets based on assessment year. Tax brackets in Malta. Malta has the following tax brackets for income received during 2012 Single Rates: Married Rates: Tax brackets in New Zealand. New Zealand has the following income tax brackets (as of 1 October 2010). All values in New Zealand dollars, with the ACC Earners' levy not included. 45% when the employee does not complete a declaration form (IR330). ACC Earners' Levy for the 2010 tax year is 2.0%, an increase from 1.7% in the 2008 tax year. Tax brackets in Singapore. 2007 &amp; 2008. A personal tax rebate of 20% was granted for 2008, up to a maximum of $2,000. 2013. All figures are in Singapore dollars. Tax brackets in South Africa. The Minister of Finance announced new tax rates for the 2012–2013 tax year. They are as follows : Tax brackets in Switzerland. Personal income tax is progressive in nature. The total rate does not usually exceed 40%. The Swiss Federal Tax Administration website provides a broad outline of the Swiss tax system, and full details and tax tables are available in PDF documents. 
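The bracket arithmetic described above is easy to reproduce in code. The sketch below uses the illustrative 10%/20%/30% schedule with edges at $10,000 and $20,000; the data layout and function name are choices made for this example only.

```python
# A small sketch of marginal-bracket tax arithmetic for the illustrative 10%/20%/30%
# schedule from the example above (bracket edges at $10,000 and $20,000).

BRACKETS = [          # (lower edge of bracket, marginal rate)
    (0, 0.10),
    (10_000, 0.20),
    (20_000, 0.30),
]

def tax_owed(income: float) -> float:
    """Tax each slice of income at the rate of the bracket it falls into."""
    total = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            total += (min(income, upper) - lower) * rate
        else:
            break
    return total

if __name__ == "__main__":
    print(tax_owed(10_000))                    # 1000.0
    print(tax_owed(25_000))                    # 4500.0  (1000 + 2000 + 1500, an 18% average rate)
    # Point-slope shortcut for the top bracket: tax at the bottom of the bracket
    # plus the marginal rate applied to the amount inside the bracket.
    print(3_000 + (25_000 - 20_000) * 0.30)    # 4500.0
```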
The complexity of the system is partly because the Confederation, the 26 Cantons that make up the federation, and about 2 900 communes [municipalities] levy their own taxes based on the Federal Constitution and 26 Cantonal Constitutions. Tax brackets in Taiwan. Income tax rates (Individual). Financial year 2013 Tax brackets in the United States. 2018 tax brackets. As of 1 January 2018, the tax brackets have been updated due to the passage of the Tax Cuts and Jobs Act: In the United States, the dollar amounts of the federal income tax standard deduction and personal exemptions for the taxpayer and dependents are adjusted annually to account for inflation. This results in yearly changes to the personal income tax brackets even when the federal income tax rates remain unchanged. 2011 tax brackets. Two higher tax brackets (36% and 39.6%) were added in 1993, and then taxes in all brackets were lowered in 2001 through 2003 as follows: Internal Revenue Code terminology. Gross salary is the amount your employer pays an employee, plus one's income tax liability. Although the tax itself is included in this figure, it is typically the one used when discussing one's pay. For example, John gets paid $50/hour as an administrative director. His annual gross salary is $50/hour x 2,000 hours/year = $100,000/year. Of this, some is paid to John, and the rest to taxes. W-2 wages are the wages that appear on the employee's W-2 issued by his employer each year in January. A copy of the W-2 is sent to the Internal Revenue Service (IRS). It is the gross salary less any contributions to pre-tax plans. The W-2 form also shows the amount withheld by the employer for federal income tax. W-2 wages = gross salary less (contributions to employer retirement plan) less (contributions to employer health plan) less (contributions to some other employer plans) Total income is the sum of all taxable income, including the W-2 wages. Almost all income is taxable. There are a few exemptions for individuals such as non-taxable interest on government bonds, a portion of the Social Security (SS) income (not the payments to SS, but the payments from SS to the individual), etc. Adjusted gross income (AGI) is Total Income less some specific allowed deductions. Such as; alimony paid (income to the recipient), permitted moving expenses, self-employed retirement program, student loan interest, etc. Itemized deductions are other specific deductions such as; mortgage interest on a home, state income taxes or sales taxes, local property taxes, charitable contributions, state income tax withheld, etc. Standard deduction is a sort of minimum itemized deduction. If all itemized deductions are added up and it is less than the standard deduction, the standard deduction is taken. In 2007 this was $5,350 for those filing individually and $10,700 for married filing jointly. Personal exemption is a tax exemption in which the taxpayer may deduct an amount from their gross income for each dependent they claim. It was $3,400 in 2007. Sample tax calculation. Given the complexity of the United States' income tax code, individuals often find it necessary to consult a tax accountant or professional tax preparer. For example, John, a married 44-year-old who has two children, earned a gross salary of $100,000 in 2007. He contributes the maximum $15,500 per year to his employer's 401(k) retirement plan, pays $1,800 per year for his employer's family health plan, and $500 per year to his employer's Flexfund medical expense plan. 
All of the plans are allowed pre-tax contributions. Gross pay = $100,000 W-2 wages = $100,000 – $15,500 – $1,800 – $500 = $82,200 John's and his wife's other income is $12,000 from John's wife's wages (she also got a W-2 but had no pre-tax contributions), $200 interest from a bank account, and a $150 state tax refund. Total Income = $82,200 + $12,000 + $200 + $150 = $94,550. John's employer reassigned John to a new office and his moving expenses were $8,000, of which $2,000 was not reimbursed by his employer. Adjusted gross income = $94,550 – $2,000 = $92,550. John's itemized deductions were $22,300 (mortgage interest, property taxes, and state income tax withheld). John had four personal exemptions—himself, his wife and two children. His total personal exemptions were 4 x $3,400 = $13,600. Taxable Income = $92,550 – $22,300 – $13,600 = $56,650. The tax on the Taxable Income is found in a Tax Table if the Taxable Income is less than $100,000 and is computed if over $100,000. Both are used. The Tax Tables are in the 2007 1040 Instructions. The Tax Tables list income in $50 increments for all categories of taxpayers, single, married filing jointly, married filing separately, and head of household. For the Taxable Income range of "at least $56,650 but less than $56,700" the tax is $7,718 for a taxpayer who is married filing jointly. The 2007 tax rates schedule for married filing jointly is: The tax is 10% on the first $15,650 = $1,565.00 plus 15% of the amount over $15,650 ($56,650 – $15,650) = $41,000 x 15% = $6,150.00 Total ($1,565.00 + $6,150.00) = $7,715.00 In addition to the Federal income tax, John probably pays state income tax, Social Security tax, and Medicare tax. The Social Security tax in 2007 for John is 6.2% on the first $97,500 of earned income (wages), or a maximum of $6,045. There are no exclusions from earned income for Social Security so John pays the maximum of $6,045. His wife pays $12,000 x 6.2% = $744. Medicare is 1.45% on all earned income with no maximum. John and his wife pays $112,000 x 1.45% = $1,624 for Medicare in 2007. Most states also levy income tax, exceptions being Alaska, Florida, Nevada, South Dakota, Texas, Washington, New Hampshire, Tennessee and Wyoming. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
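The worked 2007 example above can be retraced step by step as plain arithmetic. The script below uses only the figures quoted in the text and encodes just the two bracket thresholds the example needs; it is an illustration of the bookkeeping, not a general tax calculator.

```python
# Retracing the worked 2007 example above (married filing jointly, two children).

gross_salary = 100_000
pretax_401k, health_plan, flexfund = 15_500, 1_800, 500

w2_wages = gross_salary - pretax_401k - health_plan - flexfund    # 82,200
total_income = w2_wages + 12_000 + 200 + 150                      # wife's wages, interest, refund -> 94,550
agi = total_income - 2_000                                        # unreimbursed moving expenses -> 92,550
taxable_income = agi - 22_300 - 4 * 3_400                         # itemized deductions, 4 exemptions -> 56,650

# 2007 married-filing-jointly schedule, first two brackets: 10% up to $15,650, 15% above.
federal_tax = 0.10 * 15_650 + 0.15 * (taxable_income - 15_650)    # 1,565 + 6,150 = 7,715

social_security_john = 0.062 * min(gross_salary, 97_500)          # capped wage base -> 6,045
social_security_wife = 0.062 * 12_000                             # 744
medicare = 0.0145 * (gross_salary + 12_000)                       # no cap -> 1,624

print(w2_wages, total_income, agi, taxable_income)
print(round(federal_tax), round(social_security_john),
      round(social_security_wife), round(medicare))
```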
[ { "math_id": 0, "text": "\\$3\\,000 + (\\$25\\,000 - \\$20\\,000) \\times 30\\% = \\$3\\,000 + \\$1\\,500 = \\$4\\,500," }, { "math_id": 1, "text": "\\$25\\,000 \\times 30\\% - \\$3\\,000 = \\$7\\,500 - \\$3\\,000 = \\$4\\,500." } ]
https://en.wikipedia.org/wiki?curid=768124
768370
P50 (pressure)
Chemistry term for pressure In biochemistry, "p"50 represents the partial pressure of a gas required to achieve 50% saturation of a particular protein's binding sites. Values of "p"50 are negatively correlated with substrate affinity; lower values correspond to higher affinity and "vice versa". The term is analogous to the Michaelis–Menten constant ("KM"), which identifies the concentration of substrate required for an enzyme to achieve 50% of its maximum reaction velocity. The concept of "p"50 is derived from considering the fractional saturation of a protein by a gas. Imagine myoglobin, a protein which is able to bind a single molecule of oxygen, as per the reversible reaction Mb + O2 ⇌ Mb·O2, whose equilibrium constant "K" (which is also a dissociation constant, since it describes a reversible association-dissociation event) is equal to the product of the concentrations (at equilibrium) of free myoglobin and free oxygen, divided by the concentration of the myoglobin-oxygen complex: "K" = [Mb][O2] / [Mb·O2]. The fractional saturation "Y""O"2 of the myoglobin is the proportion of the total myoglobin concentration that is made up of oxygen-bound myoglobin, which can be rearranged as the concentration of free oxygen over the sum of that concentration and the dissociation constant "K". Since diatomic oxygen is a gas, its concentration in solution can be thought of as a partial pressure. formula_0 From defining the "p"50 as the partial pressure at which the fractional saturation is 50%, we can deduce that it is in fact equal to the dissociation constant "K". formula_1 For example, myoglobin's "p"50 for O2 is 130 pascals, while the "p"50 for adult hemoglobin is 3.5 kPa. Thus, when the O2 partial pressure is low, hemoglobin-bound O2 is more readily transferred to myoglobin. Myoglobin, found in high concentrations in muscle tissue, can then transfer the oxygen to muscle fibers, where it is used in the generation of energy to fuel muscle contraction. Another example is that of human fetal hemoglobin, which has a higher affinity (lower "p"50) than adult hemoglobin, and therefore allows uptake of oxygen across the placental diffusion barrier. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
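As a rough illustration of how "p"50 governs affinity, the sketch below evaluates the single-site saturation curve Y = pO2 / ("p"50 + pO2) for the two "p"50 values quoted above. Treating adult hemoglobin with the same hyperbolic form is a deliberate simplification (real hemoglobin binds oxygen cooperatively), so the numbers are only indicative.

```python
# Sketch of the single-site saturation curve Y = pO2 / (p50 + pO2), with p50 = K.
# Hemoglobin is treated here with the same hyperbolic form purely for illustration.

def fractional_saturation(p_o2: float, p50: float) -> float:
    """Fraction of binding sites occupied at oxygen partial pressure p_o2 (same units as p50)."""
    return p_o2 / (p50 + p_o2)

MYOGLOBIN_P50 = 130.0     # pascals, as quoted above
HEMOGLOBIN_P50 = 3500.0   # pascals (3.5 kPa), as quoted above

if __name__ == "__main__":
    for p_o2 in (130.0, 1000.0, 3500.0, 13000.0):
        y_mb = fractional_saturation(p_o2, MYOGLOBIN_P50)
        y_hb = fractional_saturation(p_o2, HEMOGLOBIN_P50)
        print(f"pO2 = {p_o2:7.0f} Pa   myoglobin Y = {y_mb:.2f}   hemoglobin Y = {y_hb:.2f}")
    # At every pressure the protein with the lower p50 (myoglobin) is more saturated,
    # which is the sense in which a lower p50 means a higher affinity.
```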
[ { "math_id": 0, "text": "Y_{O_2}=\\rm\\frac{[Mb\\cdot O_2]}{[Mb]+[Mb\\cdot O_2]}\n\\Rightarrow\\rm\\frac{[O_2]}{\\it{K}\\,\\rm{+\\,[O_2]}}\n\\Rightarrow\\rm\\frac{\\it{p\\rm{O_2}}}{\\it{K}\\,\\rm{+\\,\\it{p\\rm{O_2}}}}" }, { "math_id": 1, "text": "\\frac{p_{50}}{K+p_{50}}=0.5\n\\Rightarrow p_{50}=K\n" } ]
https://en.wikipedia.org/wiki?curid=768370
76848
Periodic function
Function that repeats its values at regular intervals or periods A periodic function also called a periodic waveform (or simply periodic wave), is a function that repeats its values at regular intervals or periods. The repeatable part of the function or waveform is called a cycle. For example, the trigonometric functions, which repeat at intervals of formula_0 radians, are periodic functions. Periodic functions are used throughout science to describe oscillations, waves, and other phenomena that exhibit periodicity. Any function that is not periodic is called aperiodic. Definition. A function f is said to be periodic if, for some nonzero constant P, it is the case that formula_1 for all values of x in the domain. A nonzero constant P for which this is the case is called a period of the function. If there exists a least positive constant P with this property, it is called the fundamental period (also primitive period, basic period, or prime period.) Often, "the" period of a function is used to mean its fundamental period. A function with period P will repeat on intervals of length P, and these intervals are sometimes also referred to as periods of the function. Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry, i.e. a function f is periodic with period P if the graph of f is invariant under translation in the x-direction by a distance of P. This definition of periodicity can be extended to other geometric shapes and patterns, as well as be generalized to higher dimensions, such as periodic tessellations of the plane. A sequence can also be viewed as a function defined on the natural numbers, and for a periodic sequence these notions are defined accordingly. Examples. Real number examples. The sine function is periodic with period formula_0, since formula_2 for all values of formula_3. This function repeats on intervals of length formula_0 (see the graph to the right). Everyday examples are seen when the variable is "time"; for instance the hands of a clock or the phases of the moon show periodic behaviour. Periodic motion is motion in which the position(s) of the system are expressible as periodic functions, all with the "same" period. For a function on the real numbers or on the integers, that means that the entire graph can be formed from copies of one particular portion, repeated at regular intervals. A simple example of a periodic function is the function formula_4 that gives the "fractional part" of its argument. Its period is 1. In particular, formula_5 The graph of the function formula_4 is the sawtooth wave. The trigonometric functions sine and cosine are common periodic functions, with period formula_0 (see the figure on the right). The subject of Fourier series investigates the idea that an 'arbitrary' periodic function is a sum of trigonometric functions with matching periods. According to the definition above, some exotic functions, for example the Dirichlet function, are also periodic; in the case of Dirichlet function, any nonzero rational number is a period. Complex number examples. Using complex variables we have the common period function: formula_7 Since the cosine and sine functions are both periodic with period formula_0, the complex exponential is made up of cosine and sine waves. This means that Euler's formula (above) has the property such that if formula_8 is the period of the function, then formula_9 Double-periodic functions. 
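The defining property f(x + P) = f(x) can be spot-checked numerically for the two examples above (sine with period 2π and the fractional-part sawtooth with period 1). The sketch below is a sampling test on a grid of points, not a proof of periodicity.

```python
# Quick numerical check of f(x + P) = f(x) for the examples discussed above.

import math

def is_periodic(f, period: float, samples: int = 1000, tol: float = 1e-9) -> bool:
    """Spot-check f(x + period) == f(x) on a grid of points (a sanity test, not a proof)."""
    return all(abs(f(x + period) - f(x)) < tol
               for x in (i * 0.01 for i in range(samples)))

def fractional_part(x: float) -> float:
    """The sawtooth example: the fractional part of x, periodic with period 1."""
    return x - math.floor(x)

if __name__ == "__main__":
    print(is_periodic(math.sin, 2 * math.pi))   # True
    print(is_periodic(fractional_part, 1.0))    # True
    print(is_periodic(math.sin, 3.0))           # False: 3 is not a period of sine
```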
A function whose domain is the complex numbers can have two incommensurate periods without being constant. The elliptic functions are such functions. ("Incommensurate" in this context means not real multiples of each other.) Properties. Periodic functions can take on the same value many times. More specifically, if a function formula_4 is periodic with period formula_10, then for all formula_3 in the domain of formula_4 and all positive integers formula_11, formula_12 If formula_13 is a function with period formula_10, then formula_14, where formula_15 is a non-zero real number such that formula_16 is within the domain of formula_4, is periodic with period formula_17. For example, formula_6 has period formula_18 and, therefore, formula_19 will have period formula_20. Some periodic functions can be described by Fourier series. For instance, for "L"2 functions, Carleson's theorem states that they have a pointwise (Lebesgue) almost everywhere convergent Fourier series. Fourier series can only be used for periodic functions, or for functions on a bounded (compact) interval. If formula_4 is a periodic function with period formula_10 that can be described by a Fourier series, the coefficients of the series can be described by an integral over an interval of length formula_10. Any function built only from periodic functions with the same period is also periodic (with period equal or smaller); this includes sums, differences, products, and quotients (where defined) of such functions. Generalizations. Antiperiodic functions. One subset of periodic functions is that of antiperiodic functions. This is a function formula_4 such that formula_21 for all formula_22. For example, the sine and cosine functions are formula_23-antiperiodic and formula_0-periodic. While a formula_24-antiperiodic function is a formula_25-periodic function, the converse is not necessarily true. Bloch-periodic functions. A further generalization appears in the context of Bloch's theorems and Floquet theory, which govern the solution of various periodic differential equations. In this context, the solution (in one dimension) is typically a function of the form formula_26 where formula_27 is a real or complex number (the "Bloch wavevector" or "Floquet exponent"). Functions of this form are sometimes called Bloch-periodic in this context. A periodic function is the special case formula_28, and an antiperiodic function is the special case formula_29. Whenever formula_30 is rational, the function is also periodic. Quotient spaces as domain. In signal processing one encounters the problem that Fourier series represent periodic functions, and that Fourier series satisfy convolution theorems (i.e. convolution of Fourier series corresponds to multiplication of the represented periodic functions, and vice versa), but periodic functions cannot be convolved with the usual definition, since the integrals involved diverge. A possible way out is to define a periodic function on a bounded but periodic domain. To this end one can use the notion of a quotient space: formula_31. That is, each element in formula_32 is an equivalence class of real numbers that share the same fractional part. Thus a function like formula_33 is a representation of a 1-periodic function. Calculating period. Consider a real waveform consisting of superimposed frequencies, expressed in a set as ratios to a fundamental frequency, f: F = 1⁄f [f1 f2 f3 ... fN] where all non-zero elements ≥ 1 and at least one of the elements of the set is 1.
To find the period, T, first find the least common denominator of all the elements in the set. Period can be found as T = LCD⁄f. Consider that for a simple sinusoid, T = 1⁄f. Therefore, the LCD can be seen as a periodicity multiplier. If no least common denominator exists, for instance if one of the above elements were irrational, then the wave would not be periodic. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
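The least-common-denominator recipe above can be written down directly with exact rational arithmetic (Python 3.9+ for math.lcm). The frequency ratios in the example below are illustrative.

```python
# Sketch of the LCD recipe above: component frequencies are exact ratios to the
# fundamental f, and the waveform period is T = LCD / f.

from fractions import Fraction
from math import lcm

def waveform_period(ratios: list[Fraction], fundamental_hz: float) -> float:
    """Period of a superposition whose frequencies are `ratios` times `fundamental_hz`."""
    least_common_denominator = lcm(*(r.denominator for r in ratios))
    return least_common_denominator / fundamental_hz

if __name__ == "__main__":
    # Components at f, 1.5f and 2f: denominators are 1, 2, 1, so LCD = 2 and T = 2/f.
    ratios = [Fraction(1), Fraction(3, 2), Fraction(2)]
    print(waveform_period(ratios, fundamental_hz=100.0))   # 0.02 seconds
    # An irrational ratio (e.g. sqrt(2)) has no exact Fraction form, mirroring the remark
    # above that such a superposition is not periodic at all.
```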
[ { "math_id": 0, "text": "2\\pi" }, { "math_id": 1, "text": "f(x+P) = f(x) " }, { "math_id": 2, "text": "\\sin(x + 2\\pi) = \\sin x" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "f(0.5) = f(1.5) = f(2.5) = \\cdots = 0.5" }, { "math_id": 6, "text": "f(x) = \\sin(x)" }, { "math_id": 7, "text": "e^{ikx} = \\cos kx + i\\,\\sin kx." }, { "math_id": 8, "text": "L" }, { "math_id": 9, "text": "L = \\frac{2\\pi}{k}." }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "f(x + nP) = f(x)" }, { "math_id": 13, "text": "f(x)" }, { "math_id": 14, "text": "f(ax)" }, { "math_id": 15, "text": "a" }, { "math_id": 16, "text": "ax" }, { "math_id": 17, "text": "\\frac{P}{a}" }, { "math_id": 18, "text": "2 \\pi" }, { "math_id": 19, "text": "\\sin(5x)" }, { "math_id": 20, "text": "\\frac{2\\pi}{5}" }, { "math_id": 21, "text": "f(x+P) = -f(x)" }, { "math_id": 22, "text": " x" }, { "math_id": 23, "text": "\\pi" }, { "math_id": 24, "text": " P" }, { "math_id": 25, "text": " 2P" }, { "math_id": 26, "text": "f(x+P) = e^{ikP} f(x) ~," }, { "math_id": 27, "text": "k" }, { "math_id": 28, "text": "k=0" }, { "math_id": 29, "text": "k=\\pi/P" }, { "math_id": 30, "text": "k P/ \\pi" }, { "math_id": 31, "text": "{\\mathbb{R}/\\mathbb{Z}}\n = \\{x+\\mathbb{Z} : x\\in\\mathbb{R}\\}\n = \\{\\{y : y\\in\\mathbb{R}\\land y-x\\in\\mathbb{Z}\\} : x\\in\\mathbb{R}\\}" }, { "math_id": 32, "text": "{\\mathbb{R}/\\mathbb{Z}}" }, { "math_id": 33, "text": "f : {\\mathbb{R}/\\mathbb{Z}}\\to\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=76848
7685346
Mycielskian
Derived graph of higher chromatic number In the mathematical area of graph theory, the Mycielskian or Mycielski graph of an undirected graph is a larger graph formed from it by a construction of Jan Mycielski (1955). The construction preserves the property of being triangle-free but increases the chromatic number; by applying the construction repeatedly to a triangle-free starting graph, Mycielski showed that there exist triangle-free graphs with arbitrarily large chromatic number. Construction. Let the "n" vertices of the given graph "G" be "v"1, "v"2, . . . , "v"n. The Mycielski graph μ("G") contains "G" itself as a subgraph, together with "n"+1 additional vertices: a vertex "u""i" corresponding to each vertex "v""i" of "G", and an extra vertex "w". Each vertex "u""i" is connected by an edge to "w", so that these vertices form a subgraph in the form of a star "K"1,"n". In addition, for each edge "v""i""v""j" of "G", the Mycielski graph includes two edges, "u""i""v""j" and "v""i""u""j". Thus, if "G" has "n" vertices and "m" edges, μ("G") has 2"n"+1 vertices and 3"m"+"n" edges. The only new triangles in μ("G") are of the form "v""i""v""j""u""k", where "v""i""v""j""v""k" is a triangle in "G". Thus, if "G" is triangle-free, so is μ("G"). To see that the construction increases the chromatic number formula_0, consider a proper "k"-coloring of formula_1; that is, a mapping formula_2 with formula_3 for adjacent vertices "x","y". If we had formula_4 for all "i", then we could define a proper ("k"−1)-coloring of "G" by formula_5 when formula_6, and formula_7 otherwise. But this is impossible for formula_0, so "c" must use all "k" colors for formula_8, and any proper coloring of the last vertex "w" must use an extra color. That is, formula_9. Iterated Mycielskians. Applying the Mycielskian repeatedly, starting with the one-edge graph, produces a sequence of graphs "M""i" = μ("M""i"−1), sometimes called the Mycielski graphs. The first few graphs in this sequence are the graph "M"2 = "K"2 with two vertices connected by an edge, the cycle graph "M"3 = "C"5, and the Grötzsch graph "M"4 with 11 vertices and 20 edges. In general, the graph "M""i" is triangle-free, ("i"−1)-vertex-connected, and "i"-chromatic. The number of vertices in "M""i" for "i" ≥ 2 is 3 × 2"i"−2 − 1 (sequence in the OEIS), while the number of edges for "i" = 2, 3, . . . is: 1, 5, 20, 71, 236, 755, 2360, 7271, 22196, 67355, ... (sequence in the OEIS). Cones over graphs. A generalization of the Mycielskian, called a cone over a graph, was introduced by and further studied by and . In this construction, one forms a graph formula_10 from a given graph "G" by taking the tensor product "G" × "H", where "H" is a path of length "i" with a self-loop at one end, and then collapsing into a single supervertex all of the vertices associated with the vertex of "H" at the non-loop end of the path. The Mycielskian itself can be formed in this way as μ("G") = Δ2("G"). While the cone construction does not always increase the chromatic number, proved that it does so when applied iteratively to "K"2. That is, define a sequence of families of graphs, called generalized Mycielskians, as ℳ(2) = {"K"2} and ℳ("k"+1) = {formula_10 | "G" ∈ ℳ("k"), i ∈ formula_11}. For example, ℳ(3) is the family of odd cycles. Then each graph in ℳ("k") is "k"-chromatic. The proof uses methods of topological combinatorics developed by László Lovász to compute the chromatic number of Kneser graphs. 
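The construction above translates almost verbatim into code. The sketch below represents a graph as a vertex count plus a set of undirected edges, applies the Mycielskian, and iterates from K2 to reproduce the vertex and edge counts 2n+1 and 3m+n quoted in the article; the labelling scheme (copies at n..2n−1, hub at 2n) is a choice made here.

```python
# Sketch of the Mycielskian construction on graphs given as
# (number of vertices, set of undirected edges), vertices labelled 0..n-1.

def mycielskian(n: int, edges: set[frozenset[int]]):
    """Return mu(G): vertices 0..n-1 are G, n..2n-1 are the u_i copies, 2n is w."""
    new_edges = set(edges)
    w = 2 * n
    for i in range(n):
        new_edges.add(frozenset({n + i, w}))       # star edges u_i -- w
    for e in edges:
        i, j = tuple(e)
        new_edges.add(frozenset({n + i, j}))       # u_i -- v_j
        new_edges.add(frozenset({i, n + j}))       # v_i -- u_j
    return 2 * n + 1, new_edges

if __name__ == "__main__":
    # Start from M2 = K2 and iterate: M3 should be the 5-cycle, M4 the Grötzsch graph.
    n, edges = 2, {frozenset({0, 1})}
    for name in ("M3", "M4", "M5"):
        n, edges = mycielskian(n, edges)
        print(name, "has", n, "vertices and", len(edges), "edges")
    # Expected counts: M3: 5 and 5, M4: 11 and 20, M5: 23 and 71.
```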
The triangle-free property is then strengthened as follows: if one only applies the cone construction Δ"i" for "i" ≥ "r", then the resulting graph has odd girth at least 2"r" + 1, that is, it contains no odd cycles of length less than 2"r" + 1. Thus generalized Mycielskians provide a simple construction of graphs with high chromatic number and high odd girth.
[ { "math_id": 0, "text": "\\chi(G)=k" }, { "math_id": 1, "text": "\\mu(G){-}\\{w\\}" }, { "math_id": 2, "text": "c : \\{v_1,\\ldots,v_n,u_1,\\ldots,u_n\\}\\to \\{1,2,\\ldots,k\\}" }, { "math_id": 3, "text": "c(x)\\neq c(y)" }, { "math_id": 4, "text": "c(u_i)\\in \\{1,2,\\ldots,k{-}1\\}" }, { "math_id": 5, "text": "c'\\!(v_i) = c(u_i)" }, { "math_id": 6, "text": "c(v_i) = k" }, { "math_id": 7, "text": "c'\\!(v_i) = c(v_i)" }, { "math_id": 8, "text": "\\{u_1,\\ldots,u_n\\}" }, { "math_id": 9, "text": "\\chi(\\mu(G))=k{+}1" }, { "math_id": 10, "text": "\\Delta_i(G)" }, { "math_id": 11, "text": "\\mathbb{N}" } ]
https://en.wikipedia.org/wiki?curid=7685346
7685783
Grötzsch graph
Triangle-free graph requiring four colors In the mathematical field of graph theory, the Grötzsch graph is a triangle-free graph with 11 vertices, 20 edges, chromatic number 4, and crossing number 5. It is named after German mathematician Herbert Grötzsch, who used it as an example in connection with his 1959 theorem that planar triangle-free graphs are 3-colorable. The Grötzsch graph is a member of an infinite sequence of triangle-free graphs, each the Mycielskian of the previous graph in the sequence, starting from the one-edge graph; this sequence of graphs was constructed by to show that there exist triangle-free graphs with arbitrarily large chromatic number. Therefore, the Grötzsch graph is sometimes also called the Mycielski graph or the Mycielski–Grötzsch graph. Unlike later graphs in this sequence, the Grötzsch graph is the smallest triangle-free graph with its chromatic number. Properties. The full automorphism group of the Grötzsch graph is isomorphic to the dihedral group D5 of order 10, the group of symmetries of a regular pentagon, including both rotations and reflections. These symmetries have three orbits of vertices: the degree-5 vertex (by itself), its five neighbors, and its five non-neighbors. Similarly, there are three orbits of edges, distinguished by their distance from the degree-5 vertex. The characteristic polynomial of the Grötzsch graph is formula_0 Although it is not a planar graph, it can be embedded in the projective plane without crossings. This embedding has ten faces, all of which are quadrilaterals. Applications. The existence of the Grötzsch graph demonstrates that the assumption of planarity is necessary in Grötzsch's theorem that every triangle-free planar graph is 3-colorable. It has odd girth five but girth four, and does not have any graph homomorphism to a graph whose girth is five or more, so it forms an example that distinguishes odd girth from the maximum girth that can be obtained from a homomorphism. used a modified version of the Grötzsch graph to disprove a conjecture of Paul Erdős and Miklos Simonovits (1973) on the chromatic number of triangle-free graphs with high degree. Häggkvist's modification consists of replacing each of the five degree-four vertices of the Grötzsch graph by a set of three vertices, replacing each of the five degree-three vertices of the Grötzsch graph by a set of two vertices, and replacing the remaining degree-five vertex of the Grötzsch graph by a set of four vertices. Two vertices in this expanded graph are connected by an edge if they correspond to vertices connected by an edge in the Grötzsch graph. The result of Häggkvist's construction is a 10-regular triangle-free graph with 29 vertices and chromatic number 4, disproving the conjecture that there is no 4-chromatic triangle-free formula_1-vertex graph in which each vertex has more than formula_2 neighbours. Every such graph contains the Grötzsch graph as an induced subgraph. Related graphs. The Grötzsch graph shares several properties with the Clebsch graph, a distance-transitive graph with 16 vertices and 40 edges: both the Grötzsch graph and the Clebsch graph are triangle-free and four-chromatic, and neither of them has any six-vertex induced paths. 
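Two of the properties quoted above, triangle-freeness and chromatic number 4, can be verified by brute force on such a small graph. The sketch below builds the Grötzsch graph as the Mycielskian of the 5-cycle and runs a simple backtracking colouring search; the vertex labelling is a choice made for this illustration.

```python
# Brute-force check that the Grötzsch graph (built here as the Mycielskian of C5,
# vertices 0..10) is triangle-free yet admits no proper 3-colouring.

from itertools import combinations

def grotzsch_edges() -> set[frozenset[int]]:
    c5 = {frozenset({i, (i + 1) % 5}) for i in range(5)}            # cycle v_0 .. v_4
    edges = set(c5)
    for e in c5:
        i, j = tuple(e)
        edges |= {frozenset({5 + i, j}), frozenset({i, 5 + j})}     # each u_i joined to v_i's neighbours
    edges |= {frozenset({5 + i, 10}) for i in range(5)}             # hub vertex w = 10
    return edges

def has_triangle(n: int, edges: set[frozenset[int]]) -> bool:
    return any(all(frozenset(pair) in edges for pair in combinations(trio, 2))
               for trio in combinations(range(n), 3))

def colourable(n: int, edges: set[frozenset[int]], k: int) -> bool:
    """Backtracking search for a proper k-colouring of vertices 0..n-1."""
    colours = [None] * n
    def extend(v: int) -> bool:
        if v == n:
            return True
        for c in range(k):
            if all(colours[u] != c for u in range(n) if frozenset({u, v}) in edges):
                colours[v] = c
                if extend(v + 1):
                    return True
        colours[v] = None
        return False
    return extend(0)

if __name__ == "__main__":
    edges = grotzsch_edges()
    print(len(edges))                # 20 edges on 11 vertices
    print(has_triangle(11, edges))   # False: triangle-free
    print(colourable(11, edges, 3))  # False: not 3-colourable
    print(colourable(11, edges, 4))  # True: chromatic number is 4
```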
These properties are close to being enough to characterize these graphs: the Grötzsch graph is an induced subgraph of the Clebsch graph, and every triangle-free four-chromatic formula_3-free graph is likewise an induced subgraph of the Clebsch graph that in turn contains the Grötzsch graph as an induced subgraph. The Chvátal graph is another small triangle-free 4-chromatic graph. However, unlike the Grötzsch graph and the Clebsch graph, the Chvátal graph has a six-vertex induced path. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(x-1)^5 (x^2-x-10) (x^2+3 x+1)^2." }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n/3" }, { "math_id": 3, "text": "P_6" } ]
https://en.wikipedia.org/wiki?curid=7685783
76859820
Martinogale
Extinct genus of skunk Martinogale is an extinct genus of skunk from the Late Miocene of central North America. There are three accepted species, "M. alveodens", "M. chisoensis" and "M. faulli", which may have overlapped in range but occupied somewhat distinct moments of the Late Miocene, as well as the dubious "Martinogale? nambiana". Description and species. Martinogale, as happens with most fossil skunks, has been described solely from fragmentary cranial remains. The genus is mainly characterized by its jaws: in the upper jaw the upper Molarformula_0 is absent and the upper Premolarformula_1 and Molarformula_2 are greatly enlarged but thin, while the lower jaw has a small, forward-placed Pformula_0, lacks a lingual or labial cingulum around the Pformula_1, and has a well developed Mformula_2; neither jaw has a Premolarformula_2 present. In regard to skull morphology, it is smoother and narrower than in living skunks, with a large, flask-shaped basicranial bulla. "Martinogale alveodens". This species was described in 1930 as a small mustelid from a fragmentary lower jaw found in the Edson Quarry, from late Hemphillian Kansas. It was designated the type of the new genus. Due to its fragmentary nature, the placement of "Martinogale" within Mustelidae was uncertain, but seemed feasible due to some similarities to the earlier "Martes nambianus". In 1938, a better preserved jaw indicated similarities with the spotted skunks of Mephitidae. The species name, "alveodens", hails from Latin "alveus", "a hollow, cavity or channel", and "dens", "tooth". "Martinogale chisoensis". The largest species, "M. chisoensis" hails from the early Hemphillian Crew Bean Local; it was described in 2003 based on a rather complete skull. Due to its cranial similarities with "Buisnictis" it was named ""Buisnictis" chisoensis". In 2005, along with the description of "M. faulli", it was reassigned to "Martinogale". The species name, "chisoensis", comes from the Chisos Mountains in Big Bend National Park, Texas, and "ensis", Latin for "from". "Martinogale faulli". The oldest and smallest of the species, "M. faulli" was described in 2005 from a partial skull found in the Late Clarendonian Dove Spring Formation, from Kern County, California. "M. faulli" has a smoother skull than "M. chisoensis" and relatively smaller teeth, with a better defined basicranial bulla. The species name "faulli" is in honor of Mark Faull, a former ranger at Red Rock Canyon State Park. "Martinogale? nambiana". In 1874, a Pformula_3, a Pformula_4 and an incredibly fragmentary Mformula_5 were discovered in the Santa Fé Marls, New Mexico. Cope originally identified it as "Martes nambianus"; uncertain of this association, he moved it a year later to "Mustela nambiana". When Hall erected "Martinogale," he moved "M. nambiana" into his new genus, where it has since remained. In 2005, Wang et al. argued that the few characteristics present in these teeth were too non-specific, simply representing the basal mustelid condition, and that "M? nambiana" should not be considered a part of "Martinogale"; the specimen cannot be ascribed to any concrete genus. Phylogeny. When compared to modern genera, both extant ("Spilogale", "Mephitis" and "Conepatus") and extinct ("Brachyprotoma" and "Osmotherium"), "Martinogale" presents notable differences in its premolar structure, thin postorbital skull, slightly expanded mastoid process and the general structure of the basicranial bulla.
In 2005 Wang et al.'s phylogenetic analysis recovered "Martinogale" as a somewhat paraphyletic association, although as their chronology advances so does their derivation: the geologically younger species are also the more derived ones. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "^2" }, { "math_id": 1, "text": "^4" }, { "math_id": 2, "text": "^1" }, { "math_id": 3, "text": "^3\n" }, { "math_id": 4, "text": "^4\n" }, { "math_id": 5, "text": "^1\n" } ]
https://en.wikipedia.org/wiki?curid=76859820
76861296
Yamada–Watanabe theorem
Theorem in probability theory The Yamada–Watanabe theorem is a result from probability theory saying that for a large class of stochastic differential equations a "weak solution" with "pathwise uniqueness" implies a "strong solution" and "uniqueness in distribution". In its original form, the theorem was stated for formula_0-dimensional "Itô equations" and was proven by Toshio Yamada and Shinzō Watanabe in 1971. Since then, many generalizations have appeared, in particular one for general semimartingales by Jean Jacod from 1980. Yamada–Watanabe theorem. History, generalizations and related results. Jean Jacod generalized the result to SDEs of the form formula_1 where formula_2 is a semimartingale and the coefficient formula_3 can depend on the path of formula_4. Further generalisations were done by Hans-Jürgen Engelbert (1991) and Thomas G. Kurtz (2007). For SDEs in Banach spaces there is a result from Martin Ondrejat (2004), one by Michael Röckner, Byron Schmuland and Xicheng Zhang (2008) and one by Stefan Tappe (2013). The converse of the theorem is also true and is called the "dual Yamada–Watanabe theorem". The first version of this theorem was proven by Engelbert (1991) and a more general version by Alexander Cherny (2002). Setting. Let formula_5 and let formula_6 denote the space of continuous functions. Consider the formula_0-dimensional Itô equation formula_7 where formula_8 and formula_9 are the drift and diffusion coefficients, formula_10 is an formula_11-dimensional Brownian motion, and formula_12 is the initial value. Basic terminology. We say "uniqueness in distribution" (or "weak uniqueness") holds if for two arbitrary solutions formula_13 and formula_14 defined on (possibly different) filtered probability spaces formula_15 and formula_16, we have for their distributions formula_17, where formula_18. We say "pathwise uniqueness" (or "strong uniqueness") holds if any two solutions formula_19 and formula_20, defined on the same filtered probability space formula_21 with the same formula_22-Brownian motion, are indistinguishable processes, i.e. we have formula_23-almost surely that formula_24 Theorem. Assume the setting described above is valid. Then the theorem is: If there is "pathwise uniqueness", then there is also "uniqueness in distribution". And if for every initial distribution there exists a weak solution, then for every initial distribution a pathwise unique strong solution also exists. Jacod's result strengthened the statement with the further assertion that if a weak solution exists and pathwise uniqueness holds, then this solution is also a strong solution.
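The theorem itself is not something one verifies numerically, but the notion of two solutions "driven by the same Brownian motion", which enters the definition of pathwise uniqueness, can be illustrated with a toy Euler–Maruyama simulation. The coefficients below (a Lipschitz drift and constant diffusion) and all names are chosen only for this illustration; they are not part of the theorem.

```python
# Illustrative Euler–Maruyama sketch for dX = b(t, X) dt + sigma(t, X) dW.
# It only shows what "same Brownian path, same solution" means for the Lipschitz
# case, where a pathwise-unique strong solution is classically known to exist.

import random

def euler_maruyama(b, sigma, x0: float, increments: list[float], dt: float) -> float:
    """Run one Euler–Maruyama path using a pre-drawn list of Brownian increments."""
    x, t = x0, 0.0
    for dw in increments:
        x += b(t, x) * dt + sigma(t, x) * dw
        t += dt
    return x

if __name__ == "__main__":
    random.seed(0)
    dt, steps = 1e-3, 1000
    dw = [random.gauss(0.0, dt ** 0.5) for _ in range(steps)]   # one fixed Brownian path

    b = lambda t, x: -x        # Ornstein–Uhlenbeck-type drift (globally Lipschitz)
    sigma = lambda t, x: 0.5   # constant diffusion coefficient

    x_a = euler_maruyama(b, sigma, 1.0, dw, dt)
    x_b = euler_maruyama(b, sigma, 1.0, dw, dt)
    print(x_a == x_b)   # True: same coefficients, same start, same Brownian path
    # With an independent Brownian path the two trajectories would differ, which is
    # why uniqueness in distribution is the weaker of the two notions compared above.
```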
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "dX_t=u(X,Z)dZ_t," }, { "math_id": 2, "text": "(Z_t)_{t\\geq 0}" }, { "math_id": 3, "text": "u" }, { "math_id": 4, "text": "Z" }, { "math_id": 5, "text": "n,r\\in\\mathbb{N}" }, { "math_id": 6, "text": "C(\\R_+,\\R^n)" }, { "math_id": 7, "text": "dX_t=b(t,X)dt+\\sigma(t,X)dW_t,\\quad X_0=x_0" }, { "math_id": 8, "text": "b\\colon \\R_+\\times C(\\R_+,\\R^n)\\to\\R^n" }, { "math_id": 9, "text": "\\sigma \\colon \\R_+\\times C(\\R_+,\\R^n)\\to\\R^{n\\times r}" }, { "math_id": 10, "text": "(W_t)_{t\\geq 0}=\\left((W^{(1)}_t,\\dots,W^{(r)}_t)\\right)_{t\\geq 0}" }, { "math_id": 11, "text": "r" }, { "math_id": 12, "text": "x_0\\in \\R^n" }, { "math_id": 13, "text": "(X^{(1)},W^{(1)})" }, { "math_id": 14, "text": "(X^{(2)},W^{(2)})" }, { "math_id": 15, "text": "(\\Omega_1,\\mathcal{F}_1,\\mathbf{F}_1,P_1)" }, { "math_id": 16, "text": "(\\Omega_2,\\mathcal{F}_2,\\mathbf{F}_2,P_2)" }, { "math_id": 17, "text": "P_{X^{(1)}}=P_{X^{(2)}}" }, { "math_id": 18, "text": "P_{X^{(1)}}:=\\operatorname{Law}(X_t^{1},t\\geq 0)" }, { "math_id": 19, "text": "(X^{(1)},W)" }, { "math_id": 20, "text": "(X^{(2)},W)" }, { "math_id": 21, "text": "(\\Omega,\\mathcal{F},\\mathbf{F},P)" }, { "math_id": 22, "text": "\\mathbf{F}" }, { "math_id": 23, "text": "P" }, { "math_id": 24, "text": "\\{X_t^{(1)}=X_t^{(2)},t\\geq 0\\}" } ]
https://en.wikipedia.org/wiki?curid=76861296
76862447
Probability of superiority
The probability of superiority or common language effect size is the probability that, when sampling a pair of observations from two groups, the observation from the second group will be larger than the sample from the first group. It is used to describe a difference between two groups. D. Wolfe and R. Hogg introduced the concept in 1971. Kenneth McGraw and S. P. Wong returned to the concept in 1992 preferring the term "common language effect size". The term "probability of superiority" was proposed by R. J. Grissom a couple of years later. The probability of superiority can be formalized as formula_0. (D. Wolfe and R. Hogg originally discussed it in the inverted form formula_1). formula_0 is the probability that some value (formula_2) sampled at random from one population is larger than the corresponding score (formula_3) sampled from another population. Examples. McGraw and Wong gave the example of sex differences in height, noting that when comparing a random man with a random woman, the probability that the man will be taller is 92%. (Alternatively, in 92 out of 100 blind dates, the male will be taller than the female.) The population value for the common language effect size is often reported like this, in terms of pairs randomly chosen from the population. Kerby (2014) notes that "a pair", defined as a score in one group paired with a score in another group, is a core concept of the common language effect size. As another example, consider a scientific study (maybe of a treatment for some chronic disease, such as arthritis) with ten people in the treatment group and ten people in a control group. If everyone in the treatment group is compared to everyone in the control group, then there are (10×10=) 100 pairs. At the end of the study, the outcome is rated into a score, for each individual (for example on a scale of mobility and pain, in the case of an arthritis study), and then all the scores are compared between the pairs. The result, as the percent of pairs that support the hypothesis, is the common language effect size. In the example study it could be (let's say) .80, if 80 out of the 100 comparison pairs show a better outcome for the treatment group than the control group, and the report may read as follows: "When a patient in the treatment group was compared to a patient in the control group, in 80 of 100 pairs the treated patient showed a better treatment outcome." The sample value, in for example a study like this, is an unbiased estimator of the population value. Equivalent statistics. An effect size related to the common language effect size is the rank-biserial correlation. This measure was introduced by Cureton as an effect size for the Mann–Whitney "U" test. That is, there are two groups, and scores for the groups have been converted to ranks. The Kerby simple difference formula computes the rank-biserial correlation from the common language effect size. Letting f be the proportion of pairs favorable to the hypothesis (the common language effect size), and letting u be the proportion of pairs not favorable, the rank-biserial r is the simple difference between the two proportions: "r" = "f" − "u". In other words, the correlation is the difference between the common language effect size and its complement. For example, if the common language effect size is 60%, then the rank-biserial r equals 60% minus 40%, or "r" = 0.20. The Kerby formula is directional, with positive values indicating that the results support the hypothesis. 
A non-directional formula for the rank-biserial correlation was provided by Wendt, such that the correlation is always positive.
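The pairwise definitions above translate directly into code: compare every observation in one group with every observation in the other, take f as the favourable share and u as the unfavourable share, and form r = f − u. In the sketch below the two samples are invented, and ties are counted as half a win, which is one common convention the article does not specify.

```python
# Sketch of the common language effect size (probability of superiority) and the
# rank-biserial correlation via Kerby's simple difference formula r = f - u.

from itertools import product

def probability_of_superiority(group_x: list[float], group_y: list[float]) -> float:
    """Share of (x, y) pairs in which x scores higher than y; ties counted as half."""
    pairs = list(product(group_x, group_y))
    wins = sum(1.0 for x, y in pairs if x > y) + sum(0.5 for x, y in pairs if x == y)
    return wins / len(pairs)

def rank_biserial(group_x: list[float], group_y: list[float]) -> float:
    """Kerby's simple difference: favourable share minus unfavourable share."""
    pairs = list(product(group_x, group_y))
    f = sum(1 for x, y in pairs if x > y) / len(pairs)
    u = sum(1 for x, y in pairs if x < y) / len(pairs)
    return f - u

if __name__ == "__main__":
    treated = [6, 7, 8, 7, 9, 6, 8, 7, 9, 8]     # hypothetical outcome scores
    controls = [5, 6, 7, 5, 8, 6, 7, 5, 6, 7]
    print(probability_of_superiority(treated, controls))  # share of the 100 pairs favouring treatment
    print(rank_biserial(treated, controls))               # positive values support the hypothesis
```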
[ { "math_id": 0, "text": " P(X > Y) " }, { "math_id": 1, "text": " P(X < Y) " }, { "math_id": 2, "text": " X " }, { "math_id": 3, "text": " Y " } ]
https://en.wikipedia.org/wiki?curid=76862447
76865988
Minimum effort game
In game theory, the minimum effort game or weakest link game is a game in which each person decides how much effort to put in and is rewarded based on the least amount of effort anyone puts in. It is assumed that the reward per unit of effort is greater than the cost per unit of effort; otherwise there would be no reason to put in effort. Nash equilibria. If there are formula_0 players, the set of effort levels is formula_1, it costs each player formula_2 dollars to put in one unit of effort, and each player is rewarded formula_3 dollars for each unit of effort the laziest person puts in, then there are formula_4 pure-strategy Nash equilibria, one for each formula_5, with each player putting in the same amount of effort formula_6: unilaterally putting in more effort adds cost without any extra reward, while unilaterally putting in less effort lowers the minimum and so reduces the reward by more than it saves in cost. There are formula_7 non-pure (mixed) Nash equilibria, given as follows: each player chooses two effort levels formula_8 and puts in formula_6 units of effort with probability formula_9 and formula_10 units of effort with probability formula_11. In practice. The amount of effort players put in depends on the amount of effort they think other players will put in. In addition, some players will put in more effort than expected in an attempt to get others to put in more effort.
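The equilibrium claims above can be checked mechanically for small instances. The sketch below encodes the payoff b·min(efforts) − c·(own effort) and confirms that every common effort level survives all unilateral deviations; the particular numbers (n = 4, K = 7, b = 2, c = 1) are arbitrary choices satisfying b > c.

```python
# Sketch of the minimum effort game payoff: b * min(all efforts) - c * (own effort),
# with reward b per unit exceeding cost c per unit. The check confirms that "everyone
# plays the same level k" admits no profitable unilateral deviation, for every k.

def payoff(efforts: list[int], player: int, b: float, c: float) -> float:
    return b * min(efforts) - c * efforts[player]

def is_pure_equilibrium(common_level: int, n: int, levels: range, b: float, c: float) -> bool:
    profile = [common_level] * n
    base = payoff(profile, 0, b, c)                      # players are symmetric, so test player 0
    for deviation in levels:
        trial = [deviation] + [common_level] * (n - 1)   # player 0 deviates unilaterally
        if payoff(trial, 0, b, c) > base:
            return False
    return True

if __name__ == "__main__":
    n, levels, b, c = 4, range(1, 8), 2.0, 1.0           # K = 7 effort levels, b > c
    print(all(is_pure_equilibrium(k, n, levels, b, c) for k in levels))   # True
    # The mixed equilibria described above randomise between two levels k < l, playing k
    # with probability (c/b) ** (1 / (n - 1)):
    print((c / b) ** (1 / (n - 1)))
```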
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "A=\\{1,...,K\\}" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": "k\\in A" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\frac{K(K-1)}{2}" }, { "math_id": 8, "text": "k < l" }, { "math_id": 9, "text": "\\left (\\frac{c}{b}\\right )^{\\frac{1}{n-1}}" }, { "math_id": 10, "text": "l" }, { "math_id": 11, "text": "1-\\left (\\frac{c}{b}\\right )^{\\frac{1}{n-1}}" } ]
https://en.wikipedia.org/wiki?curid=76865988
76866010
Center squeeze
Bias of some electoral systems that favors extremists In social choice theory, a center squeeze is a specific type of spoiler effect. In a center squeeze, a majority-preferred and socially-optimal candidate is eliminated in favor of a more extreme alternative under plurality-with-elimination methods like the two-round and ranked-choice runoff (RCV) rules. The presence of more-extreme candidates (with strong core support from their base) "squeezes" a candidate trapped between them, starving them of the first-preference votes they need to survive in early rounds. The term "center squeeze" refers to candidates who are close to the center of public opinion, and as a result is not limited to centrists along the traditional political spectrum. Center squeezes can occur in any situation where voters prefer candidates who hold views similar to their own. By Black's theorem, the candidate who appeals most to the median voter will be the majority-preferred candidate, which means they will be elected by any method compatible with majority rule. However, in methods that strongly prioritize first preferences, majority-preferred candidates often get eliminated early because they aim for broad appeal rather than strong base support. Voting systems that suffer from the center-squeeze effect have a bias in favor of more extreme ideologies. This incentivizes candidates to avoid the political center, creating unrepresentative winners and political polarization in the long run. This effect, long predicted by social choice theorists (particularly mathematicians and economists), has since been observed empirically in Australia, California, and Maine, and is well known for causing polarization in the American primary election process. Famous examples of center squeezes include the 2022 Alaska special election, where moderate Nick Begich III was eliminated in the first round by right-wing Sarah Palin, despite most voters ranking Begich above Palin on their ballots. Another potential example can be found in the 2016 United States presidential election, where polls showed several alternatives defeating both Donald Trump and Hillary Clinton under a majority- or rated rule, but being squeezed out under both IRV and primary elections. The opposite situation (a bias toward "bland" or inoffensive candidates, particularly dark horse candidates) is substantially less common. However, such a situation arguably exists for bottom-heavy methods that elect the "least opposed" candidates, such as anti-plurality voting or Coombs' method. Methods that pass reversal symmetry treat "negative" and "positive" ratings (those near the top and bottom of a voter's ballot) equally. Voting systems that have serious problems with center squeeze include first-preference plurality, plurality-with-primaries, two-round runoff, and ranked-choice runoff voting (RCV). By contrast, Condorcet and rated voting methods are not affected by such pathologies. Condorcet methods are insulated from center squeezes by the median voter theorem, while rated voting systems like score or approval voting are protected by closely related results. Susceptibility. Primary elections. Center squeeze is a major feature of two-party systems using primaries to elect candidates. In this case, the two parties tend to separate ideologically, and a "center" candidate, ideologically between the two, would find themselves unable to win a primary against another candidate closer to the centroid of the party.
The center candidate would win in any one-on-one vote over the whole voting population, but will not win in the subset of the population represented by a party. Surprisingly, this implies electoral reform at the primary level (in the absence of a multi-party system) will tend to have perverse effects and backfire, resulting in greater extremism, because candidates who are more representative of their political parties will tend to be more extreme compared to the population as a whole. Cardinal and Condorcet methods. If voters assign scores to candidates based on ideological distance, score voting will always select the candidate closest to some central tendency of the voter distribution. As a result, while score voting does not pass the median voter theorem "per se", it tends to behave much like methods that do. The specific measure of central tendency minimized by the method depends on the precise way voters score candidates, with different measures of central tendency minimizing different distance metrics. Under the most common models of strategic voting, all spoilerproof cardinal methods will tend to behave like approval voting, and tend to converge on the Condorcet winner. Examples. Alphabet example. In Alphabet Land, voters are divided based on how names should be arranged on lists. formula_0 thinks names should always be in alphabetical order; "formula_1" thinks they should be in reverse-alphabetical order; and formula_2 thinks the order should be randomized. Voters pick the candidate whose name is closest to theirs in alphabetical order. Because formula_2 is preferred to both formula_0 and formula_1 in head-to-head match-ups, formula_2 is the majority-preferred (Condorcet) winner. Under the common assumption that voters' happiness with the outcome fall linearly with the distance between the voter and the candidate, formula_2 is the socially-optimal winner as well. Thus, formula_2 can be considered the "best" or "most popular" candidate by all commonly-used metrics in social choice, and as a result will be elected in the vast majority of electoral systems (including score voting, approval voting, and all Condorcet methods). First-past-the-post. formula_1 wins under a single-round of FPTP, with 35.9% of voters choosing them as their favorite. However, over substantially more voters considered formula_1 to be their least favorite, with 63.1% of voters preferring formula_2. formula_1 is elected, despite an overwhelming two-thirds majority preferring formula_2. Ranked-choice runoff (Alternative, Two-round). Ranked-choice runoff voting (RCV) tries to address vote-splitting in first-past-the-post by replacing it with a series of first-past-the-post elections, where the loser is eliminated each round. While this can prevent an unpopular minor candidate from spoiling a race, as in the 2000 Florida election, it is not able to prevent vote-splitting outside of two-party-dominant elections, as shown here. The first round of the election is the same as the first-past-the-post election, with formula_1 having a slight lead. formula_2 has the least first preferences and is eliminated. Their votes are reassigned to formula_0 and formula_1, according to their voter's ballot preferences. In the second round, enough voters who preferred formula_2 as their first choice took formula_0 as their second choice and formula_0 wins the election. RCV fails to have any moderating impact on the election, instead only causing a swing from one extreme to the other. 2022 Alaska Special Election. 
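The three-candidate example above can be tallied under all three rules with a small script. The ballot counts below are hypothetical, chosen only to roughly match the percentages quoted in the text (the article does not give the full profile), so the point is the qualitative pattern: Z wins under plurality, A under instant runoff, while M beats both rivals head to head.

```python
# Illustrative tally of the Alphabet Land example under plurality, instant runoff,
# and pairwise (Condorcet) comparison. The ballot counts are hypothetical.

from collections import Counter

BALLOTS = {                      # ranking (best to worst): number of voters
    ("A", "M", "Z"): 33,
    ("Z", "M", "A"): 36,
    ("M", "A", "Z"): 20,
    ("M", "Z", "A"): 11,
}

def plurality_winner(ballots):
    tally = Counter()
    for ranking, count in ballots.items():
        tally[ranking[0]] += count
    return tally.most_common(1)[0][0]

def instant_runoff_winner(ballots):
    remaining = {c for ranking in ballots for c in ranking}
    while True:
        tally = Counter({c: 0 for c in remaining})
        for ranking, count in ballots.items():
            tally[next(c for c in ranking if c in remaining)] += count
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(remaining) == 1:
            return leader
        remaining.remove(min(tally, key=tally.get))   # eliminate the last-placed candidate

def condorcet_winner(ballots):
    candidates = {c for ranking in ballots for c in ranking}
    def beats(x, y):
        return sum(n for r, n in ballots.items() if r.index(x) < r.index(y)) * 2 > sum(ballots.values())
    return next((x for x in candidates if all(beats(x, y) for y in candidates if y != x)), None)

if __name__ == "__main__":
    print("plurality:", plurality_winner(BALLOTS))             # Z: most first preferences
    print("instant runoff:", instant_runoff_winner(BALLOTS))   # A: M is squeezed out first
    print("pairwise (Condorcet):", condorcet_winner(BALLOTS))  # M: beats both rivals head to head
```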
The 2022 Alaska special election seat was an infamous example of a center squeeze. The ranked-choice runoff election involved one Democrat (Mary Peltola) and two Republicans (Sarah Palin and Nick Begich III). Because the full ballot data for the race was released, social choice theorists were able to confirm that Palin spoiled the race for Begich, with Peltola winning the race as a result of several pathological behaviors that tend to characterize center-squeeze elections. The election produced a winner opposed by a majority of voters, with most ranking Begich above Peltola and more than half giving Peltola no support at all. The election was also notable as a no-show paradox, where a candidate is eliminated as a result of votes cast in "support" of their candidacy. In this case, ballots ranking Palin first and Begich second instead helped Peltola to win. Many social choice theorists criticized the ranked-choice runoff procedure for its pathological behavior. Along with being a center-squeeze, the election was a negative voting weight event, where a voter's ballot has the opposite of its intended effect (i.e. a candidate winning as a result of "not having enough votes to lose"). In this race, Peltola would have lost if she had received more support from Palin voters, and won as a result of 5,200 ballots that ranked her last (after Palin then Begich). However, social choice theorists were careful to note the results likely would have been the same under Alaska's previous primary system as well. This led several to recommend replacing the system with any of several alternatives without these behaviors, such as STAR, approval, or Condorcet voting. 2009 Burlington mayoral election. The 2009 Burlington mayoral election was held in March 2009 for the city of Burlington, Vermont, and serves as an example of a four-candidate center squeeze. This was the second mayoral election since the city's 2005 change to ranked-choice runoff voting, after the 2006 mayoral election. In the 2009 election, incumbent Burlington mayor (Bob Kiss) won reelection as a member of the Vermont Progressive Party, defeating Kurt Wright in the final round with 48% of the vote (51.5% excluding exhausted ballots). Some mathematicians and voting theorists criticized the election results as revealing several pathologies associated with ranked-choice runoff voting, including the monotonicity criterion, noting that Kiss was elected as a result of 750 votes cast against him (ranking Kiss in last place). Several electoral reform advocates branded the election a failure after Kiss was elected despite 54% of voters voting for Montroll over Kiss, violating the principle of majority rule. Later analyses showed the race was spoiled, with Wright acting as a spoiler pulling moderate votes from Montroll, who would have beaten Kiss in a one-on-one race. The resulting controversy culminated in a successful 2010 initiative repealing RCV by a vote of 52% to 48%. Tournament matrix. The results of every possible one-on-one election can be completed as follows: This leads to an overall preference ranking of: Montroll was therefore preferred over Kiss by 54% of voters, was preferred over Wright by 56% of voters, and over Smith by 60%, and over Simpson by 91% of voters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "Z" }, { "math_id": 2, "text": "M" } ]
https://en.wikipedia.org/wiki?curid=76866010