14681615
Thiamine-diphosphate kinase
In enzymology, a thiamine-diphosphate kinase is an enzyme involved in thiamine metabolism. It catalyzes the chemical reaction thiamine diphosphate + ATP formula_0 thiamine triphosphate + ADP Thus, the two substrates of this enzyme are ATP and thiamine diphosphate, whereas its two products are ADP and thiamine triphosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with a phosphate group as acceptor. The systematic name of this enzyme class is ATP:thiamine-diphosphate phosphotransferase. Other names in common use include ATP:thiamin-diphosphate phosphotransferase, TDP kinase, thiamin diphosphate kinase, thiamin diphosphate phosphotransferase, thiamin pyrophosphate kinase, thiamine diphosphate kinase, and protein bound thiamin diphosphate:ATP phosphoryltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681615
14681638
Thiamine diphosphokinase
Class of enzymes In enzymology, a thiamine diphosphokinase (EC 2.7.6.2) is an enzyme that catalyzes the chemical reaction ATP + thiamine formula_0 AMP + thiamine diphosphate Thus, the two substrates of this enzyme are ATP and thiamine, whereas its two products are AMP and thiamine diphosphate. This enzyme belongs to the family of transferases, specifically those transferring two phosphorus-containing groups (diphosphotransferases). The systematic name of this enzyme class is ATP:thiamine diphosphotransferase. Other names in common use include thiamin kinase, thiamine pyrophosphokinase, ATP:thiamin pyrophosphotransferase, thiamin pyrophosphokinase, thiamin pyrophosphotransferase, thiaminokinase, thiamin:ATP pyrophosphotransferase, and TPTase. This enzyme participates in thiamine metabolism. Structural studies. As of late 2007, six structures have been solved for this class of enzymes, with PDB accession codes 1IG0, 1IG3, 2F17, 2G9Z, 2HH9, and 2OMK. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681638
14681675
Thiamine kinase
In enzymology, a thiamine kinase (EC 2.7.1.89) is an enzyme that catalyzes the chemical reaction ATP + thiamine formula_0 ADP + thiamine phosphate Thus, the two substrates of this enzyme are ATP and thiamine, whereas its two products are ADP and thiamine phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:thiamine phosphotransferase. Other names in common use include thiamin kinase (phosphorylating), thiamin phosphokinase, ATP:thiamin phosphotransferase, and thiamin kinase. This enzyme participates in thiamine metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681675
14681695
Thiamine-phosphate kinase
In enzymology, a thiamine-phosphate kinase (EC 2.7.4.16) is an enzyme that catalyzes the chemical reaction ATP + thiamine phosphate formula_0 ADP + thiamine diphosphate Thus, the two substrates of this enzyme are ATP and thiamine phosphate, whereas its two products are ADP and thiamine diphosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with a phosphate group as acceptor. The systematic name of this enzyme class is ATP:thiamine-phosphate phosphotransferase. Other names in common use include thiamin-monophosphate kinase, thiamin monophosphatase, and thiamin monophosphokinase. This enzyme participates in thiamine metabolism. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1VQV. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681695
14681732
Triokinase
In enzymology, a triokinase (EC 2.7.1.28) is an enzyme that catalyzes the chemical reaction ATP + D-glyceraldehyde formula_0 ADP + D-glyceraldehyde 3-phosphate Thus, the two substrates of this enzyme are ATP and D-glyceraldehyde, whereas its two products are ADP and D-glyceraldehyde 3-phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:D-glyceraldehyde 3-phosphotransferase. This enzyme is also called triose kinase. This enzyme participates in fructose metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681732
14681752
Triphosphate—protein phosphotransferase
In enzymology, a triphosphate-protein phosphotransferase (EC 2.7.99.1) is an enzyme that catalyzes the chemical reaction triphosphate + [microsomal-membrane protein] formula_0 diphosphate + phospho-[microsomal-membrane protein] Thus, the two substrates of this enzyme are triphosphate and microsomal-membrane protein, whereas its two products are diphosphate and phospho-[microsomal-membrane protein]. Classification. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups that are not covered by other phosphotransferase families. Nomenclature. The systematic name of this enzyme class is triphosphate:[microsomal-membrane-protein] phosphotransferase. Other names in common use, all regarded as erroneous, include diphosphate:microsomal-membrane-protein O-phosphotransferase, DiPPT, pyrophosphate:protein phosphotransferase, diphosphate-protein phosphotransferase, and diphosphate:[microsomal-membrane-protein] O-phosphotransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681752
14681776
Triphosphoribosyl-dephospho-CoA synthase
In enzymology, a triphosphoribosyl-dephospho-CoA synthase (EC 2.7.8.25) is an enzyme that catalyzes the chemical reaction ATP + 3-dephospho-CoA formula_0 2'-(5"-triphosphoribosyl)-3'-dephospho-CoA + adenine Thus, the two substrates of this enzyme are ATP and 3-dephospho-CoA, whereas its two products are 2'-(5"-triphosphoribosyl)-3'-dephospho-CoA and adenine. This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. The systematic name of this enzyme class is ATP:3-dephospho-CoA 5"-triphosphoribosyltransferase. Other names in common use include 2'-(5"-triphosphoribosyl)-3-dephospho-CoA synthase, ATP:dephospho-CoA 5-triphosphoribosyl transferase, and CitG. This enzyme participates in two-component system - general. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681776
14681822
CCA tRNA nucleotidyltransferase
CCA tRNA nucleotidyltransferase (EC 2.7.7.72, "CCA-adding enzyme", "tRNA adenylyltransferase", "tRNA CCA-pyrophosphorylase", "tRNA-nucleotidyltransferase", "transfer-RNA nucleotidyltransferase", "transfer ribonucleic acid nucleotidyl transferase", "CTP(ATP):tRNA nucleotidyltransferase", "transfer ribonucleate adenylyltransferase", "transfer ribonucleate adenyltransferase", "transfer RNA adenylyltransferase", "transfer ribonucleate nucleotidyltransferase", "ATP (CTP):tRNA nucleotidyltransferase", "ribonucleic cytidylic cytidylic adenylic pyrophosphorylase", "transfer ribonucleic adenylyl (cytidylyl) transferase", "transfer ribonucleic-terminal trinucleotide nucleotidyltransferase", "transfer ribonucleate cytidylyltransferase", "ribonucleic cytidylyltransferase", "-C-C-A pyrophosphorylase", "ATP(CTP)-tRNA nucleotidyltransferase", "tRNA adenylyl(cytidylyl)transferase", "CTP:tRNA cytidylyltransferase") is an enzyme with systematic name "CTP,CTP,ATP:tRNA cytidylyl,cytidylyl,adenylyltransferase". This enzyme catalyses the following chemical reaction a tRNA precursor + 2 CTP + ATP formula_0 a tRNA with a 3' CCA end + 3 diphosphate (overall reaction) (1a) a tRNA precursor + CTP formula_0 a tRNA with a 3' cytidine end + diphosphate (1b) a tRNA with a 3' cytidine + CTP formula_0 a tRNA with a 3' CC end + diphosphate (1c) a tRNA with a 3' CC end + ATP formula_0 a tRNA with a 3' CCA end + diphosphate The acylation of all tRNAs with an amino acid occurs at the terminal ribose of a 3' CCA sequence. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681822
14681846
TRNA nucleotidyltransferase
In enzymology, a tRNA nucleotidyltransferase (EC 2.7.7.56) is an enzyme that catalyzes the chemical reaction tRNA(n+1) + phosphate formula_0 tRNA(n) + a nucleoside diphosphate where tRNA-N is a product of transcription, and tRNA nucleotidyltransferase catalyzes this cytidine-cytidine-adenosine (CCA) addition to form the tRNA-NCCA product. Function. Protein synthesis takes place in cytosolic ribosomes, mitochondria (mitoribosomes), and in plants, the plastids (chloroplast ribosomes). Each of these compartments requires a complete set of functional tRNAs to carry out protein synthesis. The production of mature tRNAs requires processing and modification steps such as the addition of a 3’-terminal cytidine-cytidine-adenosine (CCA). Since no plant tRNA genes encode this particular sequence, a tRNA nucleotidyltransferase must add this sequence post-transcriptionally and therefore is present in all three compartments. In eukaryotes, multiple forms of tRNA nucleotidyltransferases are synthesized from a single gene and are distributed to different subcellular compartments in the cell. There are multiple in-frame start codons which allow for the production of variant forms of the enzyme containing different targeting information, predominantly found in the N-terminal sequence of the protein. In vivo experiments show that the N-terminal sequences are used as transit peptides for import into the mitochondria and plastids. Comparison studies using available tRNA nucleotidyltransferase sequences have identified a single gene coding for this enzyme in plants. Complementation studies in yeast using cDNA derived from "Arabidopsis thaliana" or "Lupinus albus" genes demonstrate the biological activity of these enzymes. The enzyme has also been shown to repair damaged or incomplete CCA sequences in yeast. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681846
14681852
Segmentation-based object categorization
The image segmentation problem is concerned with partitioning an image into multiple regions according to some homogeneity criterion. This article is primarily concerned with graph theoretic approaches to image segmentation applying graph partitioning via minimum cut or maximum cut. Segmentation-based object categorization can be viewed as a specific case of spectral clustering applied to image segmentation. Segmentation using normalized cuts. Graph theoretic formulation. The set of points in an arbitrary feature space can be represented as a weighted undirected complete graph G = (V, E), where the nodes of the graph are the points in the feature space. The weight formula_0 of an edge formula_1 is a function of the similarity between the nodes formula_2 and formula_3. In this context, we can formulate the image segmentation problem as a graph partitioning problem that asks for a partition formula_4 of the vertex set formula_5, where, according to some measure, the vertices in any set formula_6 have high similarity, and the vertices in two different sets formula_7 have low similarity. Normalized cuts. Let "G" = ("V", "E", "w") be a weighted graph. Let formula_8 and formula_9 be two subsets of vertices. Let: formula_10 formula_11 formula_12 In the normalized cuts approach, for any cut formula_13 in formula_14, formula_15 measures the similarity between different parts, and formula_16 measures the total similarity of vertices in the same part. Since formula_17, a cut formula_18 that minimizes formula_15 also maximizes formula_16. Computing a cut formula_18 that minimizes formula_15 is an NP-hard problem. However, we can find in polynomial time a cut formula_13 of small normalized weight formula_15 using spectral techniques. The ncut algorithm. Let: formula_19 Also, let "D" be an formula_20 diagonal matrix with formula_21 on the diagonal, and let formula_22 be an formula_20 symmetric matrix with formula_23. After some algebraic manipulations, we get: formula_24 subject to the constraints formula_25, for some constant formula_26, and formula_27. Minimizing formula_28 subject to the constraints above is NP-hard. To make the problem tractable, we relax the constraints on formula_29, and allow it to take real values. The relaxed problem can be solved by solving the generalized eigenvalue problem formula_30 for the second smallest generalized eigenvalue. The partitioning algorithm proceeds as follows: given a set of features, set up a weighted graph formula_31, compute the weight of each edge, and summarize the information in formula_32 and formula_22; solve formula_30 for eigenvectors with the smallest eigenvalues; use the eigenvector with the second smallest eigenvalue to bipartition the graph; and decide whether the current partition should be subdivided, recursively partitioning the segmented parts if necessary. Computational complexity. Solving a standard eigenvalue problem for all eigenvectors (using the QR algorithm, for instance) takes formula_33 time. This is impractical for image segmentation applications where formula_34 is the number of pixels in the image. Since only one eigenvector, corresponding to the second smallest generalized eigenvalue, is used by the ncut algorithm, efficiency can be dramatically improved if the corresponding eigenvalue problem is solved in a matrix-free fashion, i.e., without explicitly manipulating or even computing the matrix W, as, e.g., in the Lanczos algorithm. Matrix-free methods require only a function that performs a matrix-vector product for a given vector, on every iteration. For image segmentation, the matrix W is typically sparse, with a number of nonzero entries formula_35, so such a matrix-vector product takes formula_35 time. For high-resolution images, the second eigenvalue is often ill-conditioned, leading to slow convergence of iterative eigenvalue solvers, such as the Lanczos algorithm. Preconditioning is a key technology accelerating the convergence, e.g., in the matrix-free LOBPCG method.
Computing the eigenvector using an optimally preconditioned matrix-free method takes formula_35 time, which is the optimal complexity, since the eigenvector has formula_34 components. Software implementations. scikit-learn uses LOBPCG from SciPy with algebraic multigrid preconditioning for solving the eigenvalue problem for the graph Laplacian to perform image segmentation via spectral graph partitioning, as first proposed and subsequently tested in the cited references. OBJ CUT. OBJ CUT is an efficient method that automatically segments an object. The OBJ CUT method is a generic method, and therefore it is applicable to any object category model. Given an image D containing an instance of a known object category, e.g., cows, the OBJ CUT algorithm computes a segmentation of the object, that is, it infers a set of labels "m". Let "m" be a set of binary labels, and let formula_36 be a shape parameter (formula_36 is a shape prior on the labels from a layered pictorial structure (LPS) model). An energy function formula_37 is defined as follows. formula_38 (1) The term formula_39 is called a unary term, and the term formula_40 is called a pairwise term. A unary term consists of the likelihood formula_41 based on color, and the unary potential formula_42 based on the distance from formula_36. A pairwise term consists of a prior formula_43 and a contrast term formula_44. The best labeling formula_45 minimizes formula_46, where formula_47 is the weight of the parameter formula_48. formula_49 (2) References. <templatestyles src="Reflist/styles.css" />
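To make the relaxed ncut formulation concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the toy graph, its weights, and the function name are made up for illustration, not taken from the article's sources). It builds a small affinity matrix, solves the generalized eigenproblem (D − W)y = λDy, and bipartitions the vertices on the sign of the eigenvector for the second-smallest eigenvalue. For real images one would use a sparse affinity matrix and a matrix-free, preconditioned solver such as LOBPCG, as discussed above; a dense solver is used here only for clarity.

```python
# Minimal sketch of the relaxed ncut bipartition described above.
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Split a graph with symmetric affinity matrix W into two parts."""
    d = W.sum(axis=1)          # degrees d(i) = sum_j w_ij
    D = np.diag(d)
    L = D - W                  # graph Laplacian
    vals, vecs = eigh(L, D)    # generalized symmetric eigenproblem, ascending eigenvalues
    fiedler = vecs[:, 1]       # eigenvector for the second-smallest eigenvalue
    return fiedler >= 0        # boolean partition labels

# Two tightly connected groups of 5 nodes joined by one weak edge.
n = 10
W = np.zeros((n, n))
W[:5, :5] = 1.0
W[5:, 5:] = 1.0
W[4, 5] = W[5, 4] = 0.01
np.fill_diagonal(W, 0.0)
print(ncut_bipartition(W))      # expected: first 5 nodes separated from last 5
```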
[ { "math_id": 0, "text": "w_{ij}" }, { "math_id": 1, "text": "(i, j) \\in E" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "V_1, \\cdots, V_k" }, { "math_id": 5, "text": "V" }, { "math_id": 6, "text": "V_i" }, { "math_id": 7, "text": "V_i, V_j" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "B" }, { "math_id": 10, "text": "w(A, B) = \\sum \\limits_{i \\in A, j \\in B} w_{ij}" }, { "math_id": 11, "text": "\\operatorname{ncut}(A, B) = \\frac{w(A, B)}{w(A, V)} + \\frac{w(A, B)}{w(B, V)}" }, { "math_id": 12, "text": "\\operatorname{nassoc}(A, B) = \\frac{w(A, A)}{w(A, V)} + \\frac{w(B, B)}{w(B, V)}" }, { "math_id": 13, "text": "(S, \\overline{S})" }, { "math_id": 14, "text": "G" }, { "math_id": 15, "text": "\\operatorname{ncut}(S, \\overline{S})" }, { "math_id": 16, "text": "\\operatorname{nassoc}(S, \\overline{S})" }, { "math_id": 17, "text": "\\operatorname{ncut}(S, \\overline{S}) = 2 - \\operatorname{nassoc}(S, \\overline{S})" }, { "math_id": 18, "text": "(S^{*}, {\\overline{S}}^{*})" }, { "math_id": 19, "text": "d(i) = \\sum \\limits_j w_{ij}" }, { "math_id": 20, "text": "n \\times n" }, { "math_id": 21, "text": "d" }, { "math_id": 22, "text": "W" }, { "math_id": 23, "text": "w_{ij} = w_{ji}" }, { "math_id": 24, "text": "\\min \\limits_{(S, \\overline{S})} \\operatorname{ncut}(S, \\overline{S}) = \\min \\limits_y \\frac{y^T (D - W) y}{y^T D y}" }, { "math_id": 25, "text": "y_i \\in \\{1, -b \\}" }, { "math_id": 26, "text": "-b" }, { "math_id": 27, "text": "y^t D 1 = 0 " }, { "math_id": 28, "text": "\\frac{y^T (D - W) y}{y^T D y}" }, { "math_id": 29, "text": "y" }, { "math_id": 30, "text": "(D - W)y = \\lambda D y" }, { "math_id": 31, "text": "G = (V, E)" }, { "math_id": 32, "text": "D" }, { "math_id": 33, "text": "O(n^3)" }, { "math_id": 34, "text": "n" }, { "math_id": 35, "text": "O(n)" }, { "math_id": 36, "text": "\\Theta" }, { "math_id": 37, "text": "E(m, \\Theta)" }, { "math_id": 38, "text": "E(m, \\Theta) = \\sum \\phi_x(D|m_x) + \\phi_x(m_x|\\Theta) + \\sum \\Psi_{xy}(m_x, m_y) + \\phi(D|m_x, m_y)" }, { "math_id": 39, "text": "\\phi_x(D|m_x) + \\phi_x(m_x|\\Theta)" }, { "math_id": 40, "text": "\\Psi_{xy}(m_x, m_y) + \\phi(D|m_x, m_y)" }, { "math_id": 41, "text": "\\phi_x(D|m_x)" }, { "math_id": 42, "text": "\\phi_x(m_x|\\Theta)" }, { "math_id": 43, "text": "\\Psi_{xy}(m_x, m_y)" }, { "math_id": 44, "text": "\\phi(D|m_x, m_y)" }, { "math_id": 45, "text": "m^{*}" }, { "math_id": 46, "text": "\\sum \\limits_i w_i E(m, \\Theta_i)" }, { "math_id": 47, "text": "w_i" }, { "math_id": 48, "text": "\\Theta_i" }, { "math_id": 49, "text": "m^{*} = \\arg \\min \\limits_m \\sum \\limits_i w_i E(m, \\Theta_i)" }, { "math_id": 50, "text": "\\Theta_1, \\cdots, \\Theta_s" }, { "math_id": 51, "text": "E(m, \\Theta_i)" }, { "math_id": 52, "text": "w_i = g(\\Theta_i|Z)" } ]
https://en.wikipedia.org/wiki?curid=14681852
14681874
Tropomyosin kinase
In enzymology, a tropomyosin kinase (EC 2.7.11.28) is an enzyme that catalyzes the chemical reaction ATP + tropomyosin formula_0 ADP + O-phosphotropomyosin Thus, the two substrates of this enzyme are ATP and tropomyosin, whereas its two products are ADP and O-phosphotropomyosin. This enzyme belongs to the family of transferases, specifically those transferring a phosphate group to the sidechain oxygen atom of serine or threonine residues in proteins (protein-serine/threonine kinases). The systematic name of this enzyme class is ATP:tropomyosin O-phosphotransferase. Other names in common use include tropomyosin kinase (phosphorylating), and STK. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681874
14681903
(tyrosine 3-monooxygenase) kinase
Class of enzymes In enzymology, a [tyrosine 3-monooxygenase] kinase (EC 2.7.11.6) is an enzyme that catalyzes the chemical reaction ATP + [tyrosine-3-monooxygenase] formula_0 ADP + phospho-[tyrosine-3-monooxygenase] Thus, the two substrates of this enzyme are ATP and tyrosine 3-monooxygenase, whereas its two products are ADP and phospho-(tyrosine-3-monooxygenase). This enzyme belongs to the family of transferases, specifically those transferring a phosphate group to the sidechain oxygen atom of serine or threonine residues in proteins (protein-serine/threonine kinases). The systematic name of this enzyme class is ATP:[tyrosine-3-monooxygenase] phosphotransferase. Other names in common use include pheochromocytoma tyrosine hydroxylase-associated kinase, STK4, and tyrosine 3-monooxygenase kinase (phosphorylating). This enzyme participates in MAPK signaling pathway and non-small cell lung cancer. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681903
14681942
UDP-glucose—glycoprotein glucose phosphotransferase
Class of enzymes In enzymology, an UDP-glucose—glycoprotein glucose phosphotransferase (EC 2.7.8.19) is an enzyme that catalyzes the chemical reaction UDP-glucose + glycoprotein D-mannose formula_0 UMP + glycoprotein 6-(D-glucose-1-phospho)-D-mannose Thus, the two substrates of this enzyme are UDP-glucose and glycoprotein D-mannose, whereas its two products are UMP and glycoprotein 6-(D-glucose-1-phospho)-D-mannose. This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. The systematic name of this enzyme class is UDP-glucose:glycoprotein-D-mannose glucosephosphotransferase. Other names in common use include UDP-glucose:glycoprotein glucose-1-phosphotransferase, GlcPTase, Glc-phosphotransferase, and uridine diphosphoglucose-glycoprotein glucose-1-phosphotransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681942
14681973
UDP-glucose—hexose-1-phosphate uridylyltransferase
Class of enzymes In enzymology, an UDP-glucose—hexose-1-phosphate uridylyltransferase (EC 2.7.7.12) is an enzyme that catalyzes the chemical reaction UDP-glucose + alpha-D-galactose 1-phosphate formula_0 alpha-D-glucose 1-phosphate + UDP-galactose Thus, the two substrates of this enzyme are UDP-glucose and alpha-D-galactose 1-phosphate, whereas its two products are alpha-D-glucose 1-phosphate and UDP-galactose. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is UDP-glucose:alpha-D-galactose-1-phosphate uridylyltransferase. Other names in common use include uridyl transferase, hexose-1-phosphate uridylyltransferase, uridyltransferase, and hexose 1-phosphate uridyltransferase. This enzyme participates in galactose metabolism and nucleotide sugars metabolism. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1HXQ, 2H39, 2Q4H, and 2Q4L. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681973
14681999
UDP-N-acetylglucosamine diphosphorylase
Class of enzymes In enzymology, an UDP-N-acetylglucosamine diphosphorylase (EC 2.7.7.23) is an enzyme that catalyzes the chemical reaction UTP + N-acetyl-alpha-D-glucosamine 1-phosphate formula_0 diphosphate + UDP-N-acetyl-D-glucosamine Thus, the two substrates of this enzyme are UTP and N-acetyl-alpha-D-glucosamine 1-phosphate, whereas its two products are diphosphate and UDP-N-acetyl-D-glucosamine. This enzyme participates in aminosugars metabolism. Nomenclature. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is UTP:N-acetyl-alpha-D-glucosamine-1-phosphate uridylyltransferase. Other names in common use include UDP-N-acetylglucosamine pyrophosphorylase, uridine diphosphoacetylglucosamine pyrophosphorylase, UTP:2-acetamido-2-deoxy-alpha-D-glucose-1-phosphate uridylyltransferase, UDP-GlcNAc pyrophosphorylase, GlmU uridylyltransferase, UDP-acetylglucosamine pyrophosphorylase, uridine diphosphate-N-acetylglucosamine pyrophosphorylase, uridine diphosphoacetylglucosamine phosphorylase, and acetylglucosamine 1-phosphate uridylyltransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14681999
14682026
UMP kinase
Class of enzymes In enzymology, an UMP kinase (EC 2.7.4.22) is an enzyme that catalyzes the chemical reaction ATP + UMP formula_0 ADP + UDP Thus, the two substrates of this enzyme are ATP and UMP, whereas its two products are ADP and UDP. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with a phosphate group as acceptor. The systematic name of this enzyme class is ATP:UMP phosphotransferase. Other names in common use include uridylate kinase, UMPK, uridine monophosphate kinase, PyrH, UMP-kinase, and SmbA. This enzyme participates in pyrimidine metabolism. Structural studies. As of March 2010, 19 structures have been solved for this class of enzymes, and are deposited in the PDB. All have a 3-layer (aba) sandwich architecture (CATH code 3.40.1160.10). These include accession codes 2J4J, 2J4K, 2J4L, and 2VA1. All UMP kinases in the PDB can be found by entering the EC number in the Enzyme Browser at PDBe. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682026
14682074
Cytochrome c oxidase subunit III
Enzyme of the respiratory chain encoded by the mitochondrial genome Cytochrome c oxidase subunit III (COX3) is an enzyme that in humans is encoded by the "MT-CO3" gene. It is one of the main transmembrane subunits of cytochrome c oxidase. It is also one of the three mitochondrial DNA (mtDNA) encoded subunits (MT-CO1, MT-CO2, MT-CO3) of respiratory complex IV. Variants of it have been associated with isolated myopathy, severe encephalomyopathy, Leber hereditary optic neuropathy, mitochondrial complex IV deficiency, and recurrent myoglobinuria. Structure. The "MT-CO3" gene produces a 30 kDa protein composed of 261 amino acids. COX3, the protein encoded by this gene, is a member of the cytochrome c oxidase subunit 3 family. This protein is located on the inner mitochondrial membrane. COX3 is a multi-pass transmembrane protein: in humans, it contains 7 transmembrane domains at positions 15–35, 42–59, 81–101, 127–147, 159–179, 197–217, and 239–259. Function. Cytochrome c oxidase (EC 1.9.3.1) is the terminal enzyme of the respiratory chain of mitochondria and many aerobic bacteria. It catalyzes the transfer of electrons from reduced cytochrome c to molecular oxygen: 4 cytochrome c2+ + 4 H+ + O2 formula_0 4 cytochrome c3+ + 2 H2O This reaction is coupled to the pumping of four additional protons across the mitochondrial or bacterial membrane. Cytochrome c oxidase is an oligomeric enzymatic complex that is located in the mitochondrial inner membrane of eukaryotes and in the plasma membrane of aerobic prokaryotes. The core structure of prokaryotic and eukaryotic cytochrome c oxidase contains three common subunits, I, II and III. In prokaryotes, subunits I and III can be fused and a fourth subunit is sometimes found, whereas in eukaryotes there are a variable number of additional small subunits. As the bacterial respiratory systems are branched, they have a number of distinct terminal oxidases, rather than the single cytochrome c oxidase present in the eukaryotic mitochondrial systems. Although the cytochrome o oxidases catalyze the oxidation of quinol (ubiquinol) rather than of cytochrome c, they belong to the same haem-copper oxidase superfamily as cytochrome c oxidases. Members of this family share sequence similarities in all three core subunits: subunit I is the most conserved subunit, whereas subunit II is the least conserved. Clinical significance. Mutations in mtDNA-encoded cytochrome c oxidase subunit genes have been observed to be associated with isolated myopathy, severe encephalomyopathy, Leber hereditary optic neuropathy, mitochondrial complex IV deficiency, and recurrent myoglobinuria. Leber hereditary optic neuropathy (LHON). LHON is a maternally inherited disease resulting in acute or subacute loss of central vision, due to optic nerve dysfunction. Cardiac conduction defects and neurological defects have also been described in some patients. LHON results from primary mitochondrial DNA mutations affecting the respiratory chain complexes. Mutations at positions 9438 and 9804, which result in glycine-78 to serine and alanine-200 to threonine amino acid changes, have been associated with this disease. Mitochondrial complex IV deficiency (MT-C4D). Complex IV deficiency (COX deficiency) is a disorder of the mitochondrial respiratory chain with heterogeneous clinical manifestations, ranging from isolated myopathy to severe multisystem disease affecting several tissues and organs.
Features include hypertrophic cardiomyopathy, hepatomegaly and liver dysfunction, hypotonia, muscle weakness, exercise intolerance, developmental delay, delayed motor development, mental retardation, lactic acidemia, encephalopathy, ataxia, and cardiac arrhythmia. Some affected individuals manifest a fatal hypertrophic cardiomyopathy resulting in neonatal death, and a subset of patients manifest Leigh syndrome. The mutations G7970T and G9952A have been associated with this disease. Recurrent myoglobinuria, mitochondrial (RM-MT). Recurrent myoglobinuria is characterized by recurrent attacks of rhabdomyolysis (necrosis or disintegration of skeletal muscle) associated with muscle pain and weakness, and followed by excretion of myoglobin in the urine. It has been associated with mitochondrial complex IV deficiency. Interactions. COX3 has been shown to have 15 binary protein-protein interactions including 8 co-complex interactions. COX3 appears to interact with SNCA, KRAS, RAC1, and HSPB2. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" /> External links. "This article incorporates text from the United States National Library of Medicine, which is in the public domain."
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682074
14682075
Undecaprenol kinase
Class of enzymes In enzymology, an undecaprenol kinase (EC 2.7.1.66) is an enzyme that catalyzes the chemical reaction ATP + undecaprenol formula_0 ADP + undecaprenyl phosphate Thus, the two substrates of this enzyme are ATP and undecaprenol, whereas its two products are ADP and undecaprenyl phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:undecaprenol phosphotransferase. Other names in common use include isoprenoid alcohol kinase, isoprenoid alcohol phosphokinase, C55-isoprenoid alcohol phosphokinase, isoprenoid alcohol kinase (phosphorylating), C55-isoprenoid alcohol kinase, C55-isoprenyl alcohol phosphokinase, and polyisoprenol kinase. This enzyme participates in peptidoglycan biosynthesis. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682075
14682099
Undecaprenyl-phosphate galactose phosphotransferase
Class of enzymes In enzymology, an undecaprenyl-phosphate galactose phosphotransferase (EC 2.7.8.6) is an enzyme that catalyzes the chemical reaction UDP-galactose + undecaprenyl phosphate formula_0 UMP + alpha-D-galactosyl-diphosphoundecaprenol Thus, the two substrates of this enzyme are UDP-galactose and undecaprenyl phosphate, whereas its two products are UMP and alpha-D-galactosyl-diphosphoundecaprenol. This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. The systematic name of this enzyme class is UDP-galactose:undecaprenyl-phosphate galactose phosphotransferase. Other names in common use include poly(isoprenol)-phosphate galactose phosphotransferase, poly(isoprenyl)phosphate galactosephosphatetransferase, and undecaprenyl phosphate galactosyl-1-phosphate transferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682099
14682136
Uridine kinase
Class of enzymes In enzymology, an uridine kinase (EC 2.7.1.48) is an enzyme that catalyzes the chemical reaction ATP + uridine formula_0 ADP + UMP Thus, the two substrates of this enzyme are ATP and uridine, whereas its two products are ADP and UMP. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:uridine 5'-phosphotransferase. Other names in common use include pyrimidine ribonucleoside kinase, uridine-cytidine kinase, uridine kinase (phosphorylating), and uridine phosphokinase. This enzyme participates in pyrimidine metabolism. Structural studies. As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1UDW, 1UEI, 1UEJ, 1UFQ, 1UJ2, 1XRJ, 2JEO, and 2UVQ. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682136
14682166
UTP—hexose-1-phosphate uridylyltransferase
Class of enzymes In enzymology, an UTP—hexose-1-phosphate uridylyltransferase (EC 2.7.7.10) is an enzyme that catalyzes the chemical reaction UTP + alpha-D-galactose 1-phosphate formula_0 diphosphate + UDP-galactose Thus, the two substrates of this enzyme are UTP and alpha-D-galactose 1-phosphate, whereas its two products are diphosphate and UDP-galactose. Enzyme family. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is UTP:alpha-D-hexose-1-phosphate uridylyltransferase. Other names in common use include galactose-1-phosphate uridylyltransferase, galactose 1-phosphate uridylyltransferase, alpha-D-galactose 1-phosphate uridylyltransferase, galactose 1-phosphate uridyltransferase, UDPgalactose pyrophosphorylase, uridine diphosphate galactose pyrophosphorylase, and uridine diphosphogalactose pyrophosphorylase. This enzyme participates in galactose metabolism and nucleotide sugars metabolism. Structural studies. As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1GUP, 1GUQ, and 1HXP. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682166
14682192
UTP-monosaccharide-1-phosphate uridylyltransferase
Class of enzymes In enzymology, an UTP-monosaccharide-1-phosphate uridylyltransferase (EC 2.7.7.64) is an enzyme that catalyzes the chemical reaction UTP + a monosaccharide 1-phosphate formula_0 diphosphate + UDP-monosaccharide Thus, the two substrates of this enzyme are UTP and monosaccharide 1-phosphate, whereas its two products are diphosphate and UDP-monosaccharide. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is UTP:monosaccharide-1-phosphate uridylyltransferase. Other names in common use include UDP-sugar pyrophosphorylase and PsUSP. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682192
14682226
UTP—xylose-1-phosphate uridylyltransferase
Class of enzymes In enzymology, an UTP—xylose-1-phosphate uridylyltransferase (EC 2.7.7.11) is an enzyme that catalyzes the chemical reaction UTP + alpha-D-xylose 1-phosphate formula_0 diphosphate + UDP-xylose Thus, the two substrates of this enzyme are UTP and alpha-D-xylose 1-phosphate, whereas its two products are diphosphate and UDP-xylose. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is UTP:alpha-D-xylose-1-phosphate uridylyltransferase. Other names in common use include xylose-1-phosphate uridylyltransferase, uridylyltransferase, xylose 1-phosphate, UDP-xylose pyrophosphorylase, uridine diphosphoxylose pyrophosphorylase, and xylose 1-phosphate uridylyltransferase. This enzyme participates in nucleotide sugars metabolism. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682226
14682245
Viomycin kinase
In enzymology, viomycin kinase (EC 2.7.1.103) is an enzyme that catalyzes the chemical reaction ATP + viomycin formula_0 ADP + O-phosphoviomycin Thus, the two substrates of this enzyme are ATP and viomycin, whereas its two products are ADP and O-phosphoviomycin. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:viomycin O-phosphotransferase. Other names in common use include viomycin phosphotransferase, and capreomycin phosphotransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682245
14682279
Xylitol kinase
In enzymology, a xylitol kinase (EC 2.7.1.122) is an enzyme that catalyzes the chemical reaction ATP + xylitol formula_0 ADP + xylitol 5-phosphate Thus, the two substrates of this enzyme are ATP and xylitol, whereas its two products are ADP and xylitol 5-phosphate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:xylitol 5-phosphotransferase. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14682279
14682596
Finite topological space
In mathematics, a finite topological space is a topological space for which the underlying point set is finite. That is, it is a topological space which has only finitely many elements. Finite topological spaces are often used to provide examples of interesting phenomena or counterexamples to plausible-sounding conjectures. William Thurston has called the study of finite topologies in this sense "an oddball topic that can lend good insight to a variety of questions". Topologies on a finite set. Let formula_0 be a finite set. A topology on formula_1 is a subset formula_2 of formula_3 (the power set of formula_1) such that (1) formula_4 and formula_5, and (2) if formula_6 then formula_7 and formula_8. In other words, a subset formula_2 of formula_9 is a topology if formula_2 contains both formula_10 and formula_11 and is closed under arbitrary unions and intersections. Elements of formula_12 are called open sets. The general description of topological spaces requires that a topology be closed under arbitrary (finite or infinite) unions of open sets, but only under intersections of finitely many open sets. Here, that distinction is unnecessary. Since the power set of a finite set is finite, there can be only finitely many open sets (and only finitely many closed sets). A topology on a finite set can also be thought of as a sublattice of formula_13 which includes both the bottom element formula_14 and the top element formula_1. Examples. 0 or 1 points. There is a unique topology on the empty set ∅. The only open set is the empty one. Indeed, this is the only subset of ∅. Likewise, there is a unique topology on a singleton set {"a"}. Here the open sets are ∅ and {"a"}. This topology is both discrete and trivial, although in some ways it is better to think of it as a discrete space since it shares more properties with the family of finite discrete spaces. For any topological space "X" there is a unique continuous function from ∅ to "X", namely the empty function. There is also a unique continuous function from "X" to the singleton space {"a"}, namely the constant function to "a". In the language of category theory the empty space serves as an initial object in the category of topological spaces while the singleton space serves as a terminal object. 2 points. Let "X" = {"a","b"} be a set with 2 elements. There are four distinct topologies on "X": the trivial topology {∅, {"a","b"}}, the topology {∅, {"a"}, {"a","b"}} in which {"a"} is open, the topology {∅, {"b"}, {"a","b"}} in which {"b"} is open, and the discrete topology {∅, {"a"}, {"b"}, {"a","b"}}. The second and third topologies above are easily seen to be homeomorphic. The function from "X" to itself which swaps "a" and "b" is a homeomorphism. A topological space homeomorphic to one of these is called a Sierpiński space. So, in fact, there are only three inequivalent topologies on a two-point set: the trivial one, the discrete one, and the Sierpiński topology. The specialization preorder on the Sierpiński space {"a","b"} with {"b"} open is given by: "a" ≤ "a", "b" ≤ "b", and "a" ≤ "b". 3 points. Let "X" = {"a","b","c"} be a set with 3 elements. There are 29 distinct topologies on "X" but only 9 inequivalent topologies. Of the 9 inequivalent topologies, one is the trivial topology, in three of them the points "a" and "b" are topologically indistinguishable, and the remaining 5 are all T0. 4 points. Let "X" = {"a","b","c","d"} be a set with 4 elements. There are 355 distinct topologies on "X" but only 33 inequivalent topologies, of which 16 are T0. Properties. Specialization preorder. Topologies on a finite set "X" are in one-to-one correspondence with preorders on "X". Recall that a preorder on "X" is a binary relation on "X" which is reflexive and transitive. 
Given a (not necessarily finite) topological space "X" we can define a preorder on "X" by "x" ≤ "y" if and only if "x" ∈ cl{"y"} where cl{"y"} denotes the closure of the singleton set {"y"}. This preorder is called the "specialization preorder" on "X". Every open set "U" of "X" will be an upper set with respect to ≤ (i.e. if "x" ∈ "U" and "x" ≤ "y" then "y" ∈ "U"). Now if "X" is finite, the converse is also true: every upper set is open in "X". So for finite spaces, the topology on "X" is uniquely determined by ≤. Going in the other direction, suppose ("X", ≤) is a preordered set. Define a topology τ on "X" by taking the open sets to be the upper sets with respect to ≤. Then the relation ≤ will be the specialization preorder of ("X", τ). The topology defined in this way is called the Alexandrov topology determined by ≤. The equivalence between preorders and finite topologies can be interpreted as a version of Birkhoff's representation theorem, an equivalence between finite distributive lattices (the lattice of open sets of the topology) and partial orders (the partial order of equivalence classes of the preorder). This correspondence also works for a larger class of spaces called finitely generated spaces. Finitely generated spaces can be characterized as the spaces in which an arbitrary intersection of open sets is open. Finite topological spaces are a special class of finitely generated spaces. Compactness and countability. Every finite topological space is compact since any open cover must already be finite. Indeed, compact spaces are often thought of as a generalization of finite spaces since they share many of the same properties. Every finite topological space is also second-countable (there are only finitely many open sets) and separable (since the space itself is countable). Separation axioms. If a finite topological space is T1 (in particular, if it is Hausdorff) then it must, in fact, be discrete. This is because the complement of a point is a finite union of closed points and therefore closed. It follows that each point must be open. Therefore, any finite topological space which is not discrete cannot be T1, Hausdorff, or anything stronger. However, it is possible for a non-discrete finite space to be T0. In general, two points "x" and "y" are topologically indistinguishable if and only if "x" ≤ "y" and "y" ≤ "x", where ≤ is the specialization preorder on "X". It follows that a space "X" is T0 if and only if the specialization preorder ≤ on "X" is a partial order. There are numerous partial orders on a finite set. Each defines a unique T0 topology. Similarly, a space is R0 if and only if the specialization preorder is an equivalence relation. Given any equivalence relation on a finite set "X" the associated topology is the partition topology on "X". The equivalence classes will be the classes of topologically indistinguishable points. Since the partition topology is pseudometrizable, a finite space is R0 if and only if it is completely regular. Non-discrete finite spaces can also be normal. The excluded point topology on any finite set is a completely normal T0 space which is non-discrete. Connectivity. Connectivity in a finite space "X" is best understood by considering the specialization preorder ≤ on "X". We can associate to any preordered set "X" a directed graph Γ by taking the points of "X" as vertices and drawing an edge "x" → "y" whenever "x" ≤ "y". The connectivity of a finite space "X" can be understood by considering the connectivity of the associated graph Γ. 
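Before continuing with connectivity, the correspondence just described can be illustrated with a small sketch in Python (the three-point example topology and the helper names are made up for illustration): it computes the specialization preorder of a finite topology and then rebuilds the topology as the family of upper sets of that preorder, recovering the original.

```python
# Round-trip check: finite topology -> specialization preorder -> Alexandrov topology.
from itertools import chain, combinations

X = frozenset({"a", "b", "c"})
# Example topology: {}, {c}, {a, b}, {a, b, c}  (here a and b are topologically indistinguishable).
tau = {frozenset(), frozenset({"c"}), frozenset({"a", "b"}), X}

def specialization_preorder(points, topology):
    """Pairs (x, y) with x <= y, i.e. every open set containing x also contains y."""
    return {(x, y) for x in points for y in points
            if all(y in U for U in topology if x in U)}

def upper_sets(points, preorder):
    """All subsets U that are upward closed: x in U and x <= y imply y in U."""
    subsets = chain.from_iterable(combinations(sorted(points), r)
                                  for r in range(len(points) + 1))
    return {frozenset(S) for S in subsets
            if all(y in S for (x, y) in preorder if x in S)}

leq = specialization_preorder(X, tau)
assert upper_sets(X, leq) == tau    # the topology is recovered from its preorder
print(sorted(leq))
```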
In any topological space, if "x" ≤ "y" then there is a path from "x" to "y". One can simply take "f"(0) = "x" and "f"("t") = "y" for "t" > 0. It is easy to verify that "f" is continuous. It follows that the path components of a finite topological space are precisely the (weakly) connected components of the associated graph Γ. That is, there is a topological path from "x" to "y" if and only if there is an undirected path between the corresponding vertices of Γ. Every finite space is locally path-connected since the set formula_15 is a path-connected open neighborhood of "x" that is contained in every other neighborhood. In other words, this single set forms a local base at "x". Therefore, a finite space is connected if and only if it is path-connected. The connected components are precisely the path components. Each such component is both closed and open in "X". Finite spaces may have stronger connectivity properties. A finite space "X" is hyperconnected if and only if it has a greatest element with respect to the specialization preorder (an element whose closure is the whole space), and ultraconnected if and only if it has a least element (an element whose only open neighborhood is the whole space). For example, the particular point topology on a finite space is hyperconnected while the excluded point topology is ultraconnected. The Sierpiński space is both. Additional structure. A finite topological space is pseudometrizable if and only if it is R0. In this case, one possible pseudometric is given by formula_16 where "x" ≡ "y" means "x" and "y" are topologically indistinguishable. A finite topological space is metrizable if and only if it is discrete. Likewise, a topological space is uniformizable if and only if it is R0. The uniform structure will be the pseudometric uniformity induced by the above pseudometric. Algebraic topology. Perhaps surprisingly, there are finite topological spaces with nontrivial fundamental groups. A simple example is the pseudocircle, which is the space "X" with four points, two of which are open and two of which are closed. There is a continuous map from the unit circle "S"1 to "X" which is a weak homotopy equivalence (i.e. it induces an isomorphism of homotopy groups). It follows that the fundamental group of the pseudocircle is infinite cyclic. More generally it has been shown that for any finite abstract simplicial complex "K", there is a finite topological space "X""K" and a weak homotopy equivalence "f" : |"K"| → "X""K" where |"K"| is the geometric realization of "K". It follows that the homotopy groups of |"K"| and "X""K" are isomorphic. In fact, the underlying set of "X""K" can be taken to be "K" itself, with the topology associated to the inclusion partial order. Number of topologies on a finite set. As discussed above, topologies on a finite set are in one-to-one correspondence with preorders on the set, and T0 topologies are in one-to-one correspondence with partial orders. Therefore, the number of topologies on a finite set is equal to the number of preorders and the number of T0 topologies is equal to the number of partial orders. The table below lists the number of distinct (T0) topologies on a set with "n" elements. It also lists the number of inequivalent (i.e. nonhomeomorphic) topologies. Let "T"("n") denote the number of distinct topologies on a set with "n" points. There is no known simple formula to compute "T"("n") for arbitrary "n". The On-Line Encyclopedia of Integer Sequences presently lists "T"("n") for "n" ≤ 18. The number of distinct T0 topologies on a set with "n" points, denoted "T"0("n"), is related to "T"("n") by the formula formula_17 where "S"("n","k") denotes the Stirling number of the second kind. References. 
<templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
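The counts quoted above can be checked by brute force for very small "n". The following sketch in Python (helper names are illustrative; the enumeration is exponential in 2^"n", so it is only feasible for "n" ≤ 4) lists every family of subsets of an "n"-element set, keeps the topologies, counts those that are T0, and verifies the relation "T"("n") = Σ "S"("n","k") "T"0("k").

```python
# Brute-force enumeration of topologies on small finite sets.
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_topology(X, tau):
    if frozenset() not in tau or frozenset(X) not in tau:
        return False
    # Pairwise closure under union and intersection suffices for finite families.
    return all(U | V in tau and U & V in tau for U in tau for V in tau)

def is_t0(X, tau):
    # T0: no two distinct points lie in exactly the same open sets.
    return all(any((x in U) != (y in U) for U in tau)
               for x in X for y in X if x != y)

def stirling2(n, k):
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

T, T0 = {}, {}
for n in range(5):
    X = frozenset(range(n))
    topologies = [tau for tau in subsets(subsets(X)) if is_topology(X, tau)]
    T[n] = len(topologies)
    T0[n] = sum(1 for tau in topologies if is_t0(X, tau))

print(T)    # expected {0: 1, 1: 1, 2: 4, 3: 29, 4: 355}
print(T0)   # expected {0: 1, 1: 1, 2: 3, 3: 19, 4: 219}
for n in range(5):
    assert T[n] == sum(stirling2(n, k) * T0[k] for k in range(n + 1))
```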
[ { "math_id": 0, "text": " X " }, { "math_id": 1, "text": " X " }, { "math_id": 2, "text": " \\tau " }, { "math_id": 3, "text": " P(X) " }, { "math_id": 4, "text": " \\varnothing \\in \\tau " }, { "math_id": 5, "text": " X\\in \\tau " }, { "math_id": 6, "text": " U, V \\in \\tau " }, { "math_id": 7, "text": " U \\cup V \\in \\tau " }, { "math_id": 8, "text": " U \\cap V \\in \\tau " }, { "math_id": 9, "text": " P(X) " }, { "math_id": 10, "text": " \\varnothing " }, { "math_id": 11, "text": " X " }, { "math_id": 12, "text": " \\tau " }, { "math_id": 13, "text": " (P(X), \\subset) " }, { "math_id": 14, "text": " \\varnothing " }, { "math_id": 15, "text": "\\mathop{\\uarr}x = \\{y \\in X : x \\leq y\\}" }, { "math_id": 16, "text": "d(x,y) = \\begin{cases}0 & x\\equiv y \\\\ 1 & x\\not\\equiv y\\end{cases}" }, { "math_id": 17, "text": "T(n) = \\sum_{k=0}^{n}S(n,k)\\,T_0(k)" } ]
https://en.wikipedia.org/wiki?curid=14682596
1468342
Single-stock futures
Futures contracts for trading company stocks In finance, a single-stock future (SSF) is a type of futures contract between two parties to exchange a specified number of stocks in a company for a price agreed today (the futures price or the strike price) with delivery occurring at a specified future date, the delivery date. The contracts can be later traded on a futures exchange. The party agreeing to take delivery of the underlying stock in the future, the "buyer" of the contract, is said to be "long", and the party agreeing to deliver the stock in the future, the "seller" of the contract, is said to be "short." The terminology reflects the expectations of the parties - the buyer hopes or expects that the stock price is going to increase, while the seller hopes or expects that it will decrease. Because entering the contract itself costs nothing, the buy/sell terminology is a linguistic convenience reflecting the position each party is taking - long or short. SSFs are usually traded in increments/lots/batches of 100. When purchased, no transmission of share rights or dividends occurs. Being futures contracts, they are traded on margin, thus offering leverage, and they are not subject to the short selling limitations that stocks are subjected to. They are traded in various financial markets, including those of the United States, United Kingdom, Spain, India and others. South Africa currently hosts the largest single-stock futures market in the world, trading on average 700,000 contracts daily. SSFs in the U.S.. In the United States, they were disallowed from any exchange listing in the 1980s because the Commodity Futures Trading Commission and the U.S. Securities and Exchange Commission were unable to decide which would have the regulatory authority over these products. After the Commodity Futures Modernization Act of 2000 became law, the two agencies eventually agreed on a jurisdiction-sharing plan and SSFs began trading on November 8, 2002. Two new exchanges initially offered "security futures" products, including single-stock futures, although one of these exchanges has since closed. The remaining market is known as OneChicago, a joint venture of three previously-existing Chicago-based exchanges, the Chicago Board Options Exchange, Chicago Mercantile Exchange and the Chicago Board of Trade. In 2006, the brokerage firm Interactive Brokers made an equity investment in OneChicago and is now a part-owner of the exchange. As of September 2020, OneChicago has been closed. Pricing. Single stock futures values are priced by the market in accordance with the standard theoretical pricing model for forward and futures contracts, which is: formula_0 where F is the current (time t) cost of establishing a futures contract, S is the current price (spot price) of the underlying stock, r is the annualized risk-free interest rate, t is the present time, T is the time when the contract expires and PV(Div) is the present value of any dividends generated by the underlying stock between t and T. When the risk-free rate is expressed as a continuous return, the contract price is: formula_1 where r is the risk-free rate expressed as a continuous return, and e is the base of the natural log. Note the value of r will be slightly different in the two equations. The relationship between continuous returns and annualized returns is r_c = ln(1 + r). 
The value of a futures contract is zero at the moment it is established, but changes thereafter until time T, at which point its value equals S_T - F_t, i.e., the current cost of the stock minus the originally established cost of the futures contract.
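As a worked example of the two pricing formulas above, the following Python sketch (the stock price, dividend, rate, and dates are made-up illustrative numbers) computes the futures price under annual compounding and again with the rate converted to a continuous return, confirming that the two expressions agree.

```python
# Worked example of F = [S - PV(Div)] * (1 + r)^(T - t) and its continuous-return form.
import math

S = 100.0          # spot price of the stock
r_annual = 0.05    # annualized risk-free rate
tau = 0.5          # time to expiry T - t, in years
div = 1.50         # known dividend amount
t_div = 0.25       # time until the dividend is paid, in years

# Present value of the dividend, discounted at the risk-free rate.
pv_div = div / (1 + r_annual) ** t_div

# Futures price with annual compounding.
F_annual = (S - pv_div) * (1 + r_annual) ** tau

# Same price with the rate expressed as a continuous return r_c = ln(1 + r).
r_cont = math.log(1 + r_annual)
F_cont = (S - pv_div) * math.exp(r_cont * tau)

print(round(F_annual, 4), round(F_cont, 4))   # the two values agree
```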
[ { "math_id": 0, "text": "F = [S - PV(Div)] \\cdot (1 + r)^{(T-t)} \\ " }, { "math_id": 1, "text": "F = [S - PV(Div)] \\cdot e^{r \\cdot (T-t)} \\ " } ]
https://en.wikipedia.org/wiki?curid=1468342
14687550
Glycoprotein-fucosylgalactoside a-N-acetylgalactosaminyltransferase
Class of enzymes In enzymology, a glycoprotein-fucosylgalactoside alpha-N-acetylgalactosaminyltransferase (EC 2.4.1.40) is an enzyme that catalyzes the chemical reaction UDP-N-acetyl-D-galactosamine + glycoprotein-alpha-L-fucosyl-(1,2)-D-galactose formula_0 UDP + glycoprotein-N-acetyl-alpha-D-galactosaminyl-(1,3)-[alpha-L-fucosyl-(1,2)]-D-galactose Thus, the two substrates of this enzyme are UDP-N-acetyl-D-galactosamine and glycoprotein-alpha-L-fucosyl-(1,2)-D-galactose, whereas its two products are UDP and glycoprotein-N-acetyl-alpha-D-galactosaminyl-(1,3)-[alpha-L-fucosyl-(1,2)]-D-galactose. This enzyme belongs to the family of transferases, specifically the hexosyltransferases (glycosyltransferases). The systematic name of this enzyme class is UDP-N-acetyl-D-galactosamine:glycoprotein-alpha-L-fucosyl-(1,2)-D-galactose 3-N-acetyl-D-galactosaminyltransferase. Other names in common use include A-transferase, histo-blood group A glycosyltransferase, (Fucalpha1→2Galalpha1→3-N-acetylgalactosaminyltransferase), UDP-GalNAc:Fucalpha1→2Galalpha1→3-N-acetylgalactosaminyltransferase, alpha-3-N-acetylgalactosaminyltransferase, blood-group substance alpha-acetyltransferase, blood-group substance A-dependent acetylgalactosaminyltransferase, fucosylgalactose acetylgalactosaminyltransferase, histo-blood group A acetylgalactosaminyltransferase, histo-blood group A transferase, and UDP-N-acetyl-D-galactosamine:alpha-L-fucosyl-1,2-D-galactose 3-N-acetyl-D-galactosaminyltransferase. This enzyme participates in 3 metabolic pathways: the lactoseries and neolactoseries of glycosphingolipid biosynthesis, as well as the biosynthesis of glycan structures. Structural studies. As of late 2007, 19 structures have been solved for this class of enzymes, with PDB accession codes 1LZ0, 1LZI, 1R7T, 1R7V, 1R7Y, 1R81, 1WSZ, 1WT0, 1WT1, 1WT2, 1WT3, 1XZ6, 1ZHJ, 1ZI1, 1ZI3, 1ZI4, 1ZI5, 1ZJO, and 2A8W. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14687550
14687650
Euclid's orchard
Array of line segments normal to points of a square lattice In mathematics, informally speaking, Euclid's orchard is an array of one-dimensional "trees" of unit height planted at the lattice points in one quadrant of a square lattice. More formally, Euclid's orchard is the set of line segments from ("x", "y", 0) to ("x", "y", 1), where x and y are positive integers. The trees visible from the origin are those at lattice points ("x", "y", 0), where x and y are coprime, i.e., where the fraction "x"/"y" is in reduced form. The name "Euclid's orchard" is derived from the Euclidean algorithm. If the orchard is projected relative to the origin onto the plane "x" + "y" = 1 (or, equivalently, drawn in perspective from a viewpoint at the origin) the tops of the trees form a graph of Thomae's function. The point ("x", "y", 1) projects to formula_0 The solution to the Basel problem can be used to show that the proportion of points in the "n" × "n" grid that have trees on them is approximately formula_1 and that the error of this approximation goes to zero in the limit as "n" goes to infinity. References. <templatestyles src="Reflist/styles.css" />
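The stated proportion can be checked numerically with a short Python sketch (the grid sizes below are arbitrary): it counts the coprime pairs in an "n" × "n" grid and compares the resulting fraction of visible trees with 6/π^2.

```python
# Fraction of lattice points (x, y), 1 <= x, y <= n, visible from the origin (gcd(x, y) = 1),
# compared with the limiting value 6 / pi^2 from the Basel problem.
from math import gcd, pi

for n in (10, 100, 1000):
    visible = sum(1 for x in range(1, n + 1)
                    for y in range(1, n + 1) if gcd(x, y) == 1)
    print(n, visible / n**2, 6 / pi**2)
```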
[ { "math_id": 0, "text": "\\left ( \\frac {x}{x+y}, \\frac {y}{x+y}, \\frac {1}{x+y} \\right )." }, { "math_id": 1, "text": "\\tfrac{6}{\\pi^2}" } ]
https://en.wikipedia.org/wiki?curid=14687650
14688049
Character variety
In the mathematics of moduli theory, given an algebraic, reductive, Lie group formula_0 and a finitely generated group formula_1, the formula_0-"character variety of" formula_1 is a space of equivalence classes of group homomorphisms from formula_1 to formula_0: formula_2 More precisely, formula_0 acts on formula_3 by conjugation, and two homomorphisms are defined to be equivalent (denoted formula_4) if and only if their orbit closures intersect. This is the weakest equivalence relation on the set of conjugation orbits, formula_5, that yields a Hausdorff space. Formulation. Formally, and when the reductive group is defined over the complex numbers formula_6, the formula_0-character variety is the spectrum of prime ideals of the ring of invariants (i.e., the affine GIT quotient). formula_7 Here more generally one can consider algebraically closed fields of prime characteristic. In this generality, character varieties are only algebraic sets and are not actual varieties. To avoid technical issues, one often considers the associated reduced space by dividing by the radical of 0 (eliminating nilpotents). However, this does not necessarily yield an irreducible space either. Moreover, if we replace the complex group by a real group we may not even get an algebraic set. In particular, a maximal compact subgroup generally gives a semi-algebraic set. On the other hand, whenever formula_1 is free we always get an honest variety; it is singular however. Examples. An interesting class of examples arise from Riemann surfaces: if formula_8 is a Riemann surface then the formula_0-"character variety of" formula_9, or "Betti moduli space", is the character variety of the surface group formula_10 formula_11. For example, if formula_12 and formula_9 is the Riemann sphere punctured three times, so formula_10 is free of rank two, then Henri G. Vogt, Robert Fricke, and Felix Klein proved that the character variety is formula_13; its coordinate ring is isomorphic to the complex polynomial ring in 3 variables, formula_14. Restricting to formula_15 gives a closed real three-dimensional ball (semi-algebraic, but not algebraic). Another example, also studied by Vogt and Fricke–Klein is the case with formula_12 and formula_9 is the Riemann sphere punctured four times, so formula_10 is free of rank three. Then the character variety is isomorphic to the hypersurface in formula_16 given by the equation formula_17 This character variety appears in the theory of the sixth Painleve equation, and has a natural Poisson structure such that formula_18 are Casimir functions, so the symplectic leaves are affine cubic surfaces of the form formula_19 Variants. This construction of the character variety is not necessarily the same as that of Marc Culler and Peter Shalen (generated by evaluations of traces), although when formula_20 they do agree, since Claudio Procesi has shown that in this case the ring of invariants is in fact generated by only traces. Since trace functions are invariant by all inner automorphisms, the Culler–Shalen construction essentially assumes that we are acting by formula_20 on formula_21 even if formula_22. For instance, when formula_1 is a free group of rank 2 and formula_23, the conjugation action is trivial and the formula_0-character variety is the torus formula_24 But the trace algebra is a strictly small subalgebra (there are fewer invariants). This provides an involutive action on the torus that needs to be accounted for to yield the Culler–Shalen character variety. 
The involution on this torus yields a 2-sphere. The point is that up to formula_25-conjugation all points are distinct, but the trace identifies elements with differing anti-diagonal elements (the involution). Connection to geometry. There is an interplay between these moduli spaces and the moduli spaces of principal bundles, vector bundles, Higgs bundles, and geometric structures on topological spaces, given generally by the observation that, at least locally, equivalent objects in these categories are parameterized by conjugacy classes of holonomy homomorphisms of flat connections. In other words, with respect to a base space formula_26 for the bundles or a fixed topological space for the geometric structures, the holonomy homomorphism is a group homomorphism from formula_27 to the structure group formula_0 of the bundle. Connection to skein modules. The coordinate ring of the character variety has been related to skein modules in knot theory. The skein module is roughly a deformation (or quantization) of the character variety. It is closely related to topological quantum field theory in dimension 2+1.
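For the rank-two free group example above (the thrice-punctured sphere), the identification of the SL(2, C)-character variety with affine 3-space can be made explicit through the classical trace coordinates. The following display is a sketch added for orientation; the result is standard, but the letters x, y, z for the traces are a notational choice made here rather than taken from the original text. Writing a, b for free generators of the fundamental group, the map
[\rho] \longmapsto (x, y, z) = \bigl(\operatorname{tr}\rho(a),\; \operatorname{tr}\rho(b),\; \operatorname{tr}\rho(ab)\bigr) \in \mathbb{C}^3
identifies equivalence classes of representations with points of C^3: every conjugation-invariant polynomial function of ρ is a polynomial in these three traces, which is the sense in which the coordinate ring is the polynomial ring in three variables.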
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\pi" }, { "math_id": 2, "text": "\\mathfrak{R}(\\pi,G)=\\operatorname{Hom}(\\pi,G)/\\!\\sim \\, ." }, { "math_id": 3, "text": "\\operatorname{Hom}(\\pi,G) " }, { "math_id": 4, "text": "\\sim" }, { "math_id": 5, "text": "\\operatorname{Hom}(\\pi,G)/G" }, { "math_id": 6, "text": "\\Complex" }, { "math_id": 7, "text": " \\Complex[\\operatorname{Hom}(\\pi,G)]^G ." }, { "math_id": 8, "text": " X " }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "\\pi=\\pi_1(X)" }, { "math_id": 11, "text": "\\mathcal{M}_B(X,G) = \\mathfrak{R}(\\pi_1(X),G) " }, { "math_id": 12, "text": "G=\\mathrm{SL}(2,\\Complex)" }, { "math_id": 13, "text": "\\Complex^3" }, { "math_id": 14, "text": "\\Complex[x,y,z]" }, { "math_id": 15, "text": "G=\\mathrm{SU}(2)" }, { "math_id": 16, "text": "\\Complex^7" }, { "math_id": 17, "text": "a^2+b^2+c^2+d^2 + x^2+y^2+z^2 -(ab+cd)x-(ad+bc)y-(ac+bd)z + abcd + xyz - 4 = 0." }, { "math_id": 18, "text": "a,b,c,d" }, { "math_id": 19, "text": " xyz+x^2+y^2+z^2 +c_1x+ c_2 y + c_3z = c_4" }, { "math_id": 20, "text": "G=\\mathrm{SL}(n,\\Complex)" }, { "math_id": 21, "text": "\\mathfrak{R}=\\operatorname{Hom}(\\pi,H)" }, { "math_id": 22, "text": "G \\neq H" }, { "math_id": 23, "text": "G=\\mathrm{SO}(2)" }, { "math_id": 24, "text": "S^1\\times S^1." }, { "math_id": 25, "text": "\\mathrm{SO}(2)" }, { "math_id": 26, "text": "M" }, { "math_id": 27, "text": "\\pi_1(M)" } ]
https://en.wikipedia.org/wiki?curid=14688049
1468817
Variational inequality
In mathematics, a variational inequality is an inequality involving a functional, which has to be solved for all possible values of a given variable, belonging usually to a convex set. The mathematical theory of variational inequalities was initially developed to deal with equilibrium problems, precisely the Signorini problem: in that model problem, the functional involved was obtained as the first variation of the involved potential energy. Therefore, it has a variational origin, recalled by the name of the general abstract problem. The applicability of the theory has since been expanded to include problems from economics, finance, optimization and game theory. History. The first problem involving a variational inequality was the Signorini problem, posed by Antonio Signorini in 1959 and solved by Gaetano Fichera in 1963, according to the references and : the first papers of the theory were and , . Later on, Guido Stampacchia proved his generalization to the Lax–Milgram theorem in order to study the regularity problem for partial differential equations and coined the name "variational inequality" for all the problems involving inequalities of this kind. Georges Duvaut encouraged his graduate students to study and expand on Fichera's work, after attending a conference in Brixen in 1965 where Fichera presented his study of the Signorini problem, as reports: thus the theory became widely known throughout France. Also in 1965, Stampacchia and Jacques-Louis Lions extended earlier results of , announcing them in the paper : full proofs of their results appeared later in the paper . Definition. Following , the definition of a variational inequality is the following one. Definition 1. Given a Banach space formula_0, a subset formula_1 of formula_0, and a functional formula_2 from formula_1 to the dual space formula_3 of the space formula_0, the variational inequality problem is the problem of solving for the variable formula_4 belonging to formula_1 the following inequality: formula_5 where formula_6 is the duality pairing. In general, the variational inequality problem can be formulated on any finite- or infinite-dimensional Banach space. The three obvious steps in the study of the problem are the following ones: Examples. The problem of finding the minimal value of a real-valued function of a real variable. This is a standard example problem, reported by : consider the problem of finding the minimal value of a differentiable function formula_7 over a closed interval formula_8. Let formula_9 be a point in formula_10 where the minimum occurs. Three cases can occur: if formula_11 then formula_12 if formula_13 then formula_14 and if formula_15 then formula_16 These necessary conditions can be summarized as the problem of finding formula_17 such that formula_18 for formula_19 The absolute minimum must be searched for among the solutions (if more than one) of the preceding inequality: note that the solution is a real number, therefore this is a finite-dimensional variational inequality. The general finite-dimensional variational inequality. A formulation of the general problem in formula_20 is the following: given a subset formula_21 of formula_22 and a mapping formula_23, the finite-dimensional variational inequality problem associated with formula_21 consists of finding an formula_24-dimensional vector formula_4 belonging to formula_21 such that formula_25 where formula_26 is the standard inner product on the vector space formula_22. The variational inequality for the Signorini problem. 
In the historical survey , Gaetano Fichera describes the genesis of his solution to the Signorini problem: the problem consists of finding the elastic equilibrium configuration formula_27 of an anisotropic non-homogeneous elastic body that lies in a subset formula_28 of the three-dimensional Euclidean space whose boundary is formula_29, resting on a rigid frictionless surface and subject only to its mass forces. The solution formula_30 of the problem exists and is unique (under precise assumptions) in the set of admissible displacements formula_31 i.e. the set of displacement vectors satisfying the system of ambiguous boundary conditions, if and only if formula_32 where formula_33 and formula_34 are the following functionals, written using the Einstein notation formula_35,    formula_36,    formula_37 where, for all formula_38, formula_39 is the contact surface, formula_40 is the body force applied to the body, formula_41 is the surface force applied on formula_42, formula_43 is the infinitesimal strain tensor, and formula_44 is the Cauchy stress tensor, defined by formula_45 where formula_46 is the elastic potential energy and formula_47 is the elasticity tensor.
[ { "math_id": 0, "text": "\\boldsymbol{E}" }, { "math_id": 1, "text": "\\boldsymbol{K}" }, { "math_id": 2, "text": "F\\colon \\boldsymbol{K}\\to \\boldsymbol{E}^{\\ast}" }, { "math_id": 3, "text": "\\boldsymbol{E}^{\\ast}" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "\\langle F(x), y-x \\rangle \\geq 0\\qquad\\forall y \\in \\boldsymbol{K}" }, { "math_id": 6, "text": "\\langle\\cdot,\\cdot\\rangle\\colon \n\\boldsymbol{E}^{\\ast}\\times\\boldsymbol{E}\\to\n\\mathbb{R}" }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "I = [a,b]" }, { "math_id": 9, "text": "x^{\\ast}" }, { "math_id": 10, "text": "I" }, { "math_id": 11, "text": "a<x^{\\ast}< b," }, { "math_id": 12, "text": "f^{\\prime}(x^{\\ast}) = 0;" }, { "math_id": 13, "text": "x^{\\ast}=a," }, { "math_id": 14, "text": "f^{\\prime}(x^{\\ast}) \\ge 0;" }, { "math_id": 15, "text": "x^{\\ast}=b," }, { "math_id": 16, "text": "f^{\\prime}(x^{\\ast}) \\le 0." }, { "math_id": 17, "text": "x^{\\ast}\\in I" }, { "math_id": 18, "text": "f^{\\prime}(x^{\\ast})(y-x^{\\ast}) \\geq 0\\quad" }, { "math_id": 19, "text": "\\quad\\forall y \\in I." }, { "math_id": 20, "text": "\\mathbb{R}^n" }, { "math_id": 21, "text": "K" }, { "math_id": 22, "text": "\\mathbb{R}^{n}" }, { "math_id": 23, "text": "F\\colon K\\to\\mathbb{R}^{n}" }, { "math_id": 24, "text": "n" }, { "math_id": 25, "text": "\\langle F(x), y-x \\rangle \\geq 0\\qquad\\forall y \\in K" }, { "math_id": 26, "text": "\\langle\\cdot,\\cdot\\rangle\\colon\\mathbb{R}^{n}\\times\\mathbb{R}^{n}\\to\\mathbb{R}" }, { "math_id": 27, "text": "\\boldsymbol{u}(\\boldsymbol{x})\n=\\left(u_1(\\boldsymbol{x}),u_2(\\boldsymbol{x}),u_3(\\boldsymbol{x})\\right)" }, { "math_id": 28, "text": "A" }, { "math_id": 29, "text": "\\partial A" }, { "math_id": 30, "text": "u" }, { "math_id": 31, "text": "\\mathcal{U}_\\Sigma" }, { "math_id": 32, "text": "B(\\boldsymbol{u},\\boldsymbol{v} - \\boldsymbol{u}) - F(\\boldsymbol{v} - \\boldsymbol{u}) \\geq 0 \\qquad \\forall \\boldsymbol{v} \\in \\mathcal{U}_\\Sigma " }, { "math_id": 33, "text": "B(\\boldsymbol{u},\\boldsymbol{v}) " }, { "math_id": 34, "text": "F(\\boldsymbol{v}) " }, { "math_id": 35, "text": "B(\\boldsymbol{u},\\boldsymbol{v}) = -\\int_A \\sigma_{ik}(\\boldsymbol{u})\\varepsilon_{ik}(\\boldsymbol{v})\\,\\mathrm{d}x" }, { "math_id": 36, "text": "F(\\boldsymbol{v}) = \\int_A v_i f_i\\,\\mathrm{d}x + \\int_{\\partial A\\setminus\\Sigma}\\!\\!\\!\\!\\! 
v_i g_i \\,\\mathrm{d}\\sigma" }, { "math_id": 37, "text": "\\boldsymbol{u},\\boldsymbol{v} \\in \\mathcal{U}_\\Sigma " }, { "math_id": 38, "text": "\\boldsymbol{x}\\in A" }, { "math_id": 39, "text": "\\Sigma" }, { "math_id": 40, "text": "\\boldsymbol{f}(\\boldsymbol{x}) = \\left( f_1(\\boldsymbol{x}), f_2(\\boldsymbol{x}), f_3(\\boldsymbol{x}) \\right)" }, { "math_id": 41, "text": "\\boldsymbol{g}(\\boldsymbol{x})=\\left(g_1(\\boldsymbol{x}),g_2(\\boldsymbol{x}),g_3(\\boldsymbol{x})\\right)" }, { "math_id": 42, "text": "\\partial A\\!\\setminus\\!\\Sigma" }, { "math_id": 43, "text": "\\boldsymbol{\\varepsilon}=\\boldsymbol{\\varepsilon}(\\boldsymbol{u})=\\left(\\varepsilon_{ik}(\\boldsymbol{u})\\right)=\\left(\\frac{1}{2} \\left( \\frac{\\partial u_i}{\\partial x_k} + \\frac{\\partial u_k}{\\partial x_i} \\right)\\right)" }, { "math_id": 44, "text": "\\boldsymbol{\\sigma}=\\left(\\sigma_{ik}\\right)" }, { "math_id": 45, "text": "\\sigma_{ik}= - \\frac{\\partial W}{\\partial \\varepsilon_{ik}} \\qquad\\forall i,k=1,2,3" }, { "math_id": 46, "text": "W(\\boldsymbol{\\varepsilon})=a_{ikjh}(\\boldsymbol{x})\\varepsilon_{ik}\\varepsilon_{jh}" }, { "math_id": 47, "text": "\\boldsymbol{a}(\\boldsymbol{x})=\\left(a_{ikjh}(\\boldsymbol{x})\\right)" } ]
https://en.wikipedia.org/wiki?curid=1468817
14689525
Loop-level parallelism
Loop-level parallelism is a form of parallelism in software programming that is concerned with extracting parallel tasks from loops. The opportunity for loop-level parallelism often arises in computing programs where data is stored in random access data structures. Where a sequential program will iterate over the data structure and operate on indices one at a time, a program exploiting loop-level parallelism will use multiple threads or processes which operate on some or all of the indices at the same time. Such parallelism provides a speedup to overall execution time of the program, typically in line with Amdahl's law. Description. For simple loops, where each iteration is independent of the others, loop-level parallelism can be embarrassingly parallel, as parallelizing only requires assigning a process to handle each iteration. However, many algorithms are designed to run sequentially, and fail when parallel processes race due to dependence within the code. Sequential algorithms are sometimes applicable to parallel contexts with slight modification. Usually, though, they require process synchronization. Synchronization can be either implicit, via message passing, or explicit, via synchronization primitives like semaphores. Example. Consider the following code operating on a list codice_0 of length codice_1. Each iteration of the loop takes the value from the current index of codice_0, and increments it by 10. If statement codice_3 takes codice_4 time to execute, then the loop takes time codice_5 to execute sequentially, ignoring time taken by loop constructs. Now, consider a system with codice_6 processors where codice_7. If codice_1 threads run in parallel, the time to execute all codice_1 steps is reduced to codice_4. Less simple cases produce inconsistent, i.e. non-serializable outcomes. Consider the following loop operating on the same list codice_0. Each iteration sets the current index to be the value of the previous plus ten. When run sequentially, each iteration is guaranteed that the previous iteration will already have the correct value. With multiple threads, process scheduling and other considerations prevent the execution order from guaranteeing an iteration will execute only after its dependence is met. It very well may happen before, leading to unexpected results. Serializability can be restored by adding synchronization to preserve the dependence on previous iterations. Dependencies in code. There are several types of dependences that can be found within code. In order to preserve the sequential behaviour of a loop when run in parallel, True Dependence must be preserved. Anti-Dependence and Output Dependence can be dealt with by giving each process its own copy of variables (known as privatization). Example of true dependence. codice_12, meaning that S2 has a true dependence on S3 because S2 writes to the variable codice_13, which S3 reads from. Example of anti-dependence. codice_14, meaning that S2 has an anti-dependence on S3 because S2 reads from the variable codice_15 before S3 writes to it. Example of output-dependence. codice_16, meaning that S2 has an output dependence on S3 because both write to the variable codice_13. Example of input-dependence. codice_18, meaning that S2 has an input dependence on S3 because S2 and S3 both read from variable codice_19. Dependence in loops. Loop-carried vs loop-independent dependence. 
Loops can have two types of dependence: loop-carried dependence and loop-independent dependence. In loop-independent dependence, loops have intra-iteration dependence (dependence within a single iteration), but do not have dependence between iterations. Each iteration may be treated as a block and performed in parallel without other synchronization efforts. In the following example code used for swapping the values of two arrays of length n, there is a loop-independent dependence of codice_20. In loop-carried dependence, statements in an iteration of a loop depend on statements in another iteration of the loop. Loop-carried dependence uses a modified version of the dependence notation seen earlier. Example of loop-carried dependence where codice_21, where codice_22 indicates the current iteration, and codice_23 indicates the next iteration. Loop carried dependence graph. A loop-carried dependence graph graphically shows the loop-carried dependencies between iterations. Each iteration is listed as a node on the graph, and directed edges show the true, anti, and output dependencies between each iteration. Types. There are a variety of methodologies for parallelizing loops. Each implementation varies slightly in how threads synchronize, if at all. In addition, parallel tasks must somehow be mapped to a process. These tasks can either be allocated statically or dynamically. Research has shown that load-balancing can be better achieved through some dynamic allocation algorithms than when done statically. The process of parallelizing a sequential program can be broken down into the following discrete steps. Each concrete loop-parallelization below implicitly performs them. DISTRIBUTED loop. When a loop has a loop-carried dependence, one way to parallelize it is to distribute the loop into several different loops. Statements that are not dependent on each other are separated so that these distributed loops can be executed in parallel. For example, consider the following code. The loop has a loop-carried dependence codice_24, but S2 and S1 do not have a loop-independent dependence, so we can rewrite the code as follows. Note that now loop1 and loop2 can be executed in parallel. Instead of a single instruction being performed in parallel on different data, as in data-level parallelism, here different loops perform different tasks on different data. Let's say the times of execution of S1 and S2 are formula_0 and formula_1; then the execution time for the sequential form of the above code is formula_2. Because we split the two statements and put them in two different loops, this gives us an execution time of formula_3. We call this type of parallelism either function or task parallelism. DOALL parallelism. DOALL parallelism exists when statements within a loop can be executed independently (situations where there is no loop-carried dependence). For example, the following code does not read from the array codice_13, and does not update the arrays codice_26. No iteration has a dependence on any other iteration. Let's say the time of one execution of S1 is formula_0; then the execution time for the sequential form of the above code is formula_4. Because DOALL parallelism exists when all iterations are independent, speed-up may be achieved by executing all iterations in parallel, which gives us an execution time of formula_0, the time taken for one iteration in sequential execution. The following example, using a simplified pseudo code, shows how a loop might be parallelized to execute each iteration independently (a C sketch in this spirit is given at the end of this article). DOACROSS parallelism. 
DOACROSS parallelism exists where iterations of a loop are parallelized by extracting calculations that can be performed independently and running them simultaneously. Synchronization exists to enforce loop-carried dependence. Consider the following synchronous loop with dependence codice_24. Each loop iteration performs two actions: calculating the value codice_28, and then performing the assignment. This can be decomposed into two lines (statements S1 and S2): The first line, codice_31, has no loop-carried dependence. The loop can then be parallelized by computing the temp value in parallel, and then synchronizing the assignment to codice_29. Let's say the times of execution of S1 and S2 are formula_0 and formula_1; then the execution time for the sequential form of the above code is formula_2. Because DOACROSS parallelism exists, speed-up may be achieved by executing iterations in a pipelined fashion, which gives us an execution time of formula_5. DOPIPE parallelism. DOPIPE parallelism implements pipelined parallelism for loop-carried dependence where a loop iteration is distributed over multiple, synchronized loops. The goal of DOPIPE is to act like an assembly line, where one stage is started as soon as there is sufficient data available for it from the previous stage. Consider the following synchronous code with dependence codice_24. S1 must be executed sequentially, but S2 has no loop-carried dependence. S2 could be executed in parallel using DOALL parallelism after performing all calculations needed by S1 in series. However, the speedup is limited if this is done. A better approach is to parallelize such that the S2 corresponding to each S1 executes when said S1 is finished. Implementing pipelined parallelism results in the following set of loops, where the second loop may execute for an index as soon as the first loop has finished its corresponding index. Let's say the times of execution of S1 and S2 are formula_0 and formula_1; then the execution time for the sequential form of the above code is formula_2. Because DOPIPE parallelism exists, speed-up may be achieved by executing iterations in a pipelined fashion, which gives us an execution time of formula_6, where p is the number of processors working in parallel. References. <templatestyles src="Reflist/styles.css" />
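Since the code listings referred to throughout this article are not reproduced in this extract, the following self-contained C program is offered as an illustrative sketch of the DOALL pattern described above; the array names, the problem size and the loop body are assumptions made for the example, and the OpenMP pragma (compile with -fopenmp) is one common way to express that every iteration may run in parallel.

#include <stdio.h>

#define N 1000

int main(void) {
    int a[N], b[N], c[N];

    /* Initialise the inputs sequentially. */
    for (int i = 0; i < N; i++) {
        b[i] = i;
        c[i] = 2 * i;
    }

    /* DOALL: no iteration reads or writes data produced by another
       iteration, so the iterations may execute in any order, or all
       at once.  The pragma is simply ignored if OpenMP is not enabled. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = b[i] + c[i];    /* S1: independent across iterations */
    }

    printf("a[N-1] = %d\n", a[N - 1]);
    return 0;
}

A loop of the form a[i] = a[i-1] + b[i], by contrast, carries a dependence from one iteration to the next and would need the DOACROSS- or DOPIPE-style synchronization discussed above before it could be parallelized safely.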
[ { "math_id": 0, "text": "T_{S_1}" }, { "math_id": 1, "text": "T_{S_2}\n" }, { "math_id": 2, "text": "n*(T_{S_1}+T_{S_2})" }, { "math_id": 3, "text": "n*T_{S_1} + T_{S_2}" }, { "math_id": 4, "text": "n*T_{S_1}" }, { "math_id": 5, "text": "T_{S_1} + n*T_{S_2}" }, { "math_id": 6, "text": "n*T_{S_1} + (n/p)*T_{S_2}" } ]
https://en.wikipedia.org/wiki?curid=14689525
146903
Texture mapping
Method of defining surface detail on a computer-generated graphic or 3D model Texture mapping is a method for mapping a texture on a computer-generated graphic. "Texture" in this context can be high frequency detail, surface texture, or color. History. The original technique was pioneered by Edwin Catmull in 1974 as part of his doctoral thesis. Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface ("wrapping" the image around the object). In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique (controlled by a materials system) have made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene. Texture maps. A &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles. They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources (which may be located in device memory) as buffers or surfaces, and may allow 'render to texture' for additional effects such as post processing or environment mapping. They usually contain RGB color data (either stored as direct color, compressed formats, or indexed color), and sometimes an additional channel for alpha blending (RGBA) especially for billboards and "decal" overlay textures. It is possible to use the alpha channel (which may be convenient to store in formats parsed by hardware) for other uses such as specularity. Multiple texture maps (or channels) may be combined for control over specularity, normals, displacement, or subsurface scattering e.g. for skin rendering. Multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware. (They may be considered a modern evolution of tile map graphics). Modern hardware often supports cube map textures with multiple faces for environment mapping. Creation. Texture maps may be acquired by scanning/digital photography, designed in image manipulation software such as GIMP, Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush. Texture application. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2d case is also known as UV coordinates). This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material. This might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping. More complex mappings may consider the distance along a surface to minimize distortion. 
These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. Textures may be repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they may have a one-to-one unique "injective" mapping from every piece of a surface (which is important for render mapping and light mapping, also known as baking). Texture space. Texture mapping maps the model surface (or screen space during rasterization) into texture space; in this space, the texture map is visible in its undistorted form. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques such as subsurface scattering may be performed approximately by texture-space operations. Multitexturing. Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher frequency details, and dirt maps may add weathering and variation; this can greatly reduce the apparent periodicity of repeating textures. Modern graphics may use more than 10 layers, which are combined using shaders, for greater fidelity. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface (such as tree bark or rough concrete) that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games, as graphics hardware has become powerful enough to accommodate it in real-time. Texture filtering. The way that samples (e.g. when viewed as pixels on the screen) are calculated from the texels (texture pixels) is governed by texture filtering. The cheapest method is to use the nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique viewing angles. Texture streaming. Texture streaming is a means of using data streams for textures, where each texture is available in two or more different resolutions, as to determine which texture should be loaded into memory and used based on draw distance from the viewer and how much memory is available for textures. Texture streaming allows a rendering engine to use low resolution textures for objects far away from the viewer's camera, and resolve those into more detailed textures, read from a data source, as the point of view nears the objects. Baking. As an optimization, it is possible to render detail from a complex, high-resolution model or expensive process (such as global illumination) into a surface texture (possibly on a low-resolution model). "Baking" is also known as render mapping. This technique is most commonly used for light maps, but may also be used to generate normal maps and displacement maps. Some computer games (e.g. Messiah) have used this technique. The original Quake software engine used on-the-fly baking to combine light maps and colour maps ("surface caching"). 
Baking can be used as a form of level of detail generation, where a complex scene with many different elements and materials may be approximated by a "single" element with a "single" texture, which is then algorithmically reduced for lower rendering cost and fewer drawcalls. It is also used to take high-detail models from 3D sculpting software and point cloud scanning and approximate them with meshes more suitable for realtime rendering. Rasterisation algorithms. Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility and performance. Affine texture mapping. Affine texture mapping linearly interpolates texture coordinates across a surface, and so is the fastest form of texture mapping. Some software and hardware (such as the original PlayStation) project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates "in screen space" between them. This may be done by incrementing fixed point UV coordinates, or by an incremental error algorithm akin to Bresenham's line algorithm. In contrast to perpendicular polygons, this leads to noticeable distortion with perspective transformations (see figure: the checker box texture appears bent), especially as primitives near the camera. Such distortion may be reduced with the subdivision of the polygon into smaller ones. For the case of rectangular objects, using quad primitives can look less incorrect than the same rectangle split into triangles, but because interpolating 4 points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the forward texture mapping used by the Nvidia NV1, was able to offer efficient quad primitives. With perspective correction (see below) triangles become equivalent and this advantage disappears. For rectangular objects that are at right angles to the viewer, like floors and walls, the perspective only needs to be corrected in one direction across the screen, rather than both. The correct perspective mapping can be calculated at the left and right edges of the floor, and then an affine linear interpolation across that horizontal span will look correct, because every pixel along that line is the same distance from the viewer. Perspective correctness. Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating coordinates in 2D screen space. This achieves the correct visual effect but it is more expensive to calculate. To perform perspective correction of the texture coordinates formula_0 and formula_1, with formula_2 being the depth component from the viewer's point of view, we can take advantage of the fact that the values formula_3, formula_4, and formula_5 are linear in screen space across the surface being textured. In contrast, the original formula_2, formula_0 and formula_1, before the division, are not linear across the surface in screen space. We can therefore linearly interpolate these reciprocals across the surface, computing corrected values at each pixel, to result in a perspective correct texture mapping. To do this, we first calculate the reciprocals at each vertex of our geometry (3 points for a triangle). For vertex formula_6 we have formula_7. Then, we linearly interpolate these reciprocals between the formula_6 vertices (e.g., using barycentric coordinates), resulting in interpolated values across the surface. 
At a given point, this yields the interpolated formula_8, and formula_9. Note that this formula_8 cannot be yet used as our texture coordinates as our division by formula_2 altered their coordinate system. To correct back to the formula_10 space we first calculate the corrected formula_2 by again taking the reciprocal formula_11. Then we use this to correct our formula_8: formula_12 and formula_13. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider) and in parts that are farther away this difference is larger (compressing the texture). Affine texture mapping directly interpolates a texture coordinate formula_14 between two endpoints formula_15 and formula_16: formula_17 where formula_18 Perspective correct mapping interpolates after dividing by depth formula_19, then uses its interpolated reciprocal to recover the correct coordinate: formula_20 3D graphics hardware typically supports perspective correct texturing. Various techniques have evolved for rendering texture mapped geometry into images with different quality/precision tradeoffs, which can be applied to both software and hardware. Classic software texture mappers generally did only simple mapping with at most one lighting effect (typically applied through a lookup table), and the perspective correctness was about 16 times more expensive. Restricted camera rotation. The "Doom engine" restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. After performing one perspective correction calculation for the depth, the rest of the line could use fast affine mapping. Some later renderers of this era simulated a small amount of camera pitch with shearing which allowed the appearance of greater freedom whilst using the same rendering technique. Some engines were able to render texture mapped Heightmaps (e.g. Nova Logic's Voxel Space, and the engine for Outcast) via Bresenham-like incremental algorithms, producing the appearance of a texture mapped landscape without the use of traditional geometric primitives. Subdivision for perspective correction. Every triangle can be further subdivided into groups of about 16 pixels in order to achieve two goals. First, keeping the arithmetic mill busy at all times. Second, producing faster arithmetic results. World space subdivision. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering and affine mapping is used on them. The reason this technique works is that the distortion of affine mapping becomes much less noticeable on smaller polygons. The Sony PlayStation made extensive use of this because it only supported affine mapping in hardware but had a relatively high triangle throughput compared to its peers. Screen space subdivision. Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2d affine interpolation) and thus again the overhead (also affine texture-mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much more suited). 
A different approach was taken for "Quake", which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor. The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z but the effort seems not to be worth it. Other techniques. Another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided but the amount of bookkeeping makes this method too slow on most systems. Finally, the Build engine extended the constant distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it. Hardware implementations. Texture mapping hardware was originally developed for simulation (e.g. as implemented in the Evans and Sutherland ESIG and Singer-Link Digital Image Generators DIG), and professional graphics workstations such as Silicon Graphics, broadcast digital video effects machines such as the Ampex ADO and later appeared in Arcade cabinets, consumer video game consoles, and PC video cards in the mid-1990s. In flight simulation, texture mapping provided important motion and altitude cues necessary for pilot training not available on untextured surfaces. It was also in flight simulation applications, that texture mapping was implemented for real-time processing with prefiltered texture patterns stored in memory for real-time access by the video processor. Modern graphics processing units (GPUs) provide specialised fixed function units called "texture samplers", or "texture mapping units", to perform texture mapping, usually with trilinear filtering or better multi-tap anisotropic filtering and hardware for decoding specific formats such as DXTn. As of 2016, texture mapping hardware is ubiquitous as most SOCs contain a suitable GPU. Some hardware combines texture mapping with hidden-surface determination in tile based deferred rendering or scanline rendering; such systems only fetch the visible texels at the expense of using greater workspace for transformed vertices. Most systems have settled on the Z-buffering approach, which can still reduce the texture mapping workload with front-to-back sorting. Among earlier graphics hardware, there were two competing paradigms of how to deliver a texture to the screen: Inverse texture mapping is the method which has become standard in modern hardware. Inverse texture mapping. With this method, a pixel on the screen is mapped to a point on the texture. Each vertex of a rendering primitive is projected to a point on the screen, and each of these points is mapped to a u,v texel coordinate on the texture. A rasterizer will interpolate between these points to fill in each pixel covered by the primitive. The primary advantage is that each pixel covered by a primitive will be traversed exactly once. Once a primitive's vertices are transformed, the amount of remaining work scales directly with how many pixels it covers on the screen. 
The main disadvantage versus forward texture mapping is that the memory access pattern in the texture space will not be linear if the texture is at an angle to the screen. This disadvantage is often addressed by texture caching techniques, such as the swizzled texture memory arrangement. The linear interpolation can be used directly for simple and efficient affine texture mapping, but can also be adapted for perspective correctness. Forward texture mapping. Forward texture mapping maps each texel of the texture to a pixel on the screen. After transforming a rectangular primitive to a place on the screen, a forward texture mapping renderer iterates through each texel on the texture, splatting each one onto a pixel of the frame buffer. This was used by some hardware, such as the 3DO, the Sega Saturn and the NV1. The primary advantage is that the texture will be accessed in a simple linear order, allowing very efficient caching of the texture data. However, this benefit is also its disadvantage: as a primitive gets smaller on screen, it still has to iterate over every texel in the texture, causing many pixels to be overdrawn redundantly. This method is also well suited for rendering quad primitives rather than reducing them to triangles, which provided an advantage when perspective correct texturing was not available in hardware. This is because the affine distortion of a quad looks less incorrect than the same quad split into two triangles (see affine texture mapping above). The NV1 hardware also allowed a quadratic interpolation mode to provide an even better approximation of perspective correctness. The existing hardware implementations did not provide effective UV coordinate mapping, which became an important technique for 3D modelling and assisted in clipping the texture correctly when the primitive goes over the edge of the screen. These shortcomings could have been addressed with further development, but GPU design has since mostly moved toward inverse mapping. Applications. Beyond 3D rendering, the availability of texture mapping hardware has inspired its use for accelerating other tasks: Tomography. It is possible to use texture mapping hardware to accelerate both the reconstruction of voxel data sets from tomographic scans, and to visualize the results. User interfaces. Many user interfaces use texture mapping to accelerate animated transitions of screen elements, e.g. Exposé in Mac OS X. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
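To make the perspective-correctness formulas in this article concrete, here is a small illustrative C program; it is a sketch rather than code from any real renderer, and the endpoint values are assumed. It interpolates a texture coordinate between two endpoints by linearly interpolating u/z and 1/z and then dividing, exactly as in the perspective-correct interpolation formula given above, and prints the affine result alongside for comparison.

#include <stdio.h>

/* Perspective-correct interpolation of a texture coordinate u between
   endpoints (u0, z0) and (u1, z1), at parameter t in [0, 1]. */
static double perspective_u(double u0, double z0, double u1, double z1, double t) {
    double u_over_z   = (1.0 - t) * (u0 / z0) + t * (u1 / z1);
    double one_over_z = (1.0 - t) * (1.0 / z0) + t * (1.0 / z1);
    return u_over_z / one_over_z;
}

/* Affine interpolation ignores depth and therefore distorts the texture. */
static double affine_u(double u0, double u1, double t) {
    return (1.0 - t) * u0 + t * u1;
}

int main(void) {
    double u0 = 0.0, z0 = 1.0;   /* near endpoint (assumed values) */
    double u1 = 1.0, z1 = 4.0;   /* far endpoint (assumed values)  */
    for (int i = 0; i <= 4; i++) {
        double t = i / 4.0;
        printf("t = %.2f   affine u = %.3f   perspective-correct u = %.3f\n",
               t, affine_u(u0, u1, t), perspective_u(u0, z0, u1, z1, t));
    }
    return 0;
}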
[ { "math_id": 0, "text": "u" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "\\frac{1}{z}" }, { "math_id": 4, "text": "\\frac{u}{z}" }, { "math_id": 5, "text": "\\frac{v}{z}" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "\\frac{u_n}{z_n}, \\frac{v_n}{z_n}, \\frac{1}{z_n}" }, { "math_id": 8, "text": "u_i, v_i" }, { "math_id": 9, "text": "zReciprocal_i = \\frac{1}{z_i}" }, { "math_id": 10, "text": "u, v" }, { "math_id": 11, "text": "z_{correct} = \\frac{1}{zReciprocal_i} = \\frac{1}{\\frac{1}{z_i}}" }, { "math_id": 12, "text": "u_{correct} = u_i \\cdot z_i" }, { "math_id": 13, "text": " v_{correct} = v_i \\cdot z_i" }, { "math_id": 14, "text": "u^{}_{\\alpha}" }, { "math_id": 15, "text": "u^{}_0" }, { "math_id": 16, "text": "u^{}_1" }, { "math_id": 17, "text": "u^{}_{\\alpha}= (1 - \\alpha ) u_0 + \\alpha u_1" }, { "math_id": 18, "text": "0 \\le \\alpha \\le 1" }, { "math_id": 19, "text": "z^{}_{}" }, { "math_id": 20, "text": "u^{}_{\\alpha}= \\frac{ (1 - \\alpha ) \\frac{ u_0 }{ z_0 } + \\alpha \\frac{ u_1 }{ z_1 } }{ (1 - \\alpha ) \\frac{ 1 }{ z_0 } + \\alpha \\frac{ 1 }{ z_1 } }" } ]
https://en.wikipedia.org/wiki?curid=146903
146904
Z-buffering
Type of data buffer in computer graphics A depth buffer, also known as a z-buffer, is a type of data buffer used in computer graphics to represent depth information of objects in 3D space from a particular perspective. The depth is stored as a height map of the scene, the values representing a distance to camera, with 0 being the closest. The encoding scheme may be flipped with the highest number being the value closest to camera. Depth buffers are an aid to rendering a scene to ensure that the correct polygons properly occlude other polygons. Z-buffering was first described in 1974 by Wolfgang Straßer in his PhD thesis on fast algorithms for rendering occluded objects. A similar solution to determining overlapping polygons is the painter's algorithm, which is capable of handling non-opaque scene elements, though at the cost of efficiency and incorrect results. In a 3D-rendering pipeline, when an object is projected on the screen, the depth (z-value) of a generated fragment in the projected screen image is compared to the value already stored in the buffer (depth test), and replaces it if the new value is closer. It works in tandem with the rasterizer, which computes the colored values. The fragment output by the rasterizer is saved if it is not overlapped by another fragment. When viewing an image containing partially or fully overlapping opaque objects or surfaces, it is not possible to fully see those objects that are farthest away from the viewer and behind other objects (i.e., some surfaces are hidden behind others). If there were no mechanism for managing overlapping surfaces, surfaces would render on top of each other, not caring if they are meant to be behind other objects. The identification and removal of these surfaces are called the hidden-surface problem. To check for overlap, the computer calculates the z-value of a pixel corresponding to the first object and compares it with the z-value at the same pixel location in the z-buffer. If the calculated z-value is smaller than the z-value already in the z-buffer (i.e., the new pixel is closer), then the current z-value in the z-buffer is replaced with the calculated value. This is repeated for all objects and surfaces in the scene (often in parallel). In the end, the z-buffer will allow correct reproduction of the usual depth perception: a close object hides one further away. This is called z-culling. The z-buffer has the same internal data structure as an image, namely a 2D-array, with the only difference being that it stores a single value for each screen pixel instead of color images that use 3 values to create color. This makes the z-buffer appear black-and-white because it is not storing color information. The buffer has the same dimensions as the screen buffer for consistency. Primary visibility tests (such as back-face culling) and secondary visibility tests (such as overlap checks and screen clipping) are usually performed on objects' polygons in order to skip specific polygons that are unnecessary to render. Z-buffer, by comparison, is comparatively expensive, so performing primary and secondary visibility tests relieve the z-buffer of some duty. The granularity of a z-buffer has a great influence on the scene quality: the traditional 16-bit z-buffer can result in artifacts (called "z-fighting" or stitching) when two objects are very close to each other. A more modern 24-bit or 32-bit z-buffer behaves much better, although the problem cannot be eliminated without additional algorithms. 
An 8-bit z-buffer is almost never used since it has too little precision. Uses. Z-buffering is a technique used in almost all contemporary computers, laptops, and mobile phones for performing 3D computer graphics. The primary use now is for video games, which require fast and accurate processing of 3D scenes. Z-buffers are often implemented in hardware within consumer graphics cards. Z-buffering is also used (implemented as software as opposed to hardware) for producing computer-generated special effects for films. Furthermore, Z-buffer data obtained from rendering a surface from a light's point-of-view permits the creation of shadows by the shadow mapping technique. Developments. Even with small enough granularity, quality problems may arise when precision in the z-buffer's distance values are not spread evenly over distance. Nearer values are much more precise (and hence can display closer objects better) than values that are farther away. Generally, this is desirable, but sometimes it will cause artifacts to appear as objects become more distant. A variation on z-buffering which results in more evenly distributed precision is called w-buffering (see below). At the start of a new scene, the z-buffer must be cleared to a defined value, usually 1.0, because this value is the upper limit (on a scale of 0 to 1) of depth, meaning that no object is present at this point through the viewing frustum. The invention of the z-buffer concept is most often attributed to Edwin Catmull, although Wolfgang Straßer described this idea in his 1974 Ph.D. thesis months before Catmull's invention. On more recent PC graphics cards (1999–2005), z-buffer management uses a significant chunk of the available memory bandwidth. Various methods have been employed to reduce the performance cost of z-buffering, such as lossless compression (computer resources to compress/decompress are cheaper than bandwidth) and ultra-fast hardware z-clear that makes obsolete the "one frame positive, one frame negative" trick (skipping inter-frame clear altogether using signed numbers to cleverly check depths). Some games, notably several games later in the N64's life cycle, decided to either minimize Z buffering (for example, rendering the background first without z buffering and only using Z buffering for the foreground objects) or to omit it entirely, to reduce memory bandwidth requirements and memory requirements respectively. Super Smash Bros. and F-Zero X are two N64 games that minimized Z buffering to increase framerates. Several Factor 5 games also minimized or omitted Z buffering. On the N64 Z Buffering can consume up to 4x as much bandwidth as opposed to not using Z buffering. on PC supported resolutions up to 800x600 on the original 4 MB 3DFX Voodoo due to not using Z Buffering. Z-culling. In rendering, z-culling is early pixel elimination based on depth, a method that provides an increase in performance when rendering of hidden surfaces is costly. It is a direct consequence of z-buffering, where the depth of each pixel candidate is compared to the depth of the existing geometry behind which it might be hidden. When using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to skip the entire process of lighting and texturing a pixel that would not be visible anyway. Also, time-consuming pixel shaders will generally not be executed for the culled pixels. 
This makes z-culling a good optimization candidate in situations where fillrate, lighting, texturing, or pixel shaders are the main bottlenecks. While z-buffering allows the geometry to be unsorted, sorting polygons by increasing depth (thus using a reverse painter's algorithm) allows each screen pixel to be rendered fewer times. This can increase performance in fillrate-limited scenes with large amounts of overdraw, but if not combined with z-buffering it suffers from severe problems such as: As such, a reverse painter's algorithm cannot be used as an alternative to Z-culling (without strenuous re-engineering), except as an optimization to Z-culling. For example, an optimization might be to keep polygons sorted according to x/y-location and z-depth to provide bounds, in an effort to quickly determine if two polygons might possibly have an occlusion interaction. Mathematics. The range of depth values in camera space to be rendered is often defined between a formula_0 and formula_1 value of formula_2. After a perspective transformation, the new value of formula_2, or formula_3, is defined by: formula_4 After an orthographic projection, the new value of formula_2, or formula_3, is defined by: formula_5 where formula_2 is the old value of formula_2 in camera space, and is sometimes called formula_6 or formula_7. The resulting values of formula_3 are normalized between the values of -1 and 1, where the formula_0 plane is at -1 and the formula_8 plane is at 1. Values outside of this range correspond to points which are not in the viewing frustum, and shouldn't be rendered. Fixed-point representation. Typically, these values are stored in the z-buffer of the hardware graphics accelerator in fixed point format. First they are normalized to a more common range which is [0, 1] by substituting the appropriate conversion formula_9 into the previous formula: formula_10 Simplifying: formula_11 Second, the above formula is multiplied by formula_12 where d is the depth of the z-buffer (usually 16, 24 or 32 bits) and rounding the result to an integer: formula_13 This formula can be inverted and derived in order to calculate the z-buffer resolution (the 'granularity' mentioned earlier). The inverse of the above formula_14: formula_15 where formula_12 The z-buffer resolution in terms of camera space would be the incremental value resulted from the smallest change in the integer stored in the z-buffer, which is +1 or -1. Therefore, this resolution can be calculated from the derivative of formula_2 as a function of formula_3: formula_16 Expressing it back in camera space terms, by substituting formula_3 by the above formula_14: formula_17 This shows that the values of formula_3 are grouped much more densely near the formula_0 plane, and much more sparsely farther away, resulting in better precision closer to the camera. The smaller formula_18 is, the less precision there is far away—having the formula_18 plane set too closely is a common cause of undesirable rendering artifacts in more distant objects. To implement a z-buffer, the values of formula_3 are linearly interpolated across screen space between the vertices of the current polygon, and these intermediate values are generally stored in the z-buffer in fixed point format. W-buffer. To implement a w-buffer, the old values of formula_2 in camera space, or formula_6, are stored in the buffer, generally in floating point format. 
However, these values cannot be linearly interpolated across screen space from the vertices—they usually have to be inverted, interpolated, and then inverted again. The resulting values of formula_6, as opposed to formula_3, are spaced evenly between formula_0 and formula_1. There are implementations of the w-buffer that avoid the inversions altogether. Whether a z-buffer or w-buffer results in a better image depends on the application. Algorithmics. The following pseudocode demonstrates the process of z-buffering: References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
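The pseudocode referred to in the Algorithmics section is not reproduced in this extract; as a stand-in, the following is a minimal illustrative C sketch of the per-fragment depth test. The buffer sizes, the use of floating-point camera-space depth, and the clear value are assumptions made for the example; as described above, real hardware typically stores normalized fixed-point values instead.

#include <float.h>
#include <stdio.h>

#define WIDTH  320
#define HEIGHT 240

static float        depth_buffer[HEIGHT][WIDTH];
static unsigned int color_buffer[HEIGHT][WIDTH];

/* Clear the depth buffer to the farthest representable value. */
static void clear_buffers(void) {
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            depth_buffer[y][x] = FLT_MAX;
            color_buffer[y][x] = 0;
        }
}

/* Depth test: keep the fragment only if it is closer than whatever is
   already stored at this pixel (smaller depth means closer to the camera). */
static void write_fragment(int x, int y, float z, unsigned int color) {
    if (z < depth_buffer[y][x]) {
        depth_buffer[y][x] = z;
        color_buffer[y][x] = color;
    }
}

int main(void) {
    clear_buffers();
    write_fragment(10, 10, 5.0f, 0xFF0000u);  /* far fragment drawn first    */
    write_fragment(10, 10, 2.0f, 0x00FF00u);  /* nearer fragment replaces it */
    write_fragment(10, 10, 9.0f, 0x0000FFu);  /* farther fragment is culled  */
    printf("color at (10,10) = %06X\n", color_buffer[10][10]);
    return 0;
}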
[ { "math_id": 0, "text": "\\textit{near}" }, { "math_id": 1, "text": "\\textit{far}" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "z'" }, { "math_id": 4, "text": "z'=\n \\frac{\\textit{far} + \\textit{near}}{\\textit{far} - \\textit{near}} +\n \\frac{1}{z} \\left(\\frac{-2 \\cdot \\textit{far} \\cdot \\textit{near}}{\\textit{far} - \\textit{near}}\\right)\n" }, { "math_id": 5, "text": "z'=\n 2 \\cdot \\frac{{z} - \\textit{near}}{\\textit{far}-\\textit{near}} - 1\n" }, { "math_id": 6, "text": "w" }, { "math_id": 7, "text": "w'" }, { "math_id": 8, "text": "\\mathit{far}" }, { "math_id": 9, "text": "z'_2 = \\frac{1}{2}\\left(z'_1 + 1\\right)" }, { "math_id": 10, "text": "z'=\n \\frac{\\textit{far} + \\textit{near}}{2 \\cdot \\left( \\textit{far} - \\textit{near} \\right)} + \\frac{1}{2} +\n \\frac{1}{z} \\left(\\frac{-\\textit{far} \\cdot \\textit{near}}{\\textit{far} - \\textit{near}}\\right) \n" }, { "math_id": 11, "text": "z'=\n \\frac{\\textit{far}}{\\left( \\textit{far} - \\textit{near} \\right)} +\n \\frac{1}{z} \\left(\\frac{-\\textit{far} \\cdot \\textit{near}}{\\textit{far} - \\textit{near}}\\right) \n" }, { "math_id": 12, "text": "S = 2^d - 1" }, { "math_id": 13, "text": "z' = f(z) = \\left\\lfloor\n \\left(2^d - 1\\right) \\cdot \\left(\\frac{\\textit{far}}{\\left( \\textit{far} - \\textit{near} \\right)} +\n \\frac{1}{z} \\left(\\frac{-\\textit{far} \\cdot \\textit{near}}{\\textit{far} - \\textit{near}}\\right)\n \\right)\\right\\rfloor\n" }, { "math_id": 14, "text": "f(z)\\," }, { "math_id": 15, "text": "z =\n \\frac{-\\textit{far} \\cdot \\textit{near}}{\\frac{z'}{S}\\left(\\textit{far} - \\textit{near}\\right) - \\textit{far}} =\n \\frac{-S \\cdot \\textit{far} \\cdot \\textit{near}}{z'\\left(\\textit{far} - \\textit{near}\\right) - \\textit{far} \\cdot S}\n" }, { "math_id": 16, "text": "\\frac{dz}{dz'} =\n \\frac{-1 \\cdot -1 \\cdot S \\cdot \\textit{far} \\cdot \\textit{near}}\n {\\left( z'\\left(\\textit{far} - \\textit{near}\\right) - \\textit{far} \\cdot S \\right)^2} \\cdot \\left(\\textit{far} - \\textit{near}\\right)\n" }, { "math_id": 17, "text": "\\begin{align}\n \\frac{dz}{dz'}\n &= \\frac{-1 \\cdot -1 \\cdot S \\cdot \\textit{far} \\cdot \\textit{near} \\cdot \\left(\\textit{far} - \\textit{near}\\right)}\n {\\left(S \\cdot \\left(\\frac{-\\textit{far} \\cdot \\textit{near}}{z} + \\textit{far}\\right) - \\textit{far} \\cdot S \\right)^2} \\\\\n &= \\frac{\\left(\\textit{far} - \\textit{near}\\right) \\cdot z^2}{S \\cdot \\textit{far} \\cdot \\textit{near}} \\\\\n &= \\frac{z^2}{S \\cdot \\textit{near}} - \\frac{z^2}{S \\cdot \\textit{far}}\n \\approx \\frac{z^2}{S \\cdot \\textit{near}}\n\\end{align}" }, { "math_id": 18, "text": "near" } ]
https://en.wikipedia.org/wiki?curid=146904
14692219
Short division
In arithmetic, short division is a division algorithm which breaks down a division problem into a series of easier steps. It is an abbreviated form of long division — whereby the products are omitted and the partial remainders are notated as superscripts. As a result, a short division tableau is shorter than its long division counterpart — though sometimes at the expense of relying on mental arithmetic, which could limit the size of the divisor. For most people, small integer divisors up to 12 are handled using memorised multiplication tables, although the procedure can also be adapted to larger divisors. As in all division problems, a number called the "dividend" is divided by another, called the "divisor". The answer to the problem would be the "quotient", and in the case of Euclidean division, the remainder would be included as well. Using short division, arbitrarily large dividends can be handled. Tableau. Short division does not use the slash (/) or division sign (÷) symbols. Instead, it displays the dividend, divisor, and quotient (when it is found) in a tableau. An example is shown below, representing the division of 500 by 4. The quotient is 125. formula_0 Alternatively, the bar may be placed below the number, which means the sum proceeds down the page. This is in distinction to long division, where the space under the dividend is required for workings: formula_1 Example. The procedure involves several steps. As an example, consider 950 divided by 4: the first digit of the dividend, 9, is divided by 4, giving 2 with a remainder of 1; the remainder is prefixed to the next digit to make 15, and 15 divided by 4 gives 3 remainder 3; prefixing that remainder to the next digit makes 30, and 30 divided by 4 gives 7 remainder 2; appending a decimal zero and prefixing the remainder makes 20, which divided by 4 gives 5 exactly, so 950 divided by 4 is 237.5. Using the alternative layout the final workings would be: formula_2 As usual, similar steps can also be used to handle the cases with a decimal dividend, or the cases where the divisor involves multiple digits. Prime factoring. A common requirement is to reduce a number to its prime factors. This is used particularly in working with vulgar fractions. The dividend is successively divided by prime numbers, repeating where possible: formula_3 This results in 950 = 2 × 5² × 19 Modulo division. When one is interested only in the remainder of the division, this procedure (a variation of short division) ignores the quotient and tallies only the remainders. It can be used for manual modulo calculation or as a test for even divisibility. The quotient digits are not written down. The following shows the solution (using short division) of 16762109 divided by seven. formula_4 The remainder is zero, so 16762109 is exactly divisible by 7. As an automaton. Given a divisor "k", this procedure can be written as a deterministic finite automaton with "k" states, each corresponding to a possible remainder. This implies that the set of numbers divisible by "k" is a regular language. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
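A minimal Python sketch of the digit-by-digit procedure described above; the `short_division` helper name and the choice to stop at the integer quotient and final remainder (rather than continuing into decimal places) are illustrative assumptions, not part of the article.

```python
def short_division(dividend: int, divisor: int):
    """Divide digit by digit, carrying the remainder, as in short division."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):          # work left to right through the dividend
        value = remainder * 10 + int(digit)
        quotient_digits.append(str(value // divisor))
        remainder = value % divisor      # the small superscript carried forward
    quotient = int("".join(quotient_digits))
    return quotient, remainder

print(short_division(950, 4))       # (237, 2)  i.e. 237 remainder 2, or 237.5
print(short_division(16762109, 7))  # (2394587, 0)  -> exactly divisible by 7
```

Keeping only the running `remainder` and discarding the quotient digits gives the modulo-division variant described above, which is also the deterministic finite automaton view: the remainder is the automaton's state.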
[ { "math_id": 0, "text": "\n\\begin{array}{r}\n125\\\\\n4\\overline{)500}\\\\\n\\end{array}\n" }, { "math_id": 1, "text": "\n\\begin{array}{r}\n4\\underline{)500}\\\\\n125\\\\\n\\end{array}\n" }, { "math_id": 2, "text": "\n\\begin{array}{r}\n4\\underline{)9^{1}5^{3}0.^{2}0}\\\\\n2^{\\color{White}1}3^{\\color{White}3}7.^{\\color{White}2}5\\\\\n\\end{array}\n" }, { "math_id": 3, "text": "\n\\begin{array}{r}\n2\\underline{)950}\\\\\n5\\underline{)475}\\\\\n5\\underline{){\\color{White}0}95}\\\\\n\\ \\ \\ 19\\\\\n\\end{array}\n" }, { "math_id": 4, "text": "\n\\begin{matrix}\n7)16^27^66^32^41^60^49^0\n\\end{matrix}\n" } ]
https://en.wikipedia.org/wiki?curid=14692219
14692673
Quantile regression
Statistical modeling technique Quantile regression is a type of regression analysis used in statistics and econometrics. Whereas the method of least squares estimates the conditional "mean" of the response variable across values of the predictor variables, quantile regression estimates the conditional "median" (or other "quantiles") of the response variable. Quantile regression is an extension of linear regression used when the conditions of linear regression are not met. Advantages and applications. One advantage of quantile regression relative to ordinary least squares regression is that the quantile regression estimates are more robust against outliers in the response measurements. However, the main attraction of quantile regression goes beyond this and is advantageous when conditional quantile functions are of interest. Different measures of central tendency and statistical dispersion can be used to more comprehensively analyze the relationship between variables. In ecology, quantile regression has been proposed and used as a way to discover more useful predictive relationships between variables in cases where there is no relationship or only a weak relationship between the means of such variables. The need for and success of quantile regression in ecology has been attributed to the complexity of interactions between different factors leading to data with unequal variation of one variable for different ranges of another variable. Another application of quantile regression is in the areas of growth charts, where percentile curves are commonly used to screen for abnormal growth. History. The idea of estimating a median regression slope, a major theorem about minimizing sum of the absolute deviances and a geometrical algorithm for constructing median regression was proposed in 1760 by Ruđer Josip Bošković, a Jesuit Catholic priest from Dubrovnik. He was interested in the ellipticity of the earth, building on Isaac Newton's suggestion that its rotation could cause it to bulge at the equator with a corresponding flattening at the poles. He finally produced the first geometric procedure for determining the equator of a rotating planet from three observations of a surface feature. More importantly for quantile regression, he was able to develop the first evidence of the least absolute criterion and preceded the least squares introduced by Legendre in 1805 by fifty years. Other thinkers began building upon Bošković's idea such as Pierre-Simon Laplace, who developed the so-called "methode de situation." This led to Francis Edgeworth's plural median - a geometric approach to median regression - and is recognized as the precursor of the simplex method. The works of Bošković, Laplace, and Edgeworth were recognized as a prelude to Roger Koenker's contributions to quantile regression. Median regression computations for larger data sets are quite tedious compared to the least squares method, for which reason it has historically generated a lack of popularity among statisticians, until the widespread adoption of computers in the latter part of the 20th century. Background: quantiles. Quantile regression expresses the conditional quantiles of a dependent variable as a linear function of the explanatory variables. Crucial to the practicality of quantile regression is that the quantiles can be expressed as the solution of a minimization problem, as we will show in this section before discussing conditional quantiles in the next section. Quantile of a random variable. 
Let formula_0 be a real-valued random variable with cumulative distribution function formula_1. The formula_2th quantile of Y is given by formula_3 where formula_4 Define the loss function as formula_5, where formula_6 is an indicator function. A specific quantile can be found by minimizing the expected loss of formula_7 with respect to formula_8 (pp. 5): formula_9 This can be shown by computing the derivative of the expected loss with respect to formula_8 via an application of the Leibniz integral rule, setting it to 0, and letting formula_10 be the solution of formula_11 This equation reduces to formula_12 and then to formula_13 If the solution formula_10 is not unique, then we have to take the smallest such solution to obtain the formula_2th quantile of the random variable "Y". Example. Let formula_0 be a discrete random variable that takes the values formula_14 for formula_15, each with equal probability. The task is to find the median of Y, and hence the value formula_16 is chosen. Then the expected loss of formula_7 is formula_17formula_18formula_19formula_18formula_20formula_21formula_22formula_18formula_23formula_18formula_24 Since formula_25 is a constant, it can be taken out of the expected loss function (this is only true if formula_16). Then, at "u"=3, formula_26formula_27formula_28formula_29formula_30 Suppose that "u" is increased by 1 unit. Then the expected loss will be changed by formula_31 on changing "u" to 4. If "u"=5, the expected loss is formula_32 and any change in "u" will increase the expected loss. Thus "u"=5 is the median. The expected loss (divided by formula_25) for "u" = 1, 2, ..., 9 is 36, 29, 24, 21, 20, 21, 24, 29 and 36 respectively, confirming that the minimum is attained at the median "u" = 5. Intuition. Consider formula_16 and let "q" be an initial guess for formula_10. The expected loss evaluated at "q" is formula_33 In order to minimize the expected loss, we move the value of "q" a little bit to see whether the expected loss will rise or fall. Suppose we increase "q" by 1 unit. Then the change of expected loss would be formula_34 The first term of the equation is formula_35 and the second term of the equation is formula_36. Therefore, the change of expected loss function is negative if and only if formula_37, that is if and only if "q" is smaller than the median. Similarly, if we reduce "q" by 1 unit, the change of expected loss function is negative if and only if "q" is larger than the median. In order to minimize the expected loss function, we would increase (decrease) "q" if "q" is smaller (larger) than the median, until "q" reaches the median. The idea behind the minimization is to count the number of points (weighted with the density) that are larger or smaller than "q" and then move "q" to a point where "q" is larger than formula_38% of the points. Sample quantile. The formula_2 sample quantile can be obtained by using an importance sampling estimate and solving the following minimization problem formula_39 formula_40, where the function formula_41 is the tilted absolute value function. The intuition is the same as for the population quantile. Conditional quantile and quantile regression. The formula_2th conditional quantile of formula_0 given formula_42 is the formula_2th quantile of the conditional probability distribution of formula_0 given formula_42, formula_43. We use a capital formula_44 to denote the conditional quantile to indicate that it is a random variable. 
In quantile regression for the formula_2th quantile we make the assumption that the formula_2th conditional quantile is given as a linear function of the explanatory variables: formula_45. Given the distribution function of formula_0, formula_46 can be obtained by solving formula_47 Solving the sample analog gives the estimator of formula_48. formula_49 Note that when formula_50, the loss function formula_51 is proportional to the absolute value function, and thus median regression is the same as linear regression by least absolute deviations. Computation of estimates for regression parameters. The mathematical forms arising from quantile regression are distinct from those arising in the method of least squares. The method of least squares leads to a consideration of problems in an inner product space, involving projection onto subspaces, and thus the problem of minimizing the squared errors can be reduced to a problem in numerical linear algebra. Quantile regression does not have this structure, and instead the minimization problem can be reformulated as a linear programming problem formula_52 where formula_53 and formula_54 Simplex methods or interior point methods can be applied to solve the linear programming problem. Asymptotic properties. For formula_55, under some regularity conditions, formula_56 is asymptotically normal: formula_57 where formula_58 and formula_59 Direct estimation of the asymptotic variance-covariance matrix is not always satisfactory. Inference for quantile regression parameters can be made with the regression rank-score tests or with the bootstrap methods. Equivariance. See invariant estimator for background on invariance or see equivariance. Scale equivariance. For any formula_60 and formula_61 formula_62 formula_63 Shift equivariance. For any formula_64 and formula_61 formula_65 Equivariance to reparameterization of design. Let formula_66 be any formula_67 nonsingular matrix and formula_68 formula_69 Invariance to monotone transformations. If formula_70 is a nondecreasing function on formula_71, the following invariance property applies: formula_72 Example (1): If formula_73 and formula_74, then formula_75. The mean regression does not have the same property since formula_76 Inference. Interpretation of the slope parameters. The linear model formula_45 mis-specifies the true systematic relation formula_77 when formula_78 is nonlinear. However, formula_45 minimizes a weighted distance to formula_79 among linear models. Furthermore, the slope parameters formula_80 of the linear model can be interpreted as weighted averages of the derivatives formula_81 so that formula_80 can be used for causal inference. Specifically, the hypothesis formula_82 for all formula_83 implies the hypothesis formula_84, which can be tested using the estimator formula_85 and its limit distribution. Goodness of fit. The goodness of fit for quantile regression for the formula_2 quantile can be defined as: formula_86 where formula_87 is the sum of weighted absolute deviations (the check-function loss) for the model with conditioning covariates, while formula_88 is the corresponding sum for the model based only on the unconditional quantile. Variants. Bayesian methods for quantile regression. Because quantile regression does not normally assume a parametric likelihood for the conditional distributions of Y|X, the Bayesian methods work with a working likelihood. A convenient choice is the asymmetric Laplacian likelihood, because the mode of the resulting posterior under a flat prior coincides with the usual quantile regression estimates. 
The posterior inference, however, must be interpreted with care. Yang, Wang and He provided a posterior variance adjustment for valid inference. In addition, Yang and He showed that one can have asymptotically valid posterior inference if the working likelihood is chosen to be the empirical likelihood. Machine learning methods for quantile regression. Beyond simple linear regression, there are several machine learning methods that can be extended to quantile regression. A switch from the squared error to the tilted absolute value loss function (a.k.a. the "pinball loss") allows gradient descent-based learning algorithms to learn a specified quantile instead of the mean. This means that we can apply all neural network and deep learning algorithms to quantile regression, which is then referred to as nonparametric quantile regression. Tree-based learning algorithms are also available for quantile regression (see, e.g., Quantile Regression Forests, as a simple generalization of Random Forests). Censored quantile regression. If the response variable is subject to censoring, the conditional mean is not identifiable without additional distributional assumptions, but the conditional quantile is often identifiable. For recent work on censored quantile regression, see: Portnoy and Wang and Wang Example (2): Let formula_89 and formula_90. Then formula_91. This is the censored quantile regression model: estimated values can be obtained without making any distributional assumptions, but at the cost of computational difficulty, some of which can be avoided by using a simple three step censored quantile regression procedure as an approximation. For random censoring on the response variables, the censored quantile regression of Portnoy (2003) provides consistent estimates of all identifiable quantile functions based on reweighting each censored point appropriately. Censored quantile regression has close links to survival analysis. Heteroscedastic errors. The quantile regression loss needs to be adapted in the presence of heteroscedastic errors in order to be efficient. Implementations. Numerous statistical software packages include implementations of quantile regression. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
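As an illustration of the minimization view of quantiles described above, the following Python sketch evaluates the tilted absolute value ("pinball") loss and recovers a sample quantile by brute-force minimization over the observed values. It is a toy illustration under assumed helper names, not any package's actual implementation.

```python
import numpy as np

def pinball_loss(u, y, tau):
    """Tilted absolute value (check) loss summed over the sample."""
    r = y - u
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

def sample_quantile(y, tau):
    """Brute-force minimizer of the pinball loss over the observed values."""
    y = np.asarray(y, dtype=float)
    losses = [pinball_loss(u, y, tau) for u in y]
    return y[int(np.argmin(losses))]

y = np.arange(1, 10)            # the discrete example above: 1, 2, ..., 9
print(sample_quantile(y, 0.5))  # 5.0, the median found by minimizing the loss
```

A full quantile regression replaces the scalar argument with a linear predictor formula_45 and minimizes the same loss over the coefficients, typically via the linear programming formulation given above.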
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "F_{Y}(y)=P(Y\\leq y)" }, { "math_id": 2, "text": "\\tau" }, { "math_id": 3, "text": "q_{Y}(\\tau)=F_{Y}^{-1}(\\tau)=\\inf\\left\\{ y:F_{Y}(y)\\geq\\tau\\right\\}" }, { "math_id": 4, "text": "\\tau\\in(0,1)." }, { "math_id": 5, "text": "\\rho_{\\tau}(m)=m(\\tau-\\mathbb{I}_{(m<0)})" }, { "math_id": 6, "text": "\\mathbb{I}" }, { "math_id": 7, "text": "Y-u" }, { "math_id": 8, "text": "u" }, { "math_id": 9, "text": "q_{Y}(\\tau)=\\underset{u}{\\mbox{arg min}}E(\\rho_{\\tau}(Y-u))=\\underset{u}{\\mbox{arg min}}\\biggl\\{(\\tau-1)\\int_{-\\infty}^{u}(y-u)dF_{Y}(y)+\\tau\\int_{u}^{\\infty}(y-u)dF_{Y}(y)\\biggr\\}." }, { "math_id": 10, "text": "q_{\\tau}" }, { "math_id": 11, "text": "0=(1-\\tau)\\int_{-\\infty}^{q_{\\tau}}dF_{Y}(y)-\\tau\\int_{q_{\\tau}}^{\\infty}dF_{Y}(y)." }, { "math_id": 12, "text": "0=F_{Y}(q_{\\tau})-\\tau," }, { "math_id": 13, "text": "F_{Y}(q_{\\tau})=\\tau." }, { "math_id": 14, "text": "y_i = i" }, { "math_id": 15, "text": "i = 1,2,\\dots,9" }, { "math_id": 16, "text": "\\tau=0.5" }, { "math_id": 17, "text": "L(u)=E(\\rho_{\\tau}(Y-u))=\\frac{(\\tau-1)}{9}\\sum_{y_{i}<u}" }, { "math_id": 18, "text": "(y_{i}-u)" }, { "math_id": 19, "text": "+\\frac{\\tau}{9}\\sum_{y_{i}\\geq u}" }, { "math_id": 20, "text": "=\\frac{0.5}{9}\\Bigl(" }, { "math_id": 21, "text": "-" }, { "math_id": 22, "text": "\\sum_{y_{i}<u}" }, { "math_id": 23, "text": "+\\sum_{y_{i}\\geq u}" }, { "math_id": 24, "text": "\\Bigr) ." }, { "math_id": 25, "text": "{0.5/9}" }, { "math_id": 26, "text": "L(3) \\propto\\sum_{i=1}^{2}" }, { "math_id": 27, "text": "-(i-3)" }, { "math_id": 28, "text": "+\\sum_{i=3}^{9}" }, { "math_id": 29, "text": "(i-3)" }, { "math_id": 30, "text": " =[(2+1)+(0+1+2+...+6)] =24." }, { "math_id": 31, "text": "(3)-(6)=-3" }, { "math_id": 32, "text": "L(5) \\propto \\sum_{i=1}^{4}i+\\sum_{i=0}^{4}i=20," }, { "math_id": 33, "text": "L(q)=-0.5\\int_{-\\infty}^{q}(y-q)dF_{Y}(y)+0.5\\int_{q}^{\\infty}(y-q)dF_{Y}(y) ." }, { "math_id": 34, "text": "\\int_{-\\infty}^{q}1dF_{Y}(y)-\\int_{q}^{\\infty}1dF_{Y}(y) ." }, { "math_id": 35, "text": "F_{Y}(q)" }, { "math_id": 36, "text": "1-F_{Y}(q)" }, { "math_id": 37, "text": "F_{Y}(q)<0.5" }, { "math_id": 38, "text": "100\\tau" }, { "math_id": 39, "text": "\\hat{q}_{\\tau}=\\underset{q\\in \\mathbb{R}}{\\mbox{arg min}}\\sum_{i=1}^{n}\\rho_{\\tau}(y_{i}-q) ," }, { "math_id": 40, "text": "=\\underset{q\\in \\mathbb{R}}{\\mbox{arg min}} \\left[(\\tau-1)\\sum_{y_{i}<q}(y_{i}-q)+\\tau\\sum_{y_{i}\\geq q}(y_{i}-q) \\right]" }, { "math_id": 41, "text": "\\rho_{\\tau}" }, { "math_id": 42, "text": "X" }, { "math_id": 43, "text": "Q_{Y|X}(\\tau)=\\inf\\left\\{ y:F_{Y|X}(y)\\geq\\tau\\right\\}" }, { "math_id": 44, "text": "Q" }, { "math_id": 45, "text": " Q_{Y|X}(\\tau)=X\\beta_{\\tau}" }, { "math_id": 46, "text": "\\beta_{\\tau}" }, { "math_id": 47, "text": "\\beta_{\\tau}=\\underset{\\beta\\in \\mathbb{R}^{k}}{\\mbox{arg min}}E(\\rho_{\\tau}(Y-X\\beta))." }, { "math_id": 48, "text": "\\beta" }, { "math_id": 49, "text": "\\hat{\\beta_{\\tau}}=\\underset{\\beta\\in \\mathbb{R}^{k}}{\\mbox{arg min}}\\sum_{i=1}^{n}(\\rho_{\\tau}(Y_{i}-X_{i}\\beta)) ." 
}, { "math_id": 50, "text": "\\tau = 0.5" }, { "math_id": 51, "text": "\\rho_\\tau" }, { "math_id": 52, "text": "\\underset{\\beta,u^{+},u^{-}\\in \\mathbb{R}^{k}\\times \\mathbb{R}_{+}^{2n}}{\\min}\\left\\{ \\tau1_{n}^{'}u^{+}+(1-\\tau)1_{n}^{'}u^{-}|X\\beta+u^{+}-u^{-}=Y\\right\\} ," }, { "math_id": 53, "text": "u_{j}^{+}=\\max(u_{j},0)" }, { "math_id": 54, "text": "u_{j}^{-}=-\\min(u_{j},0)." }, { "math_id": 55, "text": "\\tau\\in(0,1)" }, { "math_id": 56, "text": "\\hat{\\beta}_{\\tau}" }, { "math_id": 57, "text": "\\sqrt{n}(\\hat{\\beta}_{\\tau}-\\beta_{\\tau})\\overset{d}{\\rightarrow}N(0,\\tau(1-\\tau)D^{-1}\\Omega_{x}D^{-1})," }, { "math_id": 58, "text": "D=E(f_{Y}(X\\beta)XX^{\\prime})" }, { "math_id": 59, "text": "\\Omega_{x}=E(X^{\\prime} X) ." }, { "math_id": 60, "text": "a>0" }, { "math_id": 61, "text": "\\tau\\in[0,1]" }, { "math_id": 62, "text": "\\hat{\\beta}(\\tau;aY,X)=a\\hat{\\beta}(\\tau;Y,X)," }, { "math_id": 63, "text": "\\hat{\\beta}(\\tau;-aY,X)=-a\\hat{\\beta}(1-\\tau;Y,X)." }, { "math_id": 64, "text": "\\gamma\\in R^{k}" }, { "math_id": 65, "text": "\\hat{\\beta}(\\tau;Y+X\\gamma,X)=\\hat{\\beta}(\\tau;Y,X)+\\gamma ." }, { "math_id": 66, "text": "A" }, { "math_id": 67, "text": "p\\times p" }, { "math_id": 68, "text": "\\tau\\in[0,1] " }, { "math_id": 69, "text": "\\hat{\\beta}(\\tau;Y,XA)=A^{-1}\\hat{\\beta}(\\tau;Y,X) ." }, { "math_id": 70, "text": "h" }, { "math_id": 71, "text": "\\mathbb{R}" }, { "math_id": 72, "text": "h(Q_{Y|X}(\\tau))\\equiv Q_{h(Y)|X}(\\tau)." }, { "math_id": 73, "text": "W=\\exp(Y)" }, { "math_id": 74, "text": "Q_{Y|X}(\\tau)=X\\beta_{\\tau}" }, { "math_id": 75, "text": "Q_{W|X}(\\tau)=\\exp(X\\beta_{\\tau})" }, { "math_id": 76, "text": "\\operatorname{E} (\\ln(Y))\\neq \\ln(\\operatorname{E}(Y))." }, { "math_id": 77, "text": " Q_{Y|X}(\\tau)=f(X,\\tau)" }, { "math_id": 78, "text": " f(\\cdot,\\tau)" }, { "math_id": 79, "text": " f(X,\\tau)" }, { "math_id": 80, "text": " \\beta_{\\tau}" }, { "math_id": 81, "text": " \\nabla f(X,\\tau)" }, { "math_id": 82, "text": " H_0: \\nabla f(x,\\tau)=0" }, { "math_id": 83, "text": " x" }, { "math_id": 84, "text": " H_0: \\beta_\\tau=0" }, { "math_id": 85, "text": "\\hat{\\beta_{\\tau}}" }, { "math_id": 86, "text": "R^2(\\tau)=1-\\frac{\\hat{V}_\\tau}{\\tilde{V}_\\tau}," }, { "math_id": 87, "text": "\\hat{V}_\\tau" }, { "math_id": 88, "text": "\\tilde{V}_\\tau" }, { "math_id": 89, "text": "Y^{c}=\\max(0,Y)" }, { "math_id": 90, "text": "Q_{Y|X}=X\\beta_{\\tau}" }, { "math_id": 91, "text": "Q_{Y^{c}|X}(\\tau)=\\max(0,X\\beta_{\\tau})" } ]
https://en.wikipedia.org/wiki?curid=14692673
1469331
AU Microscopii
Star in the constellation Microscopium AU Microscopii (AU Mic) is a young red dwarf star located about 8 times as far away as the closest star after the Sun. The apparent visual magnitude of AU Microscopii is 8.73, which is too dim to be seen with the naked eye. It was given this designation because it is in the southern constellation Microscopium and is a variable star. Like β Pictoris, AU Microscopii has a circumstellar disk of dust known as a debris disk and at least two exoplanets, with the presence of an additional two planets being likely. Stellar properties. AU Mic is a young star at only 22 million years old; less than 1% of the age of the Sun. With a stellar classification of M1 Ve, it is a red dwarf star with a physical radius of 75% that of the Sun. Despite being half the Sun's mass, it is radiating only 9% as much luminosity as the Sun. This energy is being emitted from the star's outer atmosphere at an effective temperature of 3,700 K, giving it the cool orange-red hued glow of an M-type star. AU Microscopii is a member of the β Pictoris moving group. AU Microscopii may be gravitationally bound to the binary star system AT Microscopii. AU Microscopii has been observed in every part of the electromagnetic spectrum from radio to X-ray and is known to undergo flaring activity at all these wavelengths. Its flaring behaviour was first identified in 1973. Underlying these random outbreaks is a nearly sinusoidal variation in its brightness with a period of 4.865 days. The amplitude of this variation changes slowly with time. The V band brightness variation was approximately 0.3 magnitudes in 1971; by 1980 it was merely 0.1 magnitudes. Planetary system. AU Microscopii's debris disk has an asymmetric structure and an inner gap or hole cleared of debris, which has led a number of astronomers to search for planets orbiting AU Microscopii. By 2007, no searches had led to any detections of planets. However, in 2020 the discovery of a Neptune-sized planet was announced based on transit observations by TESS. Its orbit is well aligned with the rotation axis of the parent star, with the misalignment being equal to 5°. Since 2018, a second planet, AU Microscopii c, had been suspected to exist. It was confirmed in December 2020, after additional transit events were documented by the TESS observatory. A third planet in the system had been suspected since 2022 based on transit-timing variations, and was "validated" in 2023, although several possible orbital periods of planet d cannot be ruled out yet. This planet has a mass comparable to that of Earth. Radial velocity observations have also found evidence for a fourth, outer planet as of 2023. Observations of the AU Microscopii system with the James Webb Space Telescope were unable to confirm the presence of previously unknown companions.
Debris disk. All-sky observations with the Infrared Astronomy Satellite revealed faint infrared emission from AU Microscopii. This emission is due to a circumstellar disk of dust which was first resolved at optical wavelengths in 2003 by Paul Kalas and collaborators using the University of Hawaii 2.2-m telescope on Mauna Kea, Hawaii. This large debris disk is viewed nearly edge-on from Earth, inclined at nearly 90 degrees, and measures at least 200 AU in radius. At these large distances from the star, the lifetime of dust in the disk exceeds the age of AU Microscopii. The disk has a gas to dust mass ratio of no more than 6:1, much lower than the usually assumed primordial value of 100:1. The debris disk is therefore referred to as "gas-poor", as the primordial gas within the circumstellar system has been mostly depleted. The total amount of dust visible in the disk is estimated to be at least a lunar mass, while the larger planetesimals from which the dust is produced are inferred to have at least six lunar masses. The spectral energy distribution of AU Microscopii's debris disk at submillimetre wavelengths indicates the presence of an inner hole in the disk extending to 17 AU, while scattered light images estimate the inner hole to be 12 AU in radius. Combining the spectral energy distribution with the surface brightness profile yields a smaller estimate of the radius of the inner hole, 1 - 10 AU. The inner part of the disk is asymmetric and shows structure in the inner 40 AU. The inner structure has been compared with that expected to be seen if the disk is influenced by larger bodies or has undergone recent planet formation. The surface brightness (brightness per area) of the disk in the near infrared formula_0 as a function of projected distance formula_1 from the star follows a characteristic shape. The inner formula_2 of the disk appears approximately constant in density, and its brightness is more-or-less flat. Around formula_3 the density and surface brightness begin to decrease: first they decrease slowly, in proportion to distance as formula_4; then outside formula_5, they drop much more steeply, as formula_6. This "broken power-law" shape is similar to the shape of the profile of β Pic's disk. In October 2015 it was reported that astronomers using the Very Large Telescope (VLT) had detected very unusual outward-moving features in the disk. By comparing the VLT images with those taken by the Hubble Space Telescope in 2010 and 2011 it was found that the wave-like structures are moving away from the star at speeds of up to 10 kilometers per second (22,000 miles per hour). The waves farther away from the star seem to be moving faster than those close to it, and at least three of the features are moving fast enough to escape the gravitational pull of the star. Follow-up observations with the SPHERE instrument on the Very Large Telescope were able to confirm the presence of the fast-moving features, and James Webb Space Telescope observations found similar features within the disk in two NIRCam filters; however, these features have not been detected in the radio with Atacama Large Millimeter Array observations. These fast-moving features have been described as "dust avalanches", where dust particles catastrophically collide into planetesimals within the disk. Methods of observation. 
AU Mic's disk has been observed at a variety of different wavelengths, each giving astronomers different types of information about the system. The light from the disk observed at optical wavelengths is stellar light that has reflected (scattered) off dust particles into Earth's line of sight. Observations at these wavelengths utilize a coronagraphic spot to block the bright light coming directly from the star. Such observations provide high-resolution images of the disk. Because light having a wavelength longer than the size of a dust grain is scattered only poorly, comparing images at different wavelengths (visible and near-infrared, for example) gives information about the sizes of the dust grains in the disk. Optical observations have been made with the Hubble Space Telescope and Keck Telescopes. The system has also been observed at infrared and sub-millimeter wavelengths with the James Clerk Maxwell Telescope, Spitzer Space Telescope, and the James Webb Space Telescope. This light is emitted directly by dust grains as a result of their internal heat (modified blackbody radiation). The disk cannot be resolved at these wavelengths, so such observations are measurements of the amount of light coming from the entire system. Observations at increasingly longer wavelengths give information about dust particles of larger sizes and at larger distances from the star. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\scriptstyle I" }, { "math_id": 1, "text": "\\scriptstyle r" }, { "math_id": 2, "text": "\\scriptstyle r\\,<\\,15 AU" }, { "math_id": 3, "text": "\\scriptstyle r\\, \\approx\\, 15 AU" }, { "math_id": 4, "text": "\\scriptstyle I\\, \\propto \\, r^{-1.8}" }, { "math_id": 5, "text": "\\scriptstyle r\\, \\approx\\, 43 AU" }, { "math_id": 6, "text": "\\scriptstyle I\\, \\propto \\, r^{-4.7}" } ]
https://en.wikipedia.org/wiki?curid=1469331
1469458
Abnormal return
In finance, an abnormal return is the difference between the actual return of a security and the expected return. Abnormal returns are sometimes triggered by "events." Events can include mergers, dividend announcements, company earnings announcements, interest rate increases, lawsuits, etc., all of which can contribute to an abnormal return. Events in finance can typically be classified as information or occurrences that have not already been priced by the market. Stock market. In stock market trading, abnormal returns are the differences between a single stock or portfolio's performance and the expected return over a set period of time. Usually a broad index, such as the S&amp;P 500 or a national index like the Nikkei 225, is used as a benchmark to determine the expected return. For example, if a stock increased by 5% because of some news that affected the stock price, but the average market only increased by 3% and the stock has a beta of 1, then the abnormal return was 2% (5% - 3% = 2%). If the market average performs better (after adjusting for beta) than the individual stock, then the abnormal return will be negative. formula_0 Calculation. The calculation formula for the abnormal returns is as follows: formula_1 where: ARit - abnormal return for firm i on day t Rit - actual return for firm i on day t E(Rit) – expected return for firm i on day t A common practice is to standardise the abnormal returns with the use of the following formula: formula_2 where: SARit - standardised abnormal returns SDit – standard deviation of the abnormal returns The SDit is calculated with the use of the following formula: formula_3 where: Si2 – the residual variance for firm i, Rmt – the return on the stock market index on day t, Rm – the average return from the market portfolio in the estimation period, T – the number of days in the estimation period. Cumulative abnormal return. Cumulative abnormal return, or CAR, is the sum of all abnormal returns. Cumulative Abnormal Returns are usually calculated over small windows, often only days. This is because evidence has shown that compounding daily abnormal returns can create bias in the results. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
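A small Python sketch of the abnormal-return and cumulative-abnormal-return calculations defined above; the numbers are made up for illustration, and the expected returns are assumed to come from whatever benchmark model the analyst has chosen (here simply supplied as a list).

```python
# Illustrative only: actual and model-based expected daily returns for one
# firm over a three-day event window.
actual_returns   = [0.05, -0.01, 0.02]   # firm's actual returns R_it
expected_returns = [0.03,  0.00, 0.01]   # expected returns E(R_it) from a benchmark

# Abnormal return per day: AR_it = R_it - E(R_it)
abnormal_returns = [round(a - e, 4) for a, e in zip(actual_returns, expected_returns)]

# Cumulative abnormal return over the window: CAR = sum of the AR_it
car = sum(abnormal_returns)

print(abnormal_returns)  # [0.02, -0.01, 0.01]
print(round(car, 4))     # 0.02
```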
[ { "math_id": 0, "text": "\\textrm{Abnormal}\\ \\textrm{Return} = \\textrm{Actual}\\ \\textrm{Return} - \\textrm{Expected} \\ \\textrm{Return}" }, { "math_id": 1, "text": "AR_{it}=R_{it}-E(R_{it})" }, { "math_id": 2, "text": "SAR_{it}=AR_{it}/SD_{it}" }, { "math_id": 3, "text": "SD_{it}=[S_i^2*(1+\\frac{1}{T}*\\frac{(R_{mt}-R_m)^2}{\\textstyle \\sum_{t=1}^T \\displaystyle(R_{mt}-R_m)^2})]^{0,5}" } ]
https://en.wikipedia.org/wiki?curid=1469458
1469552
Affine pricing
Economic model In economics, affine pricing is a situation where buying more than zero of a good gains a fixed benefit or cost, and each purchase after that gains a per-unit benefit or cost. Calculation. Denoting by "T" the total price paid, by "q" the quantity purchased in units, by "p" the constant price per unit, and by "k" the fixed cost, the affine price is then calculated by formula_0. In mathematical language, the price is an affine function (sometimes also called a linear function) of the quantity bought. An example would be a cell phone contract where a base price is paid each month with a per-minute price for calls. Sliding-scale price contracts achieve a similar effect, although the terms are stated differently. The price decreases with volume produced, achieving the same financial transfer over time, but the transaction is always based on units sold, with the fixed cost amortized into the price of each unit. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
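A minimal Python illustration of the affine price formula_0, using hypothetical numbers for the cell phone example (a $20 monthly base fee plus $0.25 per minute, neither of which comes from the article).

```python
def affine_price(q, p, k):
    """Total price T = p*q + k: per-unit price p, quantity q, fixed cost k."""
    return p * q + k

# Hypothetical cell phone plan: $20 monthly base fee plus $0.25 per minute.
print(affine_price(q=100, p=0.25, k=20.0))  # 45.0
```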
[ { "math_id": 0, "text": "T=p*q + k" } ]
https://en.wikipedia.org/wiki?curid=1469552
14695652
Ceramography
Preparation and study of ceramics with optical instruments Ceramography is the art and science of preparation, examination and evaluation of ceramic microstructures. Ceramography can be thought of as the metallography of ceramics. The microstructure is the structure level of approximately 0.1 to 100 μm, between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks and hardness microindentations. Most bulk mechanical, optical, thermal, electrical and magnetic properties are significantly affected by the microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the microstructure. Ceramography is part of the broader field of materialography, which includes all the microscopic techniques of material analysis, such as metallography, petrography and plastography. Ceramography is usually reserved for high-performance ceramics for industrial applications, such as 85–99.9% alumina (Al2O3) in Fig. 1, zirconia (ZrO2), silicon carbide (SiC), silicon nitride (Si3N4), and ceramic-matrix composites. It is seldom used on whiteware ceramics such as sanitaryware, wall tiles and dishware. History. Ceramography evolved along with other branches of materialography and ceramic engineering. Alois de Widmanstätten of Austria etched a meteorite in 1808 to reveal proeutectoid ferrite bands that grew on prior austenite grain boundaries. Geologist Henry Clifton Sorby, the "father of metallography", applied petrographic techniques to the steel industry in the 1860s in Sheffield, England. French geologist Auguste Michel-Lévy devised a chart that correlated the optical properties of minerals to their transmitted color and thickness in the 1880s. Swedish metallurgist J.A. Brinell invented the first quantitative hardness scale in 1900. Smith and Sandland developed the first microindentation hardness test at Vickers Ltd. in London in 1922. Swiss-born microscopist A.I. Buehler started the first metallographic equipment manufacturer near Chicago in 1936. Frederick Knoop and colleagues at the National Bureau of Standards developed a less-penetrating (than Vickers) microindentation test in 1939. Struers A/S of Copenhagen introduced the electrolytic polisher to metallography in 1943. George Kehl of Columbia University wrote a book that was considered the bible of materialography until the 1980s. Kehl co-founded a group within the Atomic Energy Commission that became the International Metallographic Society in 1967. Preparation of ceramographic specimens. The preparation of ceramic specimens for microstructural analysis consists of five broad steps: sawing, embedding, grinding, polishing and etching. The tools and consumables for ceramographic preparation are available worldwide from metallography equipment vendors and laboratory supply companies. Sawing. Most ceramics are extremely hard and must be wet-sawed with a circular blade embedded with diamond particles. A metallography or lapidary saw equipped with a low-density diamond blade is usually suitable. The blade must be cooled by a continuous liquid spray. Embedding. To facilitate further preparation, the sawed specimen is usually embedded (or mounted or encapsulated) in a plastic disc, 25, 32 or 38 mm in diameter. A thermosetting solid resin, activated by heat and compression, e.g. mineral-filled epoxy, is best for most applications. 
A castable (liquid) resin such as unfilled epoxy, acrylic or polyester may be used for porous refractory ceramics or microelectronic devices. The castable resins are also available with fluorescent dyes that aid in fluorescence microscopy. The left and right specimens in Fig. 3 were embedded in mineral-filled epoxy. The center refractory in Fig. 3 was embedded in castable, transparent acrylic. Grinding. Grinding is abrasion of the surface of interest by abrasive particles, usually diamond, that are bonded to paper or a metal disc. Grinding erases saw marks, coarsely smooths the surface, and removes stock to a desired depth. A typical grinding sequence for ceramics is one minute on a 240-grit metal-bonded diamond wheel rotating at 240 rpm and lubricated by flowing water, followed by a similar treatment on a 400-grit wheel. The specimen is washed in an ultrasonic bath after each step. Polishing. Polishing is abrasion by free abrasives that are suspended in a lubricant and can roll or slide between the specimen and paper. Polishing erases grinding marks and smooths the specimen to a mirror-like finish. Polishing on a bare metallic platen is called lapping. A typical polishing sequence for ceramics is 5–10 minutes each on 15-, 6- and 1-μm diamond paste or slurry on napless paper rotating at 240 rpm. The specimen is again washed in an ultrasonic bath after each step. The three sets of specimens in Fig. 3 have been sawed, embedded, ground and polished. Etching. Etching reveals and delineates grain boundaries and other microstructural features that are not apparent on the as-polished surface. The two most common types of etching in ceramography are selective chemical corrosion, and a thermal treatment that causes relief. As an example, alumina can be chemically etched by immersion in boiling concentrated phosphoric acid for 30–60 s, or thermally etched in air in a furnace for 20–40 min. The plastic encapsulation must be removed before thermal etching. The alumina in Fig. 1 was thermally etched. Alternatively, non-cubic ceramics can be prepared as thin sections, also known as petrography, for examination by polarized transmitted light microscopy. In this technique, the specimen is sawed to ~1 mm thick, glued to a microscope slide, and ground or sawed (e.g., by microtome) to a thickness ("x") approaching 30 μm. A cover slip is glued onto the exposed surface. The adhesives, such as epoxy or Canada balsam resin, must have approximately the same refractive index (η ≈ 1.54) as glass. Most ceramics have a very small absorption coefficient (α ≈ 0.5 cm−1 for alumina in Fig. 2) in the Beer–Lambert law below, and can be viewed in transmitted light. Cubic ceramics, e.g. yttria-stabilized zirconia and spinel, have the same refractive index in all crystallographic directions and appear, therefore, black when the microscope's polarizer is 90° out of phase with its analyzer. formula_0 (Beer–Lambert eqn) Ceramographic specimens are electrical insulators in most cases, and must be coated with a conductive ~10-nm layer of metal or carbon for electron microscopy, after polishing and etching. Gold or Au-Pd alloy from a sputter coater or evaporative coater also improves the reflection of visible light from the polished surface under a microscope, by the Fresnel formula below. Bare alumina (η ≈ 1.77, "k" ≈ 10−6) has a negligible extinction coefficient and reflects only 8% of the incident light from the microscope, as in Fig. 1. 
Gold-coated ("η" ≈ 0.82, "k" ≈ 1.59 @ λ = 500 nm) alumina reflects 44% in air, 39% in immersion oil. formula_1 (Fresnel eqn).. Ceramographic analysis. Ceramic microstructures are most often analyzed by reflected visible-light microscopy in brightfield. Darkfield is used in limited circumstances, e.g., to reveal cracks. Polarized transmitted light is used with thin sections, where the contrast between grains comes from birefringence. Very fine microstructures may require the higher magnification and resolution of a scanning electron microscope (SEM) or confocal laser scanning microscope (CLSM). The cathodoluminescence microscope (CLM) is useful for distinguishing phases of refractories. The transmission electron microscope (TEM) and scanning acoustic microscope (SAM) have specialty applications in ceramography. Ceramography is often done qualitatively, for comparison of the microstructure of a component to a standard for quality control or failure analysis purposes. Three common quantitative analyses of microstructures are grain size, second-phase content and porosity. Microstructures are measured by the principles of stereology, in which three-dimensional objects are evaluated in 2-D by projections or cross-sections. Microstructures exhibiting heterogeneous grain sizes, with certain grains growing very large, occur in diverse ceramic systems and this phenomenon is known as abnormal grain growth or AGG. The occurrence of AGG has consequences, positive or negative, on mechanical and chemical properties of ceramics and its identification is often the goal of ceramographic analysis. Grain size can be measured by the line-fraction or area-fraction methods of ASTM E112. In the line-fraction methods, a statistical grain size is calculated from the number of grains or grain boundaries intersecting a line of known length or circle of known circumference. In the area-fraction method, the grain size is calculated from the number of grains inside a known area. In each case, the measurement is affected by secondary phases, porosity, preferred orientation, exponential distribution of sizes, and non-equiaxed grains. Image analysis can measure the shape factors of individual grains by ASTM E1382. Second-phase content and porosity are measured the same way in a microstructure, such as ASTM E562. Procedure E562 is a point-fraction method based on the stereological principle of point fraction = volume fraction, i.e., "P"p = "V"v. Second-phase content in ceramics, such as carbide whiskers in an oxide matrix, is usually expressed as a mass fraction. Volume fractions can be converted to mass fractions if the density of each phase is known. Image analysis can measure porosity, pore-size distribution and volume fractions of secondary phases by ASTM E1245. Porosity measurements do not require etching. Multi-phase microstructures do not require etching if the contrast between phases is adequate, as is usually the case. Grain size, porosity and second-phase content have all been correlated with ceramic properties such as mechanical strength σ by the Hall–Petch equation. Hardness, toughness, dielectric constant and many other properties are microstructure-dependent. Microindentation hardness and toughness. The hardness of a material can be measured in many ways. The Knoop hardness test, a method of microindentation hardness, is the most reproducible for dense ceramics. The Vickers hardness test and superficial Rockwell scales (e.g., 45N) can also be used, but tend to cause more surface damage than Knoop. 
The Brinell test is suitable for ductile metals, but not ceramics. In the Knoop test, a diamond indenter in the shape of an elongated pyramid is forced into a polished (but not etched) surface under a predetermined load, typically 500 or 1000 g. The load is held for some amount of time, say 10 s, and the indenter is retracted. The indention long diagonal ("d", μm, in Fig. 4) is measured under a microscope, and the Knoop hardness (HK) is calculated from the load (P, g) and the square of the diagonal length in the equations below. The constants account for the projected area of the indenter and unit conversion factors. Most oxide ceramics have a Knoop hardness in the range of 1000–1500 kgf/mm2 (10 – 15 GPa), and many carbides are over 2000 (20 GPa). The method is specified in ASTM C849, C1326 &amp; E384. Microindentation hardness is also called simply microhardness. The hardness of very small particles and thin films of ceramics, on the order of 100 nm, can be measured by nanoindentation methods that use a Berkovich indenter. formula_2 (kgf/mm2) and formula_3 (GPa) The toughness of ceramics can be determined from a Vickers test under a load of 10 – 20 kg. Toughness is the ability of a material to resist crack propagation. Several calculations have been formulated from the load (P), elastic modulus (E), microindentation hardness (H), crack length ("c" in Fig. 5) and flexural strength (σ). Modulus of rupture (MOR) bars with a rectangular cross-section are indented in three places on a polished surface. The bars are loaded in 4-point bending with the polished, indented surface in tension, until fracture. The fracture normally originates at one of the indentions. The crack lengths are measured under a microscope. The toughness of most ceramics is 2–4 MPa√m, but toughened zirconia is as much as 13, and cemented carbides are often over 20. The toughness-by-indention methods have been discredited recently and are being replaced by more rigorous methods that measure crack growth in a notched beam in bending. formula_4 (initial crack length) formula_5 (indention strength in bending) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
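A short Python sketch of the Knoop hardness equations given above, with the load in grams-force and the long diagonal in micrometres; the sample indentation values are hypothetical, chosen only to land in the range quoted for dense alumina.

```python
def knoop_hardness(load_gf, diagonal_um):
    """Knoop hardness from indentation load (gf) and long diagonal (um)."""
    hk_kgf_mm2 = 14229 * load_gf / diagonal_um ** 2   # HK in kgf/mm^2
    hk_gpa = 139.54 * load_gf / diagonal_um ** 2      # same indentation, in GPa
    return hk_kgf_mm2, hk_gpa

# Hypothetical alumina indentation: 1000 gf load, 100 um long diagonal.
hk, hk_gpa = knoop_hardness(1000, 100.0)
print(round(hk), round(hk_gpa, 1))   # 1423 14.0 -- within the 1000-1500 kgf/mm2 range quoted for oxide ceramics
```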
[ { "math_id": 0, "text": "I_t = I_0e^{-\\alpha x}" }, { "math_id": 1, "text": "R = \\frac{I_r}{I_i} = \\frac{(\\eta_1 - \\eta_2)^2 + k^2}{(\\eta_1 + \\eta_2)^2 + k^2}" }, { "math_id": 2, "text": "HK = 14229 \\frac{P}{d^2}" }, { "math_id": 3, "text": "HK = 139.54 \\frac{P}{d^2}" }, { "math_id": 4, "text": "K_{icl} = 0.016 \\sqrt{\\frac{E}{H}}\\frac{P}{(c_0)^{1.5}}" }, { "math_id": 5, "text": "K_{isb} = 0.59 \\left(\\frac{E}{H}\\right)^{1/8}[\\sigma (P^{1/3})]^{3/4}" } ]
https://en.wikipedia.org/wiki?curid=14695652
146983
Earth's magnetic field
Magnetic field that extends from the Earth's outer and inner core to where it meets the solar wind Earth's magnetic field, also known as the geomagnetic field, is the magnetic field that extends from Earth's interior out into space, where it interacts with the solar wind, a stream of charged particles emanating from the Sun. The magnetic field is generated by electric currents due to the motion of convection currents of a mixture of molten iron and nickel in Earth's outer core: these convection currents are caused by heat escaping from the core, a natural process called a geodynamo. The magnitude of Earth's magnetic field at its surface ranges from about 25 to 65 μT (0.25 to 0.65 G). As an approximation, it is represented by a field of a magnetic dipole currently tilted at an angle of about 11° with respect to Earth's rotational axis, as if there were an enormous bar magnet placed at that angle through the center of Earth. The North geomagnetic pole actually represents the South pole of Earth's magnetic field, and conversely the South geomagnetic pole corresponds to the north pole of Earth's magnetic field (because opposite magnetic poles attract, and the north end of a magnet, like a compass needle, points toward Earth's magnetically south pole, the North Magnetic Pole near Ellesmere Island, Nunavut, Canada). While the North and South magnetic poles are usually located near the geographic poles, they slowly and continuously move over geological time scales, but sufficiently slowly for ordinary compasses to remain useful for navigation. However, at irregular intervals averaging several hundred thousand years, Earth's field reverses, and the North and South Magnetic Poles abruptly switch places. These reversals of the geomagnetic poles leave a record in rocks that are of value to paleomagnetists in calculating geomagnetic fields in the past. Such information in turn is helpful in studying the motions of continents and ocean floors. The magnetosphere is the region above the ionosphere that is defined by the extent of Earth's magnetic field in space, or geospace. It extends several tens of thousands of kilometres into space, protecting Earth from the charged particles of the solar wind and cosmic rays that would otherwise strip away the upper atmosphere, including the ozone layer that protects Earth from harmful ultraviolet radiation. Significance. Earth's magnetic field deflects most of the solar wind, whose charged particles would otherwise strip away the ozone layer that protects the Earth from harmful ultraviolet radiation. One stripping mechanism is for gas to be caught in bubbles of the magnetic field, which are ripped off by solar winds. Calculations of the loss of carbon dioxide from the atmosphere of Mars, resulting from scavenging of ions by the solar wind, indicate that the dissipation of the magnetic field of Mars caused a near total loss of its atmosphere. The study of the past magnetic field of the Earth is known as paleomagnetism. The polarity of the Earth's magnetic field is recorded in igneous rocks, and reversals of the field are thus detectable as "stripes" centered on mid-ocean ridges where the sea floor is spreading, while the stability of the geomagnetic poles between reversals has allowed paleomagnetism to track the past motion of continents. Reversals also provide the basis for magnetostratigraphy, a way of dating rocks and sediments. The field also magnetizes the crust, and magnetic anomalies can be used to search for deposits of metal ores. Humans have used compasses for direction finding since the 11th century A.D. 
and for navigation since the 12th century. Although the magnetic declination does shift with time, this wandering is slow enough that a simple compass can remain useful for navigation. Using magnetoreception, various other organisms, ranging from some types of bacteria to pigeons, use the Earth's magnetic field for orientation and navigation. Characteristics. At any location, the Earth's magnetic field can be represented by a three-dimensional vector. A typical procedure for measuring its direction is to use a compass to determine the direction of magnetic North. Its angle relative to true North is the "declination" (D) or "variation". Facing magnetic North, the angle the field makes with the horizontal is the "inclination" (I) or "magnetic dip". The "intensity" (F) of the field is proportional to the force it exerts on a magnet. Another common representation is in X (North), Y (East) and Z (Down) coordinates. Intensity. The intensity of the field is often measured in gauss (G), but is generally reported in microteslas (μT), with 1 G = 100 μT. A nanotesla is also referred to as a gamma (γ). The Earth's field ranges between approximately 25 and 65 μT. By comparison, a strong refrigerator magnet has a field of about 100 G (10,000 μT). A map of intensity contours is called an "isodynamic chart". As the World Magnetic Model shows, the intensity tends to decrease from the poles to the equator. A minimum intensity occurs in the South Atlantic Anomaly over South America while there are maxima over northern Canada, Siberia, and the coast of Antarctica south of Australia. The intensity of the magnetic field is subject to change over time. A 2021 paleomagnetic study from the University of Liverpool contributed to a growing body of evidence that the Earth's magnetic field cycles with intensity every 200 million years. The lead author stated that "Our findings, when considered alongside the existing datasets, support the existence of an approximately 200-million-year-long cycle in the strength of the Earth's magnetic field related to deep Earth processes." Inclination. The inclination is given by an angle that can assume values between −90° (up) and 90° (down). In the northern hemisphere, the field points downwards. It is straight down at the North Magnetic Pole and rotates upwards as the latitude decreases until it is horizontal (0°) at the magnetic equator. It continues to rotate upwards until it is straight up at the South Magnetic Pole. Inclination can be measured with a dip circle. An "isoclinic chart" (map of inclination contours) for the Earth's magnetic field is shown below. Declination. Declination is positive for an eastward deviation of the field relative to true north. It can be estimated by comparing the magnetic north–south heading on a compass with the direction of a celestial pole. Maps typically include information on the declination as an angle or a small diagram showing the relationship between magnetic north and true north. Information on declination for a region can be represented by a chart with isogonic lines (contour lines with each line representing a fixed declination). Geographical variation. Components of the Earth's magnetic field at the surface from the World Magnetic Model for 2020. Dipolar approximation. Near the surface of the Earth, its magnetic field can be closely approximated by the field of a magnetic dipole positioned at the center of the Earth and tilted at an angle of about 11° with respect to the rotational axis of the Earth. 
The dipole is roughly equivalent to a powerful bar magnet, with its south pole pointing towards the geomagnetic North Pole. This may seem surprising, but the north pole of a magnet is so defined because, if allowed to rotate freely, it points roughly northward (in the geographic sense). Since the north pole of a magnet attracts the south poles of other magnets and repels the north poles, it must be attracted to the south pole of Earth's magnet. The dipolar field accounts for 80–90% of the field in most locations. Magnetic poles. Historically, the north and south poles of a magnet were first defined by the Earth's magnetic field, not vice versa, since one of the first uses for a magnet was as a compass needle. A magnet's North pole is defined as the pole that is attracted by the Earth's North Magnetic Pole when the magnet is suspended so it can turn freely. Since opposite poles attract, the North Magnetic Pole of the Earth is really the south pole of its magnetic field (the place where the field is directed downward into the Earth). The positions of the magnetic poles can be defined in at least two ways: locally or globally. The local definition is the point where the magnetic field is vertical. This can be determined by measuring the inclination. The inclination of the Earth's field is 90° (downwards) at the North Magnetic Pole and –90° (upwards) at the South Magnetic Pole. The two poles wander independently of each other and are not directly opposite each other on the globe. Movements of up to per year have been observed for the North Magnetic Pole. Over the last 180 years, the North Magnetic Pole has been migrating northwestward, from Cape Adelaide in the Boothia Peninsula in 1831 to from Resolute Bay in 2001. The "magnetic equator" is the line where the inclination is zero (the magnetic field is horizontal). The global definition of the Earth's field is based on a mathematical model. If a line is drawn through the center of the Earth, parallel to the moment of the best-fitting magnetic dipole, the two positions where it intersects the Earth's surface are called the North and South geomagnetic poles. If the Earth's magnetic field were perfectly dipolar, the geomagnetic poles and magnetic dip poles would coincide and compasses would point towards them. However, the Earth's field has a significant non-dipolar contribution, so the poles do not coincide and compasses do not generally point at either. Magnetosphere. Earth's magnetic field, predominantly dipolar at its surface, is distorted further out by the solar wind. This is a stream of charged particles leaving the Sun's corona and accelerating to a speed of 200 to 1000 kilometres per second. They carry with them a magnetic field, the interplanetary magnetic field (IMF). The solar wind exerts a pressure, and if it could reach Earth's atmosphere it would erode it. However, it is kept away by the pressure of the Earth's magnetic field. The magnetopause, the area where the pressures balance, is the boundary of the magnetosphere. Despite its name, the magnetosphere is asymmetric, with the sunward side being about 10 Earth radii out but the other side stretching out in a magnetotail that extends beyond 200 Earth radii. Sunward of the magnetopause is the bow shock, the area where the solar wind slows abruptly. Inside the magnetosphere is the plasmasphere, a donut-shaped region containing low-energy charged particles, or plasma. This region begins at a height of 60 km, extends up to 3 or 4 Earth radii, and includes the ionosphere. 
This region rotates with the Earth. There are also two concentric tire-shaped regions, called the Van Allen radiation belts, with high-energy ions (energies from 0.1 to 10 MeV). The inner belt is 1–2 Earth radii out while the outer belt is at 4–7 Earth radii. The plasmasphere and Van Allen belts have partial overlap, with the extent of overlap varying greatly with solar activity. As well as deflecting the solar wind, the Earth's magnetic field deflects cosmic rays, high-energy charged particles that are mostly from outside the Solar System. Many cosmic rays are kept out of the Solar System by the Sun's magnetosphere, or heliosphere. By contrast, astronauts on the Moon risk exposure to radiation. Anyone who had been on the Moon's surface during a particularly violent solar eruption in 2005 would have received a lethal dose. Some of the charged particles do get into the magnetosphere. These spiral around field lines, bouncing back and forth between the poles several times per second. In addition, positive ions slowly drift westward and negative ions drift eastward, giving rise to a ring current. This current reduces the magnetic field at the Earth's surface. Particles that penetrate the ionosphere and collide with the atoms there give rise to the lights of the aurorae while also emitting X-rays. The varying conditions in the magnetosphere, known as space weather, are largely driven by solar activity. If the solar wind is weak, the magnetosphere expands; while if it is strong, it compresses the magnetosphere and more of it gets in. Periods of particularly intense activity, called geomagnetic storms, can occur when a coronal mass ejection erupts above the Sun and sends a shock wave through the Solar System. Such a wave can take just two days to reach the Earth. Geomagnetic storms can cause a lot of disruption; the "Halloween" storm of 2003 damaged more than a third of NASA's satellites. The largest documented storm, the Carrington Event, occurred in 1859. It induced currents strong enough to disrupt telegraph lines, and aurorae were reported as far south as Hawaii. Time dependence. Short-term variations. The geomagnetic field changes on time scales from milliseconds to millions of years. Shorter time scales mostly arise from currents in the ionosphere (ionospheric dynamo region) and magnetosphere, and some changes can be traced to geomagnetic storms or daily variations in currents. Changes over time scales of a year or more mostly reflect changes in the Earth's interior, particularly the iron-rich core. Frequently, the Earth's magnetosphere is hit by solar flares causing geomagnetic storms, provoking displays of aurorae. The short-term instability of the magnetic field is measured with the K-index. Data from THEMIS show that the magnetic field, which interacts with the solar wind, is reduced when the magnetic orientation is aligned between Sun and Earth – opposite to the previous hypothesis. During forthcoming solar storms, this could result in blackouts and disruptions in artificial satellites. Secular variation. Changes in Earth's magnetic field on a time scale of a year or more are referred to as "secular variation". Over hundreds of years, magnetic declination is observed to vary over tens of degrees. The animation shows how global declinations have changed over the last few centuries. The direction and intensity of the dipole change over time. Over the last two centuries the dipole strength has been decreasing at a rate of about 6.3% per century. 
At this rate of decrease, the field would be negligible in about 1600 years. However, this strength is about average for the last 7 thousand years, and the current rate of change is not unusual. A prominent feature in the non-dipolar part of the secular variation is a "westward drift" at a rate of about 0.2° per year. This drift is not the same everywhere and has varied over time. The globally averaged drift has been westward since about 1400 AD but eastward between about 1000 AD and 1400 AD. Changes that predate magnetic observatories are recorded in archaeological and geological materials. Such changes are referred to as "paleomagnetic secular variation" or "paleosecular variation (PSV)". The records typically include long periods of small change with occasional large changes reflecting geomagnetic excursions and reversals. A 1995 study of lava flows on Steens Mountain, Oregon appeared to suggest the magnetic field once shifted at a rate of up to 6° per day at some time in Earth's history, a surprising result. However, in 2014 one of the original authors published a new study which found the results were actually due to the continuous thermal demagnetization of the lava, not to a shift in the magnetic field. In July 2020, scientists reported that analysis of simulations and a recent observational field model showed that maximum rates of directional change of Earth's magnetic field reached ~10° per year – almost 100 times faster than current changes and 10 times faster than previously thought. Magnetic field reversals. Although generally Earth's field is approximately dipolar, with an axis that is nearly aligned with the rotational axis, occasionally the North and South geomagnetic poles trade places. Evidence for these "geomagnetic reversals" can be found in basalts, sediment cores taken from the ocean floors, and seafloor magnetic anomalies. Reversals occur nearly randomly in time, with intervals between reversals ranging from less than 0.1 million years to as much as 50 million years. The most recent geomagnetic reversal, called the Brunhes–Matuyama reversal, occurred about 780,000 years ago. A related phenomenon, a geomagnetic "excursion", takes the dipole axis across the equator and then back to the original polarity. The Laschamp event is an example of an excursion, occurring during the last ice age (41,000 years ago). The past magnetic field is recorded mostly by strongly magnetic minerals, particularly iron oxides such as magnetite, that can carry a permanent magnetic moment. This remanent magnetization, or "remanence", can be acquired in more than one way. In lava flows, the direction of the field is "frozen" in small minerals as they cool, giving rise to a thermoremanent magnetization. In sediments, the orientation of magnetic particles acquires a slight bias towards the magnetic field as they are deposited on an ocean floor or lake bottom. This is called "detrital remanent magnetization". Thermoremanent magnetization is the main source of the magnetic anomalies around mid-ocean ridges. As the seafloor spreads, magma wells up from the mantle, cools to form new basaltic crust on both sides of the ridge, and is carried away from it by seafloor spreading. As it cools, it records the direction of the Earth's field. When the Earth's field reverses, new basalt records the reversed direction. The result is a series of stripes that are symmetric about the ridge. A ship towing a magnetometer on the surface of the ocean can detect these stripes and infer the age of the ocean floor below. 
This provides information on the rate at which seafloor has spread in the past. Radiometric dating of lava flows has been used to establish a "geomagnetic polarity time scale", part of which is shown in the image. This forms the basis of magnetostratigraphy, a geophysical correlation technique that can be used to date both sedimentary and volcanic sequences as well as the seafloor magnetic anomalies. Earliest appearance. Paleomagnetic studies of Paleoarchean lava in Australia and conglomerate in South Africa have concluded that the magnetic field has been present since at least about  million years ago. In 2024 researchers published evidence from Greenland for the existence of the magnetic field as early as 3,700 million years ago. Future. Starting in the late 1800s and throughout the 1900s and later, the overall geomagnetic field has become weaker; the present strong deterioration corresponds to a 10–15% decline and has accelerated since 2000; geomagnetic intensity has declined almost continuously from a maximum 35% above the modern value, from circa year 1 AD. The rate of decrease and the current strength are within the normal range of variation, as shown by the record of past magnetic fields recorded in rocks. The nature of Earth's magnetic field is one of heteroscedastic (seemingly random) fluctuation. An instantaneous measurement of it, or several measurements of it across the span of decades or centuries, are not sufficient to extrapolate an overall trend in the field strength. It has gone up and down in the past for unknown reasons. Also, noting the local intensity of the dipole field (or its fluctuation) is insufficient to characterize Earth's magnetic field as a whole, as it is not strictly a dipole field. The dipole component of Earth's field can diminish even while the total magnetic field remains the same or increases. The Earth's magnetic north pole is drifting from northern Canada towards Siberia with a presently accelerating rate— per year at the beginning of the 1900s, up to per year in 2003, and since then has only accelerated. Physical origin. Earth's core and the geodynamo. The Earth's magnetic field is believed to be generated by electric currents in the conductive iron alloys of its core, created by convection currents due to heat escaping from the core. The Earth and most of the planets in the Solar System, as well as the Sun and other stars, all generate magnetic fields through the motion of electrically conducting fluids. The Earth's field originates in its core. This is a region of iron alloys extending to about 3400 km (the radius of the Earth is 6370 km). It is divided into a solid inner core, with a radius of 1220 km, and a liquid outer core. The motion of the liquid in the outer core is driven by heat flow from the inner core, which is about , to the core-mantle boundary, which is about . The heat is generated by potential energy released by heavier materials sinking toward the core (planetary differentiation, the iron catastrophe) as well as decay of radioactive elements in the interior. The pattern of flow is organized by the rotation of the Earth and the presence of the solid inner core. The mechanism by which the Earth generates a magnetic field is known as a geodynamo. 
The magnetic field is generated by a feedback loop: current loops generate magnetic fields (Ampère's circuital law); a changing magnetic field generates an electric field (Faraday's law); and the electric and magnetic fields exert a force on the charges that are flowing in currents (the Lorentz force). These effects can be combined in a partial differential equation for the magnetic field called the "magnetic induction equation", formula_0 where u is the velocity of the fluid; B is the magnetic B-field; and η = 1/σμ is the magnetic diffusivity, which is inversely proportional to the product of the electrical conductivity σ and the permeability μ. The term ∂B/∂t is the time derivative of the field; ∇² is the Laplace operator and ∇× is the curl operator. The first term on the right hand side of the induction equation is a diffusion term. In a stationary fluid, the magnetic field declines and any concentrations of field spread out. If the Earth's dynamo shut off, the dipole part would disappear in a few tens of thousands of years. In a perfect conductor (formula_1), there would be no diffusion. By Lenz's law, any change in the magnetic field would be immediately opposed by currents, so the flux through a given volume of fluid could not change. As the fluid moved, the magnetic field would go with it. The theorem describing this effect is called the "frozen-in-field theorem". Even in a fluid with a finite conductivity, new field is generated by stretching field lines as the fluid moves in ways that deform it. This process could go on generating new field indefinitely, were it not that as the magnetic field increases in strength, it resists fluid motion. The motion of the fluid is sustained by convection, motion driven by buoyancy. The temperature increases towards the center of the Earth, and the higher temperature of the fluid lower down makes it buoyant. This buoyancy is enhanced by chemical separation: As the core cools, some of the molten iron solidifies and is plated to the inner core. In the process, lighter elements are left behind in the fluid, making it lighter. This is called "compositional convection". A Coriolis effect, caused by the overall planetary rotation, tends to organize the flow into rolls aligned along the north–south polar axis. A dynamo can amplify a magnetic field, but it needs a "seed" field to get it started. For the Earth, this could have been an external magnetic field. Early in its history the Sun went through a T-Tauri phase in which the solar wind would have had a magnetic field orders of magnitude larger than the present solar wind. However, much of the field may have been screened out by the Earth's mantle. An alternative source is currents in the core-mantle boundary driven by chemical reactions or variations in thermal or electric conductivity. Such effects may still provide a small bias that is part of the boundary conditions for the geodynamo. The average magnetic field in the Earth's outer core was calculated to be 25 gauss, 50 times stronger than the field at the surface. Numerical models. Simulating the geodynamo by computer requires numerically solving a set of nonlinear partial differential equations for the magnetohydrodynamics (MHD) of the Earth's interior. Simulation of the MHD equations is performed on a 3D grid of points and the fineness of the grid, which in part determines the realism of the solutions, is limited mainly by computer power. 
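A much more modest numerical illustration than the full MHD simulations just described is to keep only the diffusion term of the induction equation in one dimension, ∂B/∂t = η∂²B/∂x², and integrate it with an explicit finite-difference scheme: an initial concentration of field simply spreads out and decays, the behaviour described above for a stationary fluid. The grid size, diffusivity and initial profile below are arbitrary illustrative choices, not Earth values.

import numpy as np

eta = 1.0                     # arbitrary magnetic diffusivity (illustrative, not an Earth value)
n, length = 200, 1.0
dx = length / n
dt = 0.2 * dx**2 / eta        # time step chosen to keep the explicit scheme stable

x = np.linspace(0.0, length, n)
B = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial concentration of field

for _ in range(2000):
    laplacian = (np.roll(B, 1) - 2.0 * B + np.roll(B, -1)) / dx**2   # periodic boundaries
    B = B + dt * eta * laplacian

print(f"peak field after diffusion: {B.max():.3f} (started at 1.0)")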
For decades, theorists were confined to creating "kinematic dynamo" computer models in which the fluid motion is chosen in advance and the effect on the magnetic field calculated. Kinematic dynamo theory was mainly a matter of trying different flow geometries and testing whether such geometries could sustain a dynamo. The first "self-consistent" dynamo models, ones that determine both the fluid motions and the magnetic field, were developed by two groups in 1995, one in Japan and one in the United States. The latter received attention because it successfully reproduced some of the characteristics of the Earth's field, including geomagnetic reversals. Effect of ocean tides. The oceans contribute to Earth's magnetic field. Seawater is an electrical conductor, and therefore interacts with the magnetic field. As the tides cycle around the ocean basins, the ocean water essentially tries to pull the geomagnetic field lines along. Because the salty water is only slightly conductive, the interaction is relatively weak: the strongest component is from the regular lunar tide that happens about twice per day (M2). Other contributions come from ocean swell, eddies, and even tsunamis. The strength of the interaction depends also on the temperature of the ocean water. The entire heat stored in the ocean can now be inferred from observations of the Earth's magnetic field. Currents in the ionosphere and magnetosphere. Electric currents induced in the ionosphere generate magnetic fields (ionospheric dynamo region). Such a field is always generated near where the atmosphere is closest to the Sun, causing daily alterations that can deflect surface magnetic fields by as much as 1°. Typical daily variations of field strength are about 25 nT (one part in 2000), with variations over a few seconds of typically around 1 nT (one part in 50,000). Measurement and analysis. Detection. The Earth's magnetic field strength was measured by Carl Friedrich Gauss in 1832 and has been repeatedly measured since then, showing a relative decay of about 10% over the last 150 years. The Magsat satellite and later satellites have used 3-axis vector magnetometers to probe the 3-D structure of the Earth's magnetic field. The later Ørsted satellite allowed a comparison indicating a dynamic geodynamo in action that appears to be giving rise to an alternate pole under the Atlantic Ocean west of South Africa. Governments sometimes operate units that specialize in measurement of the Earth's magnetic field. These are geomagnetic observatories, typically part of a national Geological survey, for example, the British Geological Survey's Eskdalemuir Observatory. Such observatories can measure and forecast magnetic conditions such as magnetic storms that sometimes affect communications, electric power, and other human activities. The International Real-time Magnetic Observatory Network, with over 100 interlinked geomagnetic observatories around the world, has been recording the Earth's magnetic field since 1991. The military determines local geomagnetic field characteristics, in order to detect "anomalies" in the natural background that might be caused by a significant metallic object such as a submerged submarine. Typically, these magnetic anomaly detectors are flown in aircraft like the UK's Nimrod or towed as an instrument or an array of instruments from surface ships. Commercially, geophysical prospecting companies also use magnetic detectors to identify naturally occurring anomalies from ore bodies, such as the Kursk Magnetic Anomaly. 
Crustal magnetic anomalies. Magnetometers detect minute deviations in the Earth's magnetic field caused by iron artifacts, kilns, some types of stone structures, and even ditches and middens in archaeological geophysics. Using magnetic instruments adapted from airborne magnetic anomaly detectors developed during World War II to detect submarines, the magnetic variations across the ocean floor have been mapped. Basalt — the iron-rich, volcanic rock making up the ocean floor — contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. The distortion was recognized by Icelandic mariners as early as the late 18th century. More important, because the presence of magnetite gives the basalt measurable magnetic properties, these magnetic variations have provided another means to study the deep ocean floor. When newly formed rock cools, such magnetic materials record the Earth's magnetic field. Statistical models. Each measurement of the magnetic field is at a particular place and time. If an accurate estimate of the field at some other place and time is needed, the measurements must be converted to a model and the model used to make predictions. Spherical harmonics. The most common way of analyzing the global variations in the Earth's magnetic field is to fit the measurements to a set of spherical harmonics. This was first done by Carl Friedrich Gauss. Spherical harmonics are functions that oscillate over the surface of a sphere. They are the product of two functions, one that depends on latitude and one on longitude. The function of longitude is zero along zero or more great circles passing through the North and South Poles; the number of such "nodal lines" is the absolute value of the "order" m. The function of latitude is zero along zero or more latitude circles; this plus the order is equal to the "degree" ℓ. Each harmonic is equivalent to a particular arrangement of magnetic charges at the center of the Earth. A "monopole" is an isolated magnetic charge, which has never been observed. A "dipole" is equivalent to two opposing charges brought close together and a "quadrupole" to two dipoles brought together. A quadrupole field is shown in the lower figure on the right. Spherical harmonics can represent any scalar field (function of position) that satisfies certain properties. A magnetic field is a vector field, but if it is expressed in Cartesian components X, Y, Z, each component is the derivative of the same scalar function called the "magnetic potential". Analyses of the Earth's magnetic field use a modified version of the usual spherical harmonics that differ by a multiplicative factor. A least-squares fit to the magnetic field measurements gives the Earth's field as the sum of spherical harmonics, each multiplied by the best-fitting "Gauss coefficient" gmℓ or hmℓ. The lowest-degree Gauss coefficient, g00, gives the contribution of an isolated magnetic charge, so it is zero. The next three coefficients – g10, g11, and h11 – determine the direction and magnitude of the dipole contribution. The best fitting dipole is tilted at an angle of about 10° with respect to the rotational axis, as described earlier. Radial dependence. Spherical harmonic analysis can be used to distinguish internal from external sources if measurements are available at more than one height (for example, ground observatories and satellites). 
In that case, each term with coefficient gmℓ or hmℓ can be split into two terms: one that decreases with radius as 1/r^(ℓ+1) and one that "increases" with radius as r^ℓ. The increasing terms fit the external sources (currents in the ionosphere and magnetosphere). However, averaged over a few years the external contributions average to zero. The remaining terms predict that the potential of a dipole source (ℓ = 1) drops off as 1/r^2. The magnetic field, being a derivative of the potential, drops off as 1/r^3. Quadrupole terms drop off as 1/r^4, and higher order terms drop off increasingly rapidly with the radius. The radius of the outer core is about half of the radius of the Earth. If the field at the core-mantle boundary is fit to spherical harmonics, the dipole part is smaller by a factor of about 8 at the surface, the quadrupole part by a factor of 16, and so on. Thus, only the components with large wavelengths can be noticeable at the surface. From a variety of arguments, it is usually assumed that only terms up to degree 14 or less have their origin in the core. These have wavelengths of about or less. Smaller features are attributed to crustal anomalies. Global models. The International Association of Geomagnetism and Aeronomy maintains a standard global field model called the International Geomagnetic Reference Field (IGRF). It is updated every five years. The 11th-generation model, IGRF11, was developed using data from satellites (Ørsted, CHAMP and SAC-C) and a world network of geomagnetic observatories. The spherical harmonic expansion was truncated at degree 10, with 120 coefficients, until 2000. Subsequent models are truncated at degree 13 (195 coefficients). Another global field model, called the World Magnetic Model, is produced jointly by the United States National Centers for Environmental Information (formerly the National Geophysical Data Center) and the British Geological Survey. This model truncates at degree 12 (168 coefficients) with an approximate spatial resolution of 3,000 kilometers. It is the model used by the United States Department of Defense, the Ministry of Defence (United Kingdom), the United States Federal Aviation Administration (FAA), the North Atlantic Treaty Organization (NATO), and the International Hydrographic Organization as well as in many civilian navigation systems. The above models only take into account the "main field" at the core-mantle boundary. Although generally good enough for navigation, higher-accuracy use cases require smaller-scale magnetic anomalies and other variations to be considered. Some examples are (see geomag.us ref for more): For historical data about the main field, the IGRF may be used back to year 1900. A specialized GUFM1 model estimates back to year 1590 using ships' logs. Paleomagnetic research has produced models dating back to 10,000 BCE. Biomagnetism. Animals, including birds and turtles, can detect the Earth's magnetic field, and use the field to navigate during migration. Some researchers have found that cows and wild deer tend to align their bodies north–south while relaxing, but not when the animals are under high-voltage power lines, suggesting that magnetism is responsible. Other researchers reported in 2011 that they could not replicate those findings using different Google Earth images. Very weak electromagnetic fields disrupt the magnetic compass used by European robins and other songbirds, which use the Earth's magnetic field to navigate. 
Neither power lines nor cellphone signals are to blame for the electromagnetic field effect on the birds; instead, the culprits have frequencies between 2 kHz and 5 MHz. These include AM radio signals and ordinary electronic equipment that might be found in businesses or private homes. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{\\partial \\mathbf{B}}{\\partial t} = \\eta \\nabla^2 \\mathbf{B} + \\nabla \\times (\\mathbf{u} \\times \\mathbf{B}), " }, { "math_id": 1, "text": "\\sigma = \\infty\\;" } ]
https://en.wikipedia.org/wiki?curid=146983
14698685
Metal-induced gap states
In bulk semiconductor band structure calculations, it is assumed that the crystal lattice (which features a periodic potential due to the atomic structure) of the material is infinite. When the finite size of a crystal is taken into account, the wavefunctions of electrons are altered and states that are forbidden within the bulk semiconductor gap are allowed at the surface. Similarly, when a metal is deposited onto a semiconductor (by thermal evaporation, for example), the wavefunction of an electron in the semiconductor must match that of an electron in the metal at the interface. Since the Fermi levels of the two materials must match at the interface, there exist gap states that decay deeper into the semiconductor. Band-bending at the metal-semiconductor interface. As mentioned above, when a metal is deposited onto a semiconductor, even when the metal film is as small as a single atomic layer, the Fermi levels of the metal and semiconductor must match. This pins the Fermi level in the semiconductor to a position in the bulk gap. Shown to the right is a diagram of band-bending interfaces between two different metals (high and low work functions) and two different semiconductors (n-type and p-type). Volker Heine was one of the first to estimate the length of the tail end of metal electron states extending into the semiconductor's energy gap. He calculated the variation in surface state energy by matching wavefunctions of a free-electron metal to gapped states in an undoped semiconductor, showing that in most cases the position of the surface state energy is quite stable regardless of the metal used. Branching point. It is somewhat crude to suggest that the metal-induced gap states (MIGS) are tail ends of metal states that leak into the semiconductor. Since the mid-gap states do exist within some depth of the semiconductor, they must be a mixture (a Fourier series) of valence and conduction band states from the bulk. The resulting positions of these states, as calculated by C. Tejedor, F. Flores and E. Louis, and J. Tersoff, must be closer to either the valence or conduction band, thus acting as acceptor or donor dopants, respectively. The point that divides these two types of MIGS is called the branching point, E_B. Tersoff argued formula_0 and formula_1, where formula_2 is the spin-orbit splitting of formula_3 at the formula_4 point. formula_5 is the indirect conduction band minimum. Metal–semiconductor contact point barrier height. In order for the Fermi levels to match at the interface, there must be charge transfer between the metal and semiconductor. The amount of charge transfer was formulated by Linus Pauling and later revised to be: formula_6 where formula_7 and formula_8 are the electronegativities of the metal and semiconductor, respectively. The charge transfer produces a dipole at the interface and thus a potential barrier called the Schottky barrier height. In the same derivation of the branching point mentioned above, Tersoff derives the barrier height to be: formula_9 where formula_10 is a parameter adjustable for the specific metal, dependent mostly on its electronegativity, formula_7. Tersoff showed that the experimentally measured formula_11 fits his theoretical model for Au in contact with 10 common semiconductors, including Si, Ge, GaP, and GaAs. 
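Tersoff's expressions reduce to a few lines of arithmetic once the band edges, the spin-orbit splitting and the metal-dependent parameter are fixed. The sketch below is a hypothetical illustration only: the inputs are silicon-like placeholder values quoted from memory rather than tabulated data, energies are measured from the valence-band maximum, and the metal parameter δm is simply set to zero.

def branching_point(e_v, e_c_indirect, delta_so):
    """E_B = 0.5 * (E_V_bar + E_C_bar), with E_V_bar = E_V - delta_so / 3 (energies in eV)."""
    return 0.5 * ((e_v - delta_so / 3.0) + e_c_indirect)

def tersoff_barrier(e_v, e_c_indirect, delta_so, delta_m=0.0):
    """Barrier height 0.5 * (E_C_bar - E_V - delta_so / 3) + delta_m (energies in eV)."""
    return 0.5 * (e_c_indirect - e_v - delta_so / 3.0) + delta_m

# Placeholder, roughly silicon-like inputs (illustrative, not tabulated values)
e_v, e_c, delta_so = 0.0, 1.12, 0.044
print(f"branching point E_B   ~ {branching_point(e_v, e_c, delta_so):.2f} eV")
print(f"barrier height Phi_bh ~ {tersoff_barrier(e_v, e_c, delta_so):.2f} eV")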
Another derivation of the contact barrier height in terms of experimentally measurable parameters was worked out by Federico Garcia-Moliner and Fernando Flores, who considered the density of states and dipole contributions more rigorously. formula_12 where formula_13 is dependent on the charge densities of both materials, formula_14 density of surface states, formula_15 work function of metal, formula_16 sum of dipole contributions considering dipole corrections to the jellium model, formula_17 semiconductor gap, and formula_18 Ef – Ev in the semiconductor. Thus formula_19 can be calculated by theoretically deriving or experimentally measuring each parameter. Garcia-Moliner and Flores also discuss two limits: formula_20 (The Bardeen Limit), where the high density of interface states pins the Fermi level at that of the semiconductor regardless of formula_21; and formula_22 (The Schottky Limit), where formula_23 varies strongly with the characteristics of the metal, including the particular lattice structure as accounted for in formula_24. Applications. When a bias voltage formula_25 is applied across the interface of an n-type semiconductor and a metal, the Fermi level in the semiconductor is shifted with respect to the metal's and the band bending decreases. In effect, the capacitance across the depletion layer in the semiconductor is bias voltage dependent and goes as formula_26. This makes the metal/semiconductor junction useful in varactor devices used frequently in electronics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " E_B = \\frac{1}{2}[\\bar{E_V} + \\bar{E_C}] " }, { "math_id": 1, "text": " \\bar{E_V} = E_V - \\frac{1}{3} \\Delta_{so} " }, { "math_id": 2, "text": "\\Delta_{so} " }, { "math_id": 3, "text": " E_V" }, { "math_id": 4, "text": " \\Gamma " }, { "math_id": 5, "text": " \\bar{E_C} " }, { "math_id": 6, "text": "\\delta q = \\frac{0.16}{eV}|X_M - X_{SC}| + \\frac{0.035}{eV^2}|X_M - X_{SC}|^2" }, { "math_id": 7, "text": "X_M" }, { "math_id": 8, "text": "X_{SC}" }, { "math_id": 9, "text": "\\Phi_{bh} = \\frac{1}{2}[\\bar{E_C} - \\bar{E_V}] + \\delta_m = \\frac{1}{2} [\\bar{E_C} - E_V - \\frac{\\Delta_{so}}{3}] + \\delta_m" }, { "math_id": 10, "text": "\\delta_m " }, { "math_id": 11, "text": "\\Phi_{bh}" }, { "math_id": 12, "text": "\\Phi_{bh} = \\frac{1}{1+\\alpha N_{vs}} [\\Phi_M - X_M + D_J + \\alpha N_{vs}(E_g - \\Phi_0)] " }, { "math_id": 13, "text": "\\alpha " }, { "math_id": 14, "text": "N_{vs} =" }, { "math_id": 15, "text": "\\phi_M =" }, { "math_id": 16, "text": "D_J =" }, { "math_id": 17, "text": "E_G =" }, { "math_id": 18, "text": "\\Phi_0 = " }, { "math_id": 19, "text": "\\phi_{bh}" }, { "math_id": 20, "text": " \\alpha N_{vs} >> 1" }, { "math_id": 21, "text": " \\Phi_M " }, { "math_id": 22, "text": " \\alpha N_{vs} << 1" }, { "math_id": 23, "text": " \\Phi_{bh}" }, { "math_id": 24, "text": "D_J" }, { "math_id": 25, "text": "V" }, { "math_id": 26, "text": "(V_{if}-V)^{\\frac{1}{2}}" } ]
https://en.wikipedia.org/wiki?curid=14698685
14699765
Argument (complex analysis)
Angle of complex number about real axis In mathematics (particularly in complex analysis), the argument of a complex number z, denoted arg("z"), is the angle between the positive real axis and the line joining the origin and z, represented as a point in the complex plane, shown as formula_0 in Figure 1. By convention the positive real axis is drawn pointing rightward, the positive imaginary axis is drawn pointing upward, and complex numbers with positive real part are considered to have an anticlockwise argument with positive sign. When any real-valued angle is considered, the argument is a multivalued function operating on the nonzero complex numbers. The principal value of this function is single-valued, typically chosen to be the unique value of the argument that lies within the interval (−"π", "π"]. In this article the multi-valued function will be denoted arg("z") and its principal value will be denoted Arg("z"), but in some sources the capitalization of these symbols is exchanged. Definition. An argument of the complex number "z" = "x" + "iy", denoted arg("z"), is defined in two equivalent ways: The names "magnitude," for the modulus, and "phase", for the argument, are sometimes used equivalently. Under both definitions, it can be seen that the argument of any non-zero complex number has many possible values: firstly, as a geometrical angle, it is clear that whole circle rotations do not change the point, so angles differing by an integer multiple of 2π radians (a complete circle) are the same, as reflected by figure 2 on the right. Similarly, from the periodicity of sin and cos, the second definition also has this property. The argument of zero is usually left undefined. Alternative definition. The complex argument can also be defined algebraically in terms of complex roots as: formula_3 This definition removes reliance on other difficult-to-compute functions such as arctangent as well as eliminating the need for the piecewise definition. Because it's defined in terms of roots, it also inherits the principal branch of square root as its own principal branch. The normalization of formula_4 by dividing by formula_5 isn't necessary for convergence to the correct value, but it does speed up convergence and ensures that formula_6 is left undefined. Principal value. Because a complete rotation around the origin leaves a complex number unchanged, there are many choices which could be made for formula_0 by circling the origin any number of times. This is shown in figure 2, a representation of the multi-valued (set-valued) function formula_7, where a vertical line (not shown in the figure) cuts the surface at heights representing all the possible choices of angle for that point. When a well-defined function is required, then the usual choice, known as the "principal value", is the value in the open-closed interval (−"π" rad, "π" rad], that is from −"π" to "π" radians, excluding −"π" rad itself (equiv., from −180 to +180 degrees, excluding −180° itself). This represents an angle of up to half a complete circle from the positive real axis in either direction. Some authors define the range of the principal value as being in the closed-open interval [0, 2"π"). Notation. The principal value sometimes has the initial letter capitalized, as in Arg "z", especially when a general version of the argument is also being considered. Note that notation varies, so arg and Arg may be interchanged in different texts. 
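The root-based alternative definition given earlier in this section is easy to check numerically: after normalising z to unit modulus, the principal n-th root has argument Arg(z)/n, and a small angle is approximately its own sine, so n·Im((z/|z|)^(1/n)) converges to Arg(z). The sketch below is illustrative only, using Python's built-in complex arithmetic (whose ** operator gives the principal power for these inputs) and comparing against the atan2-based cmath.phase.

import cmath

def arg_from_roots(z, n=10**6):
    """Approximate arg(z) via the root-based definition n * Im((z/|z|)**(1/n))."""
    w = z / abs(z)
    return n * (w ** (1.0 / n)).imag

for z in (1 + 1j, -1 + 1j, -2 - 3j):
    print(f"z = {z}: root-based {arg_from_roots(z):.6f}, cmath.phase {cmath.phase(z):.6f}")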
The set of all possible values of the argument can be written in terms of Arg as: formula_8 Computing from the real and imaginary part. If a complex number is known in terms of its real and imaginary parts, then the function that calculates the principal value Arg is called the two-argument arctangent function, atan2: formula_9. The atan2 function is available in the math libraries of many programming languages, sometimes under a different name, and usually returns a value in the range (−π, π]. In some sources the argument is defined as formula_10 however this is correct only when "x" &gt; 0, where formula_11 is well-defined and the angle lies between formula_12 and formula_13 Extending this definition to cases where "x" is not positive is relatively involved. Specifically, one may define the principal value of the argument separately on the half-plane "x" &gt; 0 and the two quadrants with "x" &lt; 0, and then patch the definitions together: formula_14 See atan2 for further detail and alternative implementations. Realizations of the function in computer languages. Wolfram language (Mathematica). In Wolfram language, there's codice_0: codice_1 formula_15 or using the language's codice_2: codice_1 formula_16 codice_4 is formula_17 extended to work with infinities. codice_5 is codice_6 (i.e. it's still defined), while codice_7 doesn't return anything (i.e. it's undefined). Maple. Maple's codice_8 behaves the same as codice_0 in Wolfram language, except that codice_8 also returns formula_18 if codice_11 is the special floating-point value codice_12. Also, Maple doesn't have formula_19. MATLAB. MATLAB's codice_13 behaves the same as codice_0 in Wolfram language, except that it is formula_20 Unlike in Maple and Wolfram language, MATLAB's codice_15 is equivalent to codice_16. That is, codice_17 is formula_21. Identities. One of the main motivations for defining the principal value Arg is to be able to write complex numbers in modulus-argument form. Hence for any complex number z, formula_22 This is only really valid if z is non-zero, but can be considered valid for "z" = 0 if Arg(0) is considered as an indeterminate form—rather than as being undefined. Some further identities follow. If "z"1 and "z"2 are two non-zero complex numbers, then formula_23 If "z" ≠ 0 and n is any integer, then formula_24 formula_25 Using the complex logarithm. From formula_26, we get formula_27, alternatively formula_28. As we are taking the imaginary part, any normalisation by a real scalar will not affect the result. This is useful when one has the complex logarithm available. Extended argument. The extended argument of a number z (denoted as formula_29) is the set of all real numbers congruent to formula_30 modulo 2formula_18.formula_31 References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "z = r (\\cos \\varphi + i \\sin \\varphi) = r e^{i\\varphi}" }, { "math_id": 2, "text": "r = \\sqrt{x^2 + y^2}." }, { "math_id": 3, "text": "\\arg(z) = \\lim_{n\\to\\infty} n\\cdot \\operatorname{Im}{\\sqrt[n]{z/|z|}}" }, { "math_id": 4, "text": "z" }, { "math_id": 5, "text": "|z|" }, { "math_id": 6, "text": "\\arg(0)" }, { "math_id": 7, "text": "f(x,y)=\\arg(x+iy)" }, { "math_id": 8, "text": "\\arg(z) = \\{\\operatorname{Arg}(z) + 2\\pi n \\mid n \\in \\mathbb Z\\}." }, { "math_id": 9, "text": "\\operatorname{Arg}(x + iy) = \\operatorname{atan2}(y,\\, x)" }, { "math_id": 10, "text": "\\operatorname{Arg}(x + iy) = \\arctan(y/x)," }, { "math_id": 11, "text": "y/x" }, { "math_id": 12, "text": "-\\tfrac\\pi2" }, { "math_id": 13, "text": "\\tfrac\\pi2." }, { "math_id": 14, "text": "\\operatorname{Arg}(x + iy) = \\operatorname{atan2}(y,\\, x) =\n\\begin{cases}\n \\arctan\\left(\\frac y x\\right) &\\text{if } x > 0, \\\\[5mu]\n \\arctan\\left(\\frac y x\\right) + \\pi &\\text{if } x < 0 \\text{ and } y \\ge 0, \\\\[5mu]\n \\arctan\\left(\\frac y x\\right) - \\pi &\\text{if } x < 0 \\text{ and } y < 0, \\\\[5mu]\n +\\frac{\\pi}{2} &\\text{if } x = 0 \\text{ and } y > 0, \\\\[5mu]\n -\\frac{\\pi}{2} &\\text{if } x = 0 \\text{ and } y < 0, \\\\[5mu]\n \\text{undefined} &\\text{if } x = 0 \\text{ and } y = 0.\n\\end{cases}" }, { "math_id": 15, "text": "=\n\\begin{cases}\n \\text{undefined} &\\text{if } |x| = \\infty \\text{ and } |y|=\\infty, \\\\[5mu]\n 0 &\\text{if } x = 0 \\text{ and } y = 0, \\\\[5mu]\n 0 &\\text{if } x = \\infty, \\\\[5mu]\n \\pi &\\text{if } x = -\\infty, \\\\[5mu]\n \\pm\\frac{\\pi}{2} &\\text{if } y = \\pm\\infty, \\\\[5mu]\n \\operatorname{Arg}(x + y i) &\\text{otherwise}.\n\\end{cases}\n" }, { "math_id": 16, "text": "=\n\\begin{cases}\n 0 &\\text{if } x = 0 \\text{ and } y = 0, \\\\[5mu]\n \\text{ArcTan[x, y]} &\\text{otherwise}.\n\\end{cases}\n" }, { "math_id": 17, "text": "\\operatorname{atan2}(y, x)" }, { "math_id": 18, "text": "\\pi" }, { "math_id": 19, "text": "\\operatorname{atan2}" }, { "math_id": 20, "text": "\n\\begin{cases}\n \\frac{1\\pi}{4} &\\text{if } x = \\infty \\text{ and } y = \\infty, \\\\[5mu]\n -\\frac{1\\pi}{4} &\\text{if } x = \\infty \\text{ and } y = -\\infty, \\\\[5mu]\n \\frac{3\\pi}{4} &\\text{if } x = -\\infty \\text{ and } y = \\infty, \\\\[5mu]\n -\\frac{3\\pi}{4} &\\text{if } x = -\\infty \\text{ and } y = -\\infty.\n\\end{cases}\n" }, { "math_id": 21, "text": "0" }, { "math_id": 22, "text": "z = \\left| z \\right| e^{i \\operatorname{Arg} z}." }, { "math_id": 23, "text": "\\begin{align}\n \\operatorname{Arg}(z_1 z_2) &\\equiv \\operatorname{Arg}(z_1) + \\operatorname{Arg}(z_2) \\pmod{\\mathbb{R}/2\\pi\\mathbb{Z}}, \\\\\n \\operatorname{Arg}\\left(\\frac{z_1}{z_2}\\right) &\\equiv \\operatorname{Arg}(z_1) - \\operatorname{Arg}(z_2) \\pmod{\\mathbb{R}/2\\pi\\mathbb{Z}}.\n\\end{align}" }, { "math_id": 24, "text": "\\operatorname{Arg}\\left(z^n\\right) \\equiv n \\operatorname{Arg}(z) \\pmod{\\mathbb{R}/2\\pi\\mathbb{Z}}." 
}, { "math_id": 25, "text": "\\operatorname{Arg}\\biggl(\\frac{-1- i}{i}\\biggr) = \\operatorname{Arg}(-1 - i) - \\operatorname{Arg}(i) = -\\frac{3\\pi}{4} - \\frac{\\pi}{2} = -\\frac{5\\pi}{4}" }, { "math_id": 26, "text": "z = |z| e^{i \\operatorname{Arg}(z)}" }, { "math_id": 27, "text": "i \\operatorname{Arg}(z) = \\ln \\frac{z}{|z|}" }, { "math_id": 28, "text": "\\operatorname{Arg}(z) = \\operatorname{Im}(\\ln \\frac{z}{|z|}) = \\operatorname{Im}(\\ln z)" }, { "math_id": 29, "text": "\\overline{\\arg}(z)" }, { "math_id": 30, "text": "\\arg (z)" }, { "math_id": 31, "text": "\\overline{\\arg}(z) = \\arg (z) + 2k\\pi, \\forall k \\in \\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=14699765
147003
Hysteresis
Dependence of the state of a system on its history Hysteresis is the dependence of the state of a system on its history. For example, a magnet may have more than one possible magnetic moment in a given magnetic field, depending on how the field changed in the past. Plots of a single component of the moment often form a loop or hysteresis curve, where there are different values of one variable depending on the direction of change of another variable. This history dependence is the basis of memory in a hard disk drive and the remanence that retains a record of the Earth's magnetic field magnitude in the past. Hysteresis occurs in ferromagnetic and ferroelectric materials, as well as in the deformation of rubber bands and shape-memory alloys and many other natural phenomena. In natural systems, it is often associated with irreversible thermodynamic change such as phase transitions and with internal friction; and dissipation is a common side effect. Hysteresis can be found in physics, chemistry, engineering, biology, and economics. It is incorporated in many artificial systems: for example, in thermostats and Schmitt triggers, it prevents unwanted frequent switching. Hysteresis can be a dynamic lag between an input and an output that disappears if the input is varied more slowly; this is known as "rate-dependent" hysteresis. However, phenomena such as the magnetic hysteresis loops are mainly "rate-independent", which makes a durable memory possible. Systems with hysteresis are nonlinear, and can be mathematically challenging to model. Some hysteretic models, such as the Preisach model (originally applied to ferromagnetism) and the Bouc–Wen model, attempt to capture general features of hysteresis; and there are also phenomenological models for particular phenomena such as the Jiles–Atherton model for ferromagnetism. It is difficult to define hysteresis precisely. Isaak D. Mayergoyz wrote "...the very meaning of hysteresis varies from one area to another, from paper to paper and from author to author. As a result, a stringent mathematical definition of hysteresis is needed in order to avoid confusion and ambiguity.". &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Etymology and history. The term "hysteresis" is derived from , an Ancient Greek word meaning "deficiency" or "lagging behind". It was coined in 1881 by Sir James Alfred Ewing to describe the behaviour of magnetic materials. Some early work on describing hysteresis in mechanical systems was performed by James Clerk Maxwell. Subsequently, hysteretic models have received significant attention in the works of Ferenc Preisach (Preisach model of hysteresis), Louis Néel and Douglas Hugh Everett in connection with magnetism and absorption. A more formal mathematical theory of systems with hysteresis was developed in the 1970s by a group of Russian mathematicians led by Mark Krasnosel'skii. Types. Rate-dependent. One type of hysteresis is a lag between input and output. An example is a sinusoidal input X(t) that results in a sinusoidal output Y(t), but with a phase lag φ: formula_0 Such behavior can occur in linear systems, and a more general form of response is formula_1 where formula_2 is the instantaneous response and formula_3 is the impulse response to an impulse that occurred formula_4 time units in the past. 
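A minimal way to see this kind of lag is to drive a first-order system, y′ = (x − y)/τ, with a sinusoidal input: the output has the same frequency but lags in phase, so a plot of y against x traces a loop, and the loop collapses as the input is varied ever more slowly, which is the defining mark of rate-dependent hysteresis. The snippet below is an illustrative sketch with arbitrary parameter values, not a model of any particular physical system.

import math

def loop_half_width(period, tau=1.0, steps=50000, cycles=10):
    """Drive the first-order lag y' = (x - y)/tau with x = sin(2*pi*t/period) and
    return |y| at an upward zero crossing of x, i.e. half the width of the loop."""
    dt = cycles * period / steps
    y, prev_x, width = 0.0, 0.0, 0.0
    for k in range(steps):
        x = math.sin(2.0 * math.pi * (k * dt) / period)
        y += dt * (x - y) / tau
        if k > steps // 2 and prev_x < 0.0 <= x:   # after transients have died out
            width = abs(y)
        prev_x = x
    return width

for period in (10.0, 100.0, 1000.0):
    print(f"input period {period:7.1f}: loop half-width ~ {loop_half_width(period):.4f}")

Slowing the input by a factor of ten shrinks the loop by roughly the same factor once the period is much longer than τ, which is why a lag of this kind does not constitute a persistent memory.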
In the frequency domain, input and output are related by a complex "generalized susceptibility" that can be computed from formula_5; it is mathematically equivalent to a transfer function in linear filter theory and analogue signal processing. This kind of hysteresis is often referred to as "rate-dependent hysteresis". If the input is reduced to zero, the output continues to respond for a finite time. This constitutes a memory of the past, but a limited one because it disappears as the output decays to zero. The phase lag depends on the frequency of the input, and goes to zero as the frequency decreases. When rate-dependent hysteresis is due to dissipative effects like friction, it is associated with power loss. Rate-independent. Systems with "rate-independent hysteresis" have a "persistent" memory of the past that remains after the transients have died out. The future development of such a system depends on the history of states visited, but does not fade as the events recede into the past. If an input variable X(t) cycles from X0 to X1 and back again, the output Y(t) may be Y0 initially but a different value Y2 upon return. The values of Y(t) depend on the path of values that X(t) passes through but not on the speed at which it traverses the path. Many authors restrict the term hysteresis to mean only rate-independent hysteresis. Hysteresis effects can be characterized using the Preisach model and the generalized Prandtl−Ishlinskii model. In engineering. Control systems. In control systems, hysteresis can be used to filter signals so that the output reacts less rapidly than it otherwise would by taking recent system history into account. For example, a thermostat controlling a heater may switch the heater on when the temperature drops below A, but not turn it off until the temperature rises above B. (For instance, if one wishes to maintain a temperature of 20 °C then one might set the thermostat to turn the heater on when the temperature drops to below 18 °C and off when the temperature exceeds 22 °C). Similarly, a pressure switch can be designed to exhibit hysteresis, with pressure set-points substituted for temperature thresholds. Electronic circuits. Often, some amount of hysteresis is intentionally added to an electronic circuit to prevent unwanted rapid switching. This and similar techniques are used to compensate for contact bounce in switches, or noise in an electrical signal. A Schmitt trigger is a simple electronic circuit that exhibits this property. A latching relay uses a solenoid to actuate a ratcheting mechanism that keeps the relay closed even if power to the relay is terminated. Some positive feedback from the output to one input of a comparator can increase the natural hysteresis (a function of its gain) it exhibits. Hysteresis is essential to the workings of some memristors (circuit components which "remember" changes in the current passing through them by changing their resistance). Hysteresis can be used when connecting arrays of elements such as nanoelectronics, electrochrome cells and memory effect devices using passive matrix addressing. Shortcuts are made between adjacent components (see crosstalk) and the hysteresis helps to keep the components in a particular state while the other components change states. Thus, all rows can be addressed at the same time instead of individually. In the field of audio electronics, a noise gate often implements hysteresis intentionally to prevent the gate from "chattering" when signals close to its threshold are applied. 
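The thermostat and comparator examples above amount to a two-threshold rule: switch on below one set-point, switch off above another, and keep the previous state in between. A minimal sketch of that logic follows; the 18 °C and 22 °C set-points repeat the example in the text, while the temperature sequence is invented for illustration.

class HysteresisThermostat:
    """Heater control with separate on/off thresholds, as in a Schmitt trigger."""

    def __init__(self, on_below=18.0, off_above=22.0):
        self.on_below = on_below
        self.off_above = off_above
        self.heating = False

    def update(self, temperature):
        if temperature < self.on_below:
            self.heating = True
        elif temperature > self.off_above:
            self.heating = False
        # between the two thresholds the previous state is kept, so small
        # fluctuations around a single set-point cannot cause rapid switching
        return self.heating

thermostat = HysteresisThermostat()
for t in (21.0, 19.5, 17.8, 19.5, 21.0, 22.5, 21.0, 19.0):
    print(f"{t:4.1f} C -> heater {'on' if thermostat.update(t) else 'off'}")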
User interface design. A hysteresis is sometimes intentionally added to computer algorithms. The field of user interface design has borrowed the term hysteresis to refer to times when the state of the user interface intentionally lags behind the apparent user input. For example, a menu that was drawn in response to a mouse-over event may remain on-screen for a brief moment after the mouse has moved out of the trigger region and the menu region. This allows the user to move the mouse directly to an item on the menu, even if part of that direct mouse path is outside of both the trigger region and the menu region. For instance, right-clicking on the desktop in most Windows interfaces will create a menu that exhibits this behavior. Aerodynamics. In aerodynamics, hysteresis can be observed when decreasing the angle of attack of a wing after stall, regarding the lift and drag coefficients. The angle of attack at which the flow on top of the wing reattaches is generally lower than the angle of attack at which the flow separates during the increase of the angle of attack. Hydraulics. Hysteresis can be observed in the stage-flow relationship of a river during rapidly changing conditions such as passing of a flood wave. It is most pronounced in low gradient streams with steep leading edge hydrographs. Backlash. Moving parts within machines, such as the components of a gear train, normally have a small gap between them, to allow movement and lubrication. As a consequence of this gap, any reversal in direction of a drive part will not be passed on immediately to the driven part. This unwanted delay is normally kept as small as practicable, and is usually called backlash. The amount of backlash will increase with time as the surfaces of moving parts wear. In mechanics. Elastic hysteresis. In the elastic hysteresis of rubber, the area in the centre of a hysteresis loop is the energy dissipated due to material internal friction. Elastic hysteresis was one of the first types of hysteresis to be examined. The effect can be demonstrated using a rubber band with weights attached to it. If the top of a rubber band is hung on a hook and small weights are attached to the bottom of the band one at a time, it will stretch and get longer. As more weights are "loaded" onto it, the band will continue to stretch because the force the weights are exerting on the band is increasing. When each weight is taken off, or "unloaded", the band will contract as the force is reduced. As the weights are taken off, each weight that produced a specific length as it was loaded onto the band now contracts less, resulting in a slightly longer length as it is unloaded. This is because the band does not obey Hooke's law perfectly. The hysteresis loop of an idealized rubber band is shown in the figure. In terms of force, the rubber band was harder to stretch when it was being loaded than when it was being unloaded. In terms of time, when the band is unloaded, the effect (the length) lagged behind the cause (the force of the weights) because the length has not yet reached the value it had for the same weight during the loading part of the cycle. In terms of energy, more energy was required during the loading than the unloading, the excess energy being dissipated as thermal energy. Elastic hysteresis is more pronounced when the loading and unloading is done quickly than when it is done slowly. Some materials such as hard metals don't show elastic hysteresis under a moderate load, whereas other hard materials like granite and marble do. 
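The energy picture above can be made concrete: if force is recorded as a function of extension on the way up and again on the way down, the energy dissipated per cycle is the area enclosed between the two curves. The sketch below integrates two invented force-extension curves with the trapezoidal rule; the curves are purely illustrative and do not describe any real rubber band.

def trapezoid_area(xs, ys):
    """Trapezoidal-rule integral of y with respect to x."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0 for i in range(len(xs) - 1))

n = 100
xs = [0.10 * i / n for i in range(n + 1)]                  # extension in metres (invented)
loading = [200.0 * x + 800.0 * x**2 for x in xs]           # stiffer response while loading
unloading = [180.0 * x + 600.0 * x**2 for x in xs]         # softer response while unloading

dissipated = trapezoid_area(xs, loading) - trapezoid_area(xs, unloading)
print(f"energy dissipated per cycle: {dissipated:.3f} J")  # area enclosed by the loop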
Materials such as rubber exhibit a high degree of elastic hysteresis. When the intrinsic hysteresis of rubber is being measured, the material can be considered to behave like a gas. When a rubber band is stretched it heats up, and if it is suddenly released, it cools down perceptibly. These effects correspond to a large hysteresis from the thermal exchange with the environment and a smaller hysteresis due to internal friction within the rubber. This proper, intrinsic hysteresis can be measured only if the rubber band is thermally isolated. Small vehicle suspensions using rubber (or other elastomers) can achieve the dual function of springing and damping because rubber, unlike metal springs, has pronounced hysteresis and does not return all the absorbed compression energy on the rebound. Mountain bikes have made use of elastomer suspension, as did the original Mini car. The primary cause of rolling resistance when a body (such as a ball, tire, or wheel) rolls on a surface is hysteresis. This is attributed to the viscoelastic characteristics of the material of the rolling body. Contact angle hysteresis. The contact angle formed between a liquid and solid phase will exhibit a range of contact angles that are possible. There are two common methods for measuring this range of contact angles. The first method is referred to as the tilting base method. Once a drop is dispensed on the surface with the surface level, the surface is then tilted from 0° to 90°. As the drop is tilted, the downhill side will be in a state of imminent wetting while the uphill side will be in a state of imminent dewetting. As the tilt increases the downhill contact angle will increase and represents the advancing contact angle while the uphill side will decrease; this is the receding contact angle. The values for these angles just prior to the drop releasing will typically represent the advancing and receding contact angles. The difference between these two angles is the contact angle hysteresis. The second method is often referred to as the add/remove volume method. When the maximum liquid volume is removed from the drop without the interfacial area decreasing the receding contact angle is thus measured. When volume is added to the maximum before the interfacial area increases, this is the advancing contact angle. As with the tilt method, the difference between the advancing and receding contact angles is the contact angle hysteresis. Most researchers prefer the tilt method; the add/remove method requires that a tip or needle stay embedded in the drop which can affect the accuracy of the values, especially the receding contact angle. Bubble shape hysteresis. The equilibrium shapes of bubbles expanding and contracting on capillaries (blunt needles) can exhibit hysteresis depending on the relative magnitude of the maximum capillary pressure to ambient pressure, and the relative magnitude of the bubble volume at the maximum capillary pressure to the dead volume in the system. The bubble shape hysteresis is a consequence of gas compressibility, which causes the bubbles to behave differently across expansion and contraction. During expansion, bubbles undergo large non equilibrium jumps in volume, while during contraction the bubbles are more stable and undergo a relatively smaller jump in volume resulting in an asymmetry across expansion and contraction. 
The bubble shape hysteresis is qualitatively similar to the adsorption hysteresis, and as in the contact angle hysteresis, the interfacial properties play an important role in bubble shape hysteresis. The existence of the bubble shape hysteresis has important consequences in interfacial rheology experiments involving bubbles. As a result of the hysteresis, not all sizes of the bubbles can be formed on a capillary. Further the gas compressibility causing the hysteresis leads to unintended complications in the phase relation between the applied changes in interfacial area to the expected interfacial stresses. These difficulties can be avoided by designing experimental systems to avoid the bubble shape hysteresis. Adsorption hysteresis. Hysteresis can also occur during physical adsorption processes. In this type of hysteresis, the quantity adsorbed is different when gas is being added than it is when being removed. The specific causes of adsorption hysteresis are still an active area of research, but it is linked to differences in the nucleation and evaporation mechanisms inside mesopores. These mechanisms are further complicated by effects such as cavitation and pore blocking. In physical adsorption, hysteresis is evidence of mesoporosity-indeed, the definition of mesopores (2–50 nm) is associated with the appearance (50 nm) and disappearance (2 nm) of mesoporosity in nitrogen adsorption isotherms as a function of Kelvin radius. An adsorption isotherm showing hysteresis is said to be of Type IV (for a wetting adsorbate) or Type V (for a non-wetting adsorbate), and hysteresis loops themselves are classified according to how symmetric the loop is. Adsorption hysteresis loops also have the unusual property that it is possible to scan within a hysteresis loop by reversing the direction of adsorption while on a point on the loop. The resulting scans are called "crossing", "converging", or "returning", depending on the shape of the isotherm at this point. Matric potential hysteresis. The relationship between matric water potential and water content is the basis of the water retention curve. Matric potential measurements (Ψm) are converted to volumetric water content (θ) measurements based on a site or soil specific calibration curve. Hysteresis is a source of water content measurement error. Matric potential hysteresis arises from differences in wetting behaviour causing dry medium to re-wet; that is, it depends on the saturation history of the porous medium. Hysteretic behaviour means that, for example, at a matric potential (Ψm) of 5 kPa, the volumetric water content (θ) of a fine sandy soil matrix could be anything between 8% and 25%. Tensiometers are directly influenced by this type of hysteresis. Two other types of sensors used to measure soil water matric potential are also influenced by hysteresis effects within the sensor itself. Resistance blocks, both nylon and gypsum based, measure matric potential as a function of electrical resistance. The relation between the sensor's electrical resistance and sensor matric potential is hysteretic. Thermocouples measure matric potential as a function of heat dissipation. Hysteresis occurs because measured heat dissipation depends on sensor water content, and the sensor water content–matric potential relationship is hysteretic. As of 2002[ [update]], only desorption curves are usually measured during calibration of soil moisture sensors. 
Although it can be a source of significant error, the sensor-specific effect of hysteresis is generally ignored. In materials. Magnetic hysteresis. When an external magnetic field is applied to a ferromagnetic material such as iron, the atomic domains align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become "magnetized". Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive. The relationship between field strength H and magnetization M is not linear in such materials. If a magnet is demagnetized (H = M = 0) and the relationship between H and M is plotted for increasing levels of field strength, M follows the "initial magnetization curve". This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, M follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the H-M relationship is plotted for all strengths of applied magnetic field, the result is a hysteresis loop called the "main loop". The width of the middle section is twice the coercivity of the material. A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations. Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon. Physical origin. The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it does not. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording). Larger magnets are divided into regions called "domains". Across each domain, the magnetization does not vary; but between domains are relatively thin "domain walls" in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called "nucleation" and "denucleation"). Magnetic hysteresis models. The best-known empirical models of hysteresis are the Preisach and Jiles–Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry. However, these models lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model, with a more consistent thermodynamical foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011). Applications. 
There is a great variety of applications of hysteresis in ferromagnets. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, "hard" magnets (high coercivity) are desirable, such that as much energy as possible is absorbed during the write operation and the resultant magnetized information is not easily erased. On the other hand, magnetically "soft" (low coercivity) iron is used for the cores in electromagnets. The low coercivity minimizes the energy loss associated with hysteresis, as the magnetic field periodically reverses in the presence of an alternating current. The low energy loss during a hysteresis loop is the reason why soft iron is used for transformer cores and electric motors. Electrical hysteresis. Electrical hysteresis typically occurs in ferroelectric material, where domains of polarization contribute to the total polarization. Polarization is the electrical dipole moment (either C·m⁻² or C·m). The mechanism, an organization of the polarization into domains, is similar to that of magnetic hysteresis. Liquid–solid-phase transitions. Hysteresis manifests itself in state transitions when melting temperature and freezing temperature do not agree. For example, agar melts at 85 °C and solidifies at about 40 °C. That is to say, once agar is melted at 85 °C, it retains a liquid state until cooled to 40 °C. Therefore, from the temperatures of 40 to 85 °C, agar can be either solid or liquid, depending on which state it was in before. In biology. Cell biology and genetics. Hysteresis in cell biology often follows bistable systems where the same input state can lead to two different, stable outputs. Where bistability can lead to digital, switch-like outputs from the continuous inputs of chemical concentrations and activities, hysteresis makes these systems more resistant to noise. These systems are often characterized by higher values of the input required to switch into a particular state as compared to the input required to stay in the state, allowing for a transition that is not continuously reversible, and thus less susceptible to noise. Cells undergoing cell division exhibit hysteresis in that it takes a higher concentration of cyclins to switch them from G2 phase into mitosis than to stay in mitosis once begun. Biochemical systems can also show hysteresis-like output when slowly varying states that are not directly monitored are involved, as in the case of the cell cycle arrest in yeast exposed to mating pheromone. Here, the duration of cell cycle arrest depends not only on the final level of input Fus3, but also on the previously achieved Fus3 levels. This effect is achieved due to the slower time scales involved in the transcription of intermediate Far1, such that the total Far1 activity reaches its equilibrium value slowly, and for transient changes in Fus3 concentration, the response of the system depends on the Far1 concentration achieved with the transient value. Experiments on this type of hysteresis benefit from the ability to change the concentration of the inputs with time. The mechanisms are often elucidated by allowing independent control of the concentration of the key intermediate, for instance, by using an inducible promoter. 
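The switch-like behaviour described above can be illustrated with a minimal dynamical sketch. The Python script below integrates a single variable driven by a basal input plus cooperative positive feedback; it is a generic toy model with assumed parameter values, not a quantitative model of the cyclin or Fus3/Far1 circuits cited here. Sweeping the input slowly up and then back down shows that the input needed to switch the system on is larger than the input needed to keep it on, which is the hysteresis discussed in the text.

```python
# Minimal sketch of a hysteretic bistable switch (generic, assumed parameters;
# not a model of any specific system discussed in the text).
def dxdt(x, s, beta=1.8, K=1.0, delta=1.0):
    # production = basal input s + cooperative positive feedback (Hill n = 2),
    # minus first-order removal
    return s + beta * x**2 / (K**2 + x**2) - delta * x

def steady_state(x0, s, dt=0.01, steps=20000):
    # crude forward-Euler relaxation to (near) steady state
    x = x0
    for _ in range(steps):
        x += dt * dxdt(x, s)
    return x

inputs = [i * 0.005 for i in range(61)]          # basal input s from 0.0 to 0.3
up, down = [], []

x = 0.0                                          # start on the "off" branch
for s in inputs:                                 # slowly increase the input
    x = steady_state(x, s)
    up.append((s, x))

for s in reversed(inputs):                       # then slowly decrease it
    x = steady_state(x, s)
    down.append((s, x))

# Classify states as "on" if x exceeds a level between the two branches.
s_on = min(s for s, x in up if x > 0.6)          # input needed to switch on
s_off = min(s for s, x in down if x > 0.6)       # input needed to stay on
print(f"switch-on threshold  ~ {s_on:.3f}")
print(f"switch-off threshold ~ {s_off:.3f}")     # lower than s_on: hysteresis
```

Running the sweep in both directions and plotting x against s traces the familiar hysteresis loop; the gap between the two printed thresholds is what makes the switch robust to noise in the input.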
Darlington in his classic works on genetics discussed hysteresis of the chromosomes, by which he meant "failure of the external form of the chromosomes to respond immediately to the internal stresses due to changes in their molecular spiral", as they lie in a somewhat rigid medium in the limited space of the cell nucleus. In developmental biology, cell type diversity is regulated by long range-acting signaling molecules called morphogens that pattern uniform pools of cells in a concentration- and time-dependent manner. The morphogen sonic hedgehog (Shh), for example, acts on limb bud and neural progenitors to induce expression of a set of homeodomain-containing transcription factors to subdivide these tissues into distinct domains. It has been shown that these tissues have a 'memory' of previous exposure to Shh. In neural tissue, this hysteresis is regulated by a homeodomain (HD) feedback circuit that amplifies Shh signaling. In this circuit, expression of Gli transcription factors, the executors of the Shh pathway, is suppressed. Glis are processed to repressor forms (GliR) in the absence of Shh, but in the presence of Shh, a proportion of Glis are maintained as full-length proteins allowed to translocate to the nucleus, where they act as activators (GliA) of transcription. By reducing Gli expression then, the HD transcription factors reduce the total amount of Gli (GliT), so a higher proportion of GliT can be stabilized as GliA for the same concentration of Shh. Immunology. There is some evidence that T cells exhibit hysteresis in that it takes a lower signal threshold to activate T cells that have been previously activated. Ras GTPase activation is required for downstream effector functions of activated T cells. Triggering of the T cell receptor induces high levels of Ras activation, which results in higher levels of GTP-bound (active) Ras at the cell surface. Since higher levels of active Ras have accumulated at the cell surface in T cells that have been previously stimulated by strong engagement of the T cell receptor, weaker subsequent T cell receptor signals received shortly afterwards will deliver the same level of activation due to the presence of higher levels of already activated Ras as compared to a naïve cell. Neuroscience. The property by which some neurons do not return to their basal conditions from a stimulated condition immediately after removal of the stimulus is an example of hysteresis. Neuropsychology. Neuropsychology, in exploring the neural correlates of consciousness, interfaces with neuroscience, although the complexity of the central nervous system is a challenge to its study (that is, its operation resists easy reduction). Context-dependent memory and state-dependent memory show hysteretic aspects of neurocognition. Respiratory physiology. Lung hysteresis is evident when observing the compliance of a lung on inspiration versus expiration. The difference in compliance (Δvolume/Δpressure) is due to the additional energy required to overcome surface tension forces during inspiration to recruit and inflate additional alveoli. The transpulmonary pressure vs Volume curve of inhalation is different from the Pressure vs Volume curve of exhalation, the difference being described as hysteresis. Lung volume at any given pressure during inhalation is less than the lung volume at any given pressure during exhalation. Voice and speech physiology. A hysteresis effect may be observed in voicing onset versus offset. 
The threshold value of the subglottal pressure required to start the vocal fold vibration is lower than the threshold value at which the vibration stops, when other parameters are kept constant. In utterances of vowel-voiceless consonant-vowel sequences during speech, the intraoral pressure is lower at the voice onset of the second vowel compared to the voice offset of the first vowel, the oral airflow is lower, the transglottal pressure is larger and the glottal width is smaller. Ecology and epidemiology. Hysteresis is a commonly encountered phenomenon in ecology and epidemiology, where the observed equilibrium of a system can not be predicted solely based on environmental variables, but also requires knowledge of the system's past history. Notable examples include the theory of spruce budworm outbreaks and behavioral-effects on disease transmission. It is commonly examined in relation to critical transitions between ecosystem or community types in which dominant competitors or entire landscapes can change in a largely irreversible fashion. In ocean and climate science. Complex ocean and climate models rely on the principle. In economics. Economic systems can exhibit hysteresis. For example, export performance is subject to strong hysteresis effects: because of the fixed transportation costs it may take a big push to start a country's exports, but once the transition is made, not much may be required to keep them going. When some negative shock reduces employment in a company or industry, fewer employed workers then remain. As usually the employed workers have the power to set wages, their reduced number incentivizes them to bargain for even higher wages when the economy again gets better instead of letting the wage be at the equilibrium wage level, where the supply and demand of workers would match. This causes hysteresis: the unemployment becomes permanently higher after negative shocks. Permanently higher unemployment. The idea of hysteresis is used extensively in the area of labor economics, specifically with reference to the unemployment rate. According to theories based on hysteresis, severe economic downturns (recession) and/or persistent stagnation (slow demand growth, usually after a recession) cause unemployed individuals to lose their job skills (commonly developed on the job) or to find that their skills have become obsolete, or become demotivated, disillusioned or depressed or lose job-seeking skills. In addition, employers may use time spent in unemployment as a screening tool, i.e., to weed out less desired employees in hiring decisions. Then, in times of an economic upturn, recovery, or "boom", the affected workers will not share in the prosperity, remaining unemployed for long periods (e.g., over 52 weeks). This makes unemployment "structural", i.e., extremely difficult to reduce simply by increasing the aggregate demand for products and labor without causing increased inflation. That is, it is possible that a ratchet effect in unemployment rates exists, so a short-term rise in unemployment rates tends to persist. For example, traditional anti-inflationary policy (the use of recession to fight inflation) leads to a permanently higher "natural" rate of unemployment (more scientifically known as the NAIRU). 
This occurs first because inflationary expectations are "sticky" downward due to wage and price rigidities (and so adapt slowly over time rather than being approximately correct as in theories of rational expectations) and second because labor markets do not clear instantly in response to unemployment. The existence of hysteresis has been put forward as a possible explanation for the persistently high unemployment of many economies in the 1990s. Hysteresis has been invoked by Olivier Blanchard among others to explain the differences in long run unemployment rates between Europe and the United States. Labor market reform (usually meaning institutional change promoting more flexible wages, firing, and hiring) or strong demand-side economic growth may not therefore reduce this pool of long-term unemployed. Thus, specific targeted training programs are presented as a possible policy solution. However, the hysteresis hypothesis suggests such training programs are aided by persistently high demand for products (perhaps with incomes policies to avoid increased inflation), which reduces the transition costs out of unemployment and makes the move into paid employment easier. Models. Hysteretic models are mathematical models capable of simulating complex nonlinear behavior (hysteresis) characterizing mechanical systems and materials used in different fields of engineering, such as aerospace, civil, and mechanical engineering. Many of the mechanical systems and materials used in these fields exhibit hysteretic behavior. Each subject that involves hysteresis has models that are specific to the subject. In addition, there are hysteretic models that capture general features of many systems with hysteresis. An example is the Preisach model of hysteresis, which represents a hysteresis nonlinearity as a linear superposition of square loops called non-ideal relays. Many complex models of hysteresis arise from the simple parallel connection, or superposition, of elementary carriers of hysteresis termed hysterons. A simple and intuitive parametric description of various hysteresis loops may be found in the Lapshin model. Along with the smooth loops, substitution of trapezoidal, triangular or rectangular pulses instead of the harmonic functions allows piecewise-linear hysteresis loops frequently used in discrete automatics to be built in the model. There are implementations of the hysteresis loop model in Mathcad and in the R programming language. The Bouc–Wen model of hysteresis is often used to describe non-linear hysteretic systems. It was introduced by Bouc and extended by Wen, who demonstrated its versatility by producing a variety of hysteretic patterns. This model is able to capture, in analytical form, a range of hysteretic cycle shapes that match the behaviour of a wide class of hysteretic systems; therefore, given its versatility and mathematical tractability, the Bouc–Wen model has quickly gained popularity and has been extended and applied to a wide variety of engineering problems, including multi-degree-of-freedom (MDOF) systems, buildings, frames, bidirectional and torsional response of hysteretic systems, two- and three-dimensional continua, and soil liquefaction, among others. 
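To give a concrete sense of how such a model behaves, the Python sketch below integrates the standard scalar Bouc–Wen evolution equation under a prescribed sinusoidal displacement and accumulates the work done on the element; the parameter values are illustrative assumptions, not identified values for any particular device, structure or material.

```python
import math

# Minimal sketch of the scalar Bouc-Wen hysteresis model under a prescribed
# sinusoidal displacement x(t). All parameter values are illustrative.
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0       # shape parameters of the model
k, a = 1.0, 0.1                              # stiffness and post/pre-yield stiffness ratio
X0, omega = 2.0, 1.0                         # amplitude and frequency of the drive

dt = 1e-3
z, energy = 0.0, 0.0
loop = []
for i in range(int(4 * math.pi / (omega * dt))):    # two full loading cycles
    t = i * dt
    x = X0 * math.sin(omega * t)
    xdot = X0 * omega * math.cos(omega * t)
    # Bouc-Wen evolution equation for the hysteretic variable z
    zdot = (A * xdot
            - beta * abs(xdot) * abs(z) ** (n - 1) * z
            - gamma * xdot * abs(z) ** n)
    z += dt * zdot
    force = a * k * x + (1.0 - a) * k * z            # total restoring force
    energy += force * xdot * dt                      # work done on the element
    loop.append((x, force))

print(f"energy dissipated over two cycles: {energy:.3f}")
print("sample (displacement, force) points:",
      [(round(x, 2), round(f, 2)) for x, f in loop[::2000]])
```

Plotting the stored displacement–force pairs traces a hysteresis loop whose area equals the dissipated energy, and varying beta, gamma and n changes the loop's shape, which is the versatility referred to above.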
The Bouc–Wen model and its variants/extensions have been used in applications of structural control, in particular in the modeling of the behaviour of magnetorheological dampers, base isolation devices for buildings and other kinds of damping devices; it has also been used in the modelling and analysis of structures built of reinforced concrete, steel, masonry and timber. The most important extension of the Bouc–Wen model was carried out by Baber and Noori and later by Noori and co-workers. That extended model, named BWBN, can reproduce the complex shear pinching or slip-lock phenomenon that the earlier model could not. The BWBN model has been used in a wide spectrum of applications, and implementations are available in software such as OpenSees. Hysteretic models may have a generalized displacement formula_6 as input variable and a generalized force formula_7 as output variable, or vice versa. In particular, in rate-independent hysteretic models, the output variable does not depend on the rate of variation of the input. Rate-independent hysteretic models can be classified into four different categories depending on the type of equation that needs to be solved to compute the output variable. List of models. Some notable hysteretic models are listed below with their associated fields. Energy. When hysteresis occurs with extensive and intensive variables, the work done on the system is the area under the hysteresis graph. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": " \\begin{align}\nX(t) &= X_0 \\sin \\omega t \\\\ Y(t) &= Y_0 \\sin\\left(\\omega t-\\varphi\\right).\n\\end{align}" }, { "math_id": 1, "text": " Y(t) = \\chi_\\text{i} X(t) + \\int_0^{\\infty} \\Phi_\\text{d} (\\tau) X(t-\\tau) \\, \\mathrm{d}\\tau, " }, { "math_id": 2, "text": "\\chi_\\text{i}" }, { "math_id": 3, "text": "\\Phi_d(\\tau)" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "\\Phi_d" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=147003
14703145
Aubin–Lions lemma
In mathematics, the Aubin–Lions lemma (or theorem) is a result in the theory of Sobolev spaces of Banach space-valued functions, which provides a compactness criterion that is useful in the study of nonlinear evolutionary partial differential equations. Typically, to prove the existence of solutions one first constructs approximate solutions (for example, by a Galerkin method or by mollification of the equation), then uses the compactness lemma to show that there is a convergent subsequence of approximate solutions whose limit is a solution. The result is named after the French mathematicians Jean-Pierre Aubin and Jacques-Louis Lions. In the original proof by Aubin, the spaces "X"0 and "X"1 in the statement of the lemma were assumed to be reflexive, but this assumption was removed by Simon, so the result is also referred to as the Aubin–Lions–Simon lemma. Statement of the lemma. Let "X"0, "X" and "X"1 be three Banach spaces with "X"0 ⊆ "X" ⊆ "X"1. Suppose that "X"0 is compactly embedded in "X" and that "X" is continuously embedded in "X"1. For formula_0, let formula_1 Then: (i) if formula_2, then the embedding of W into formula_3 is compact; (ii) if formula_4 and formula_5, then the embedding of W into formula_6 is compact. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "1\\leq p, q\\leq\\infty" }, { "math_id": 1, "text": "W = \\{ u \\in L^p ([0, T]; X_0) \\mid \\dot{u} \\in L^q ([0, T]; X_1) \\}." }, { "math_id": 2, "text": "p<\\infty" }, { "math_id": 3, "text": "L^p([0,T];X)" }, { "math_id": 4, "text": "p=\\infty" }, { "math_id": 5, "text": "q>1" }, { "math_id": 6, "text": "C([0,T];X)" } ]
https://en.wikipedia.org/wiki?curid=14703145
14703193
Type inhabitation
In type theory, a branch of mathematical logic, in a given typed calculus, the type inhabitation problem for this calculus is the following problem: given a type formula_0 and a typing environment formula_1, does there exist a formula_2-term M such that formula_3? With an empty type environment, such an M is said to be an inhabitant of formula_0. Relationship to logic. In the case of simply typed lambda calculus, a type has an inhabitant if and only if its corresponding proposition is a tautology of minimal implicative logic. Similarly, a System F type has an inhabitant if and only if its corresponding proposition is a tautology of intuitionistic second-order logic. Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types. Formal properties. For most typed calculi, the type inhabitation problem is very hard. Richard Statman proved that for simply typed lambda calculus the type inhabitation problem is PSPACE-complete. For other calculi, like System F, the problem is even undecidable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tau" }, { "math_id": 1, "text": "\\Gamma" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "\\Gamma \\vdash M : \\tau" } ]
https://en.wikipedia.org/wiki?curid=14703193
14703713
Shock-capturing method
In computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. The computation of flow containing shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables such as pressure, temperature, density, and velocity across the shock. Method. In shock-capturing methods, the governing equations of inviscid flows (i.e. Euler equations) are cast in conservation form and any shock waves or discontinuities are computed as part of the solution. Here, no special treatment is employed to take care of the shocks themselves, which is in contrast to the shock-fitting method, where shock waves are explicitly introduced in the solution using appropriate shock relations (Rankine–Hugoniot relations). The shock waves predicted by shock-capturing methods are generally not sharp and may be smeared over several grid elements. Also, classical shock-capturing methods have the disadvantage that unphysical oscillations (Gibbs phenomenon) may develop near strong shocks. Euler equations. The Euler equations are the governing equations for inviscid flow. To implement shock-capturing methods, the conservation form of the Euler equations are used. For a flow without external heat transfer and work transfer (isoenergetic flow), the conservation form of the Euler equation in Cartesian coordinate system can be written as formula_0 where the vectors U, F, G, and H are given by formula_1 where formula_2 is the total energy (internal energy + kinetic energy + potential energy) per unit mass. That is formula_3 The Euler equations may be integrated with any of the shock-capturing methods available to obtain the solution. Classical and modern shock capturing methods. From a historical point of view, shock-capturing methods can be classified into two general categories: classical methods and modern shock capturing methods (also called high-resolution schemes). Modern shock-capturing methods are generally upwind biased in contrast to classical symmetric or central discretizations. Upwind-biased differencing schemes attempt to discretize hyperbolic partial differential equations by using differencing based on the direction of the flow. On the other hand, symmetric or central schemes do not consider any information about the direction of wave propagation. Regardless of the shock-capturing scheme used, a stable calculation in the presence of shock waves requires a certain amount of numerical dissipation, in order to avoid the formation of unphysical numerical oscillations. In the case of classical shock-capturing methods, numerical dissipation terms are usually linear and the same amount is uniformly applied at all grid points. Classical shock-capturing methods only exhibit accurate results in the case of smooth and weak shock solutions, but when strong shock waves are present in the solution, non-linear instabilities and oscillations may arise across discontinuities. Modern shock-capturing methods usually employ nonlinear numerical dissipation, where a feedback mechanism adjusts the amount of artificial dissipation added in accord with the features in the solution. Ideally, artificial numerical dissipation needs to be added only in the vicinity of shocks or other sharp features, and regions of smooth flow must be left unmodified. These schemes have proven to be stable and accurate even for problems containing strong shock waves. 
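As a concrete illustration of the classical approach, the Python sketch below advances the one-dimensional Euler equations with a first-order finite-volume scheme using a Rusanov (local Lax–Friedrichs) interface flux on a Sod shock-tube problem. The grid size, CFL number and initial data are illustrative choices made for this example; the simple dissipation term proportional to the largest local wave speed is what keeps the computation stable, at the cost of smearing the captured shock over several cells, as described above.

```python
import numpy as np

# First-order finite-volume scheme for the 1D Euler equations with a
# Rusanov (local Lax-Friedrichs) flux: a minimal classical shock-capturing
# sketch. Grid size, CFL number and the Sod-tube initial data are
# illustrative choices, not taken from the sources cited in this article.
gamma = 1.4

def primitive_to_conserved(rho, u, p):
    E = p / (gamma - 1.0) + 0.5 * rho * u**2        # total energy per unit volume
    return np.array([rho, rho * u, E])

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def max_speed(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.abs(u) + np.sqrt(gamma * p / rho)      # |u| + local sound speed

N, cfl, t_end = 400, 0.5, 0.2
x = (np.arange(N) + 0.5) / N
dx = 1.0 / N

# Sod shock tube: high-pressure gas on the left, low-pressure gas on the right.
U = np.zeros((3, N))
U[:, x < 0.5] = primitive_to_conserved(1.0, 0.0, 1.0)[:, None]
U[:, x >= 0.5] = primitive_to_conserved(0.125, 0.0, 0.1)[:, None]

t = 0.0
while t < t_end:
    dt = min(cfl * dx / max_speed(U).max(), t_end - t)
    UL, UR = U[:, :-1], U[:, 1:]
    amax = np.maximum(max_speed(UL), max_speed(UR))
    # Rusanov flux: central average plus dissipation scaled by the local wave speed.
    F = 0.5 * (flux(UL) + flux(UR)) - 0.5 * amax * (UR - UL)
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])   # update interior cells; ends held fixed
    t += dt

rho = U[0]
print("density behind the shock (x ~ 0.8):", round(float(rho[int(0.8 * N)]), 3))
```

Higher-resolution schemes such as the TVD and MUSCL methods mentioned below replace this uniform, first-order dissipation with solution-adaptive limiting, which sharpens the captured discontinuities while retaining stability.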
Some of the well-known classical shock-capturing methods include the MacCormack method (uses a discretization scheme for the numerical solution of hyperbolic partial differential equations), Lax–Wendroff method (based on finite differences, uses a numerical method for the solution of hyperbolic partial differential equations), and Beam–Warming method. Examples of modern shock-capturing schemes include higher-order total variation diminishing (TVD) schemes first proposed by Harten, flux-corrected transport scheme introduced by Boris and Book, Monotonic Upstream-centered Schemes for Conservation Laws (MUSCL) based on Godunov approach and introduced by van Leer, various essentially non-oscillatory schemes (ENO) proposed by Harten et al., and the piecewise parabolic method (PPM) proposed by Colella and Woodward. Another important class of high-resolution schemes belongs to the approximate Riemann solvers proposed by Roe and by Osher. The schemes proposed by Jameson and Baker, where linear numerical dissipation terms depend on nonlinear switch functions, fall in between the classical and modern shock-capturing methods.
[ { "math_id": 0, "text": "\n \\frac{\\partial {\\mathbf U}}{\\partial t} + \\frac{\\partial {\\mathbf F}}{\\partial x} + \\frac{\\partial {\\mathbf G}}{\\partial y} + \\frac{\\partial {\\mathbf H}}{\\partial z} = 0\n" }, { "math_id": 1, "text": "\n\\mathbf U =\n\\begin{bmatrix}\n \\rho \\\\\n \\rho u \\\\\n \\rho v \\\\\n \\rho w \\\\\n \\rho e_t \\\\\n\\end{bmatrix} , \\quad\n\n\\mathbf F =\n\\begin{bmatrix}\n \\rho u\\\\\n \\rho u^2 + p \\\\\n \\rho uv \\\\\n \\rho uw \\\\\n (\\rho e_t + p)u \\\\\n\\end{bmatrix} , \\quad\n\n\\mathbf G =\n\\begin{bmatrix}\n \\rho v\\\\\n \\rho vu \\\\\n \\rho v^2 + p \\\\\n \\rho vw \\\\\n (\\rho e_t + p)v \\\\\n\\end{bmatrix} , \\quad\n\n\\mathbf H =\n\\begin{bmatrix}\n \\rho w\\\\\n \\rho wu \\\\\n \\rho wv \\\\\n \\rho w^2 + p \\\\\n (\\rho e_t + p)w \\\\\n\\end{bmatrix}\n" }, { "math_id": 2, "text": "e_t" }, { "math_id": 3, "text": "\n e_t = e + \\frac{u^2 + v^2 + w^2}{2} + gz\n" } ]
https://en.wikipedia.org/wiki?curid=14703713
1470432
Riesz function
Mathematical function In mathematics, the Riesz function is an entire function defined by Marcel Riesz in connection with the Riemann hypothesis, by means of the power series formula_0 If we set formula_1 we may define it in terms of the coefficients of the Laurent series development of the hyperbolic (or equivalently, the ordinary) cotangent around zero. If formula_2 then formula_3 may be defined as formula_4 The values of formula_5 approach one for increasing k, and comparing the series for the Riesz function with that for formula_6 shows that it defines an entire function. Alternatively, "F" may be defined as formula_7 formula_8 denotes the rising factorial power in the notation of D. E. Knuth and the number "formula_9" are the Bernoulli number. The series is one of alternating terms and the function quickly tends to minus infinity for increasingly negative values of "formula_10". Positive values of "formula_10" are more interesting and delicate. Riesz criterion. It can be shown that formula_11 for any exponent "formula_12" larger than formula_13, where this is big O notation; taking values both positive and negative. Riesz showed that the Riemann hypothesis is equivalent to the claim that the above is true for any "e" larger than formula_14. In the same paper, he added a slightly pessimistic note too: «"Je ne sais pas encore decider si cette condition facilitera la vérification de l'hypothèse"» ("I can't decide if this condition will facilitate the verification of the hypothesis yet"). Mellin transform of the Riesz function. The Riesz function is related to the Riemann zeta function via its Mellin transform. If we take formula_15 we see that if formula_16 then formula_17 converges, whereas from the growth condition we have that if formula_18 then formula_19 converges. Putting this together, we see the Mellin transform of the Riesz function is defined on the strip formula_20. On this strip, we have (cf. Ramanujan's master theorem) formula_21 From the inverse Mellin transform, we now get an expression for the Riesz function, as formula_22 where c is between minus one and minus one-half. If the Riemann hypothesis is true, we can move the line of integration to any value less than minus one-fourth, and hence we get the equivalence between the fourth-root rate of growth for the Riesz function and the Riemann hypothesis. Calculation of the Riesz function. The Maclaurin series coefficients of "formula_3" increase in absolute value until they reach their maximum at the 40th term of formula_23. By the 109th term they have dropped below one in absolute value. Taking the first 1000 terms suffices to give a very accurate value for formula_24 for formula_25. However, this would require evaluating a polynomial of degree 1000 either using rational arithmetic with the coefficients of large numerator or denominator, or using floating point computations of over 100 digits. An alternative is to use the inverse Mellin transform defined above and numerically integrate. Neither approach is computationally easy. Another approach is to use acceleration of convergence. We have formula_26 Since formula_5 approaches one as k grows larger, the terms of this series approach formula_27. Indeed, Riesz noted that: formula_28 Using Kummer's method for accelerating convergence gives formula_29 with an improved rate of convergence. 
Continuing this process leads to a new series for the Riesz function with much better convergence properties: formula_30 formula_31 Here formula_32 is the Möbius mu function, and the rearrangement of terms is justified by absolute convergence. We may now apply Kummer's method again, and write formula_33 the terms of which eventually decrease as the inverse fourth power of "formula_34". The above series are absolutely convergent everywhere, and hence may be differentiated term by term, leading to the following expression for the derivative of the Riesz function: formula_35 which may be rearranged as formula_36 Marek Wolf, assuming the Riemann hypothesis, has shown that for large formula_10: formula_37 where formula_38 is the imaginary part of the first nontrivial zero of the zeta function, formula_39 and formula_40. This agrees with the general theorems about zeros of the Riesz function proved in 1964 by Herbert Wilf. A plot of the function over the range 0 to 50, so far as it goes, does not indicate very rapid growth and perhaps bodes well for the truth of the Riemann hypothesis. Hardy–Littlewood criterion. G. H. Hardy and J. E. Littlewood proved, by similar methods, that the Riemann hypothesis is equivalent to the claim that the following will be true for any exponent "formula_12" larger than formula_41: formula_42
[ { "math_id": 0, "text": "{\\rm Riesz}(x) = \\sum_{k=1}^\\infty \\frac{(-1)^{k-1}x^k}{(k-1)! \\zeta(2k)}=x \\sum_{n=1}^\\infty \\frac{\\mu(n)}{n^2} \\exp\\left(\\frac{-x}{n^2}\\right)." }, { "math_id": 1, "text": "F(x) = \\frac12 {\\rm Riesz}(4 \\pi^2 x)" }, { "math_id": 2, "text": "\\frac{x}{2} \\coth \\frac{x}{2} = \\sum_{n=0}^\\infty c_n x^n = 1 + \\frac{1}{12} x^2 - \\frac{1}{720}x^4 + \\cdots" }, { "math_id": 3, "text": "F" }, { "math_id": 4, "text": "F(x) = \\sum_{k=1}^\\infty \\frac{x^k}{c_{2k}(k-1)!} = 12x - 720x^2 + 15120x^3 - \\cdots" }, { "math_id": 5, "text": "\\zeta(2k)" }, { "math_id": 6, "text": "x\\exp(-x)" }, { "math_id": 7, "text": " F(x) = \\sum_{k=1}^{\\infty}\\frac{k^{\\overline{k+1}}x^{k}}{B_{2k}}. \\ " }, { "math_id": 8, "text": "n^{\\overline{k}}" }, { "math_id": 9, "text": "B_n" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "\\operatorname{Riesz}(x) = O(x^e)\\qquad (\\text{as }x\\to\\infty)" }, { "math_id": 12, "text": "e" }, { "math_id": 13, "text": "1/2" }, { "math_id": 14, "text": "1/4" }, { "math_id": 15, "text": "{\\mathcal M}({\\rm Riesz}(z)) = \\int_0^\\infty {\\rm Riesz}(z) z^s \\frac{dz}{z}" }, { "math_id": 16, "text": "\\Re(s)>-1" }, { "math_id": 17, "text": "\\int_0^1 {\\rm Riesz}(z) z^s \\frac{dz}{z}" }, { "math_id": 18, "text": "\\Re(s) < -\\frac{1}{2} " }, { "math_id": 19, "text": "\\int_1^\\infty {\\rm Riesz}(z) z^s \\frac{dz}{z}" }, { "math_id": 20, "text": "-1 < \\Re(s) < -\\frac12" }, { "math_id": 21, "text": "\\frac{\\Gamma(s+1)}{\\zeta(-2s)} = {\\mathcal M}({\\rm Riesz}(z)) " }, { "math_id": 22, "text": "{\\rm Riesz}(z) = \\int_{c - i \\infty}^{c+i \\infty} \\frac{\\Gamma(s+1)}{\\zeta(-2s)} z^{-s} ds " }, { "math_id": 23, "text": "-1.753\\times 10^{17}" }, { "math_id": 24, "text": "F(z)" }, { "math_id": 25, "text": "|z| < 9" }, { "math_id": 26, "text": "{\\rm Riesz}(x) = \\sum_{k=1}^\\infty \\frac{(-1)^{k+1}x^k}{(k-1)! \\zeta(2k)}." }, { "math_id": 27, "text": "\\sum_{k=1}^\\infty \\frac{(-1)^{k+1}x^k}{(k-1)!} = x \\exp(-x)" }, { "math_id": 28, "text": "\\ {\\sum_{n=1}^\\infty {\\rm Riesz}(x/n^2) = x \\exp(-x)}." }, { "math_id": 29, "text": "{\\rm Riesz}(x) = x \\exp(-x) - \\sum_{k=1}^\\infty \\left(\\zeta(2k) -1\\right) \\left(\\frac{(-1)^{k+1}}\n{(k-1)! \\zeta(2k)}\\right)x^k" }, { "math_id": 30, "text": "{\\rm Riesz}(x) = \\sum_{k=1}^\\infty \\frac{(-1)^{k+1}x^k}{(k-1)! \\zeta(2k)}\n= \\sum_{k=1}^\\infty \\frac{(-1)^{k+1}x^k}{(k-1)!} \\left(\\sum_{n=1}^\\infty \\mu(n)n^{-2k}\\right)" }, { "math_id": 31, "text": " \\sum_{k=1}^\\infty \\sum_{n=1}^\\infty \\frac{(-1)^{k+1}\\left(x/n^2\\right)^k}{(k-1)!}= x \\sum_{n=1}^\\infty \\frac{\\mu(n)}{n^2} \\exp\\left(-\\frac{x}{n^2}\\right)." }, { "math_id": 32, "text": "\\mu" }, { "math_id": 33, "text": "{\\rm Riesz}(x) = x \\left(\\frac{6}{\\pi^2} + \\sum_{n=1}^\\infty \\frac{\\mu(n)}{n^2}\\left(\\exp\\left(-\\frac{x}{n^2}\\right) - 1\\right)\\right)" }, { "math_id": 34, "text": "n" }, { "math_id": 35, "text": "{\\rm Riesz}'(x) = \\frac{\\text{Riesz}(x)}{x} - x\\left(\\sum_{n=1}^\\infty \\frac{\\mu(n)}{n^4} \\exp\\left(-\\frac{x}{n^2}\\right)\\right)" }, { "math_id": 36, "text": "{\\rm Riesz}'(x) = \\frac{\\text{Riesz}(x)}{x} + x\\left(-\\frac{90}{\\pi^4} + \\sum_{n=1}^\\infty \\frac{\\mu(n)}{n^4} \\left(1-\\exp\\left(-\\frac{x}{n^2}\\right)\\right)\\right)." }, { "math_id": 37, "text": "{\\rm Riesz}(x) \\sim K x^{1/4} \\sin\\left(\\phi-\\frac{1}{2}\\gamma_1\\log(x)\\right)" }, { "math_id": 38, "text": "\\gamma_1=14.13472514..." 
}, { "math_id": 39, "text": " K =\n7.7750627...\\times 10^{-5}" }, { "math_id": 40, "text": "\\phi=-0.54916...= -31,46447^{\\circ}" }, { "math_id": 41, "text": "-1/4" }, { "math_id": 42, "text": "\\sum_{k=1}^\\infty \\frac{(-x)^k}{k! \\zeta(2k+1)} = O(x^{e})\\qquad (\\text{as }x\\to\\infty).\n" } ]
https://en.wikipedia.org/wiki?curid=1470432
1470466
Braided monoidal category
Object in category theory In mathematics, a "commutativity constraint" formula_0 on a monoidal category "formula_1" is a choice of isomorphism formula_2 for each pair of objects "A" and "B" which form a "natural family." In particular, to have a commutativity constraint, one must have formula_3 for all pairs of objects formula_4. A braided monoidal category is a monoidal category formula_1 equipped with a braiding—that is, a commutativity constraint formula_0 that satisfies axioms including the hexagon identities defined below. The term "braided" references the fact that the braid group plays an important role in the theory of braided monoidal categories. Partly for this reason, braided monoidal categories and other topics are related in the theory of knot invariants. Alternatively, a braided monoidal category can be seen as a tricategory with one 0-cell and one 1-cell. Braided monoidal categories were introduced by André Joyal and Ross Street in a 1986 preprint. A modified version of this paper was published in 1993. The hexagon identities. For formula_1 along with the commutativity constraint formula_5 to be called a braided monoidal category, the following hexagonal diagrams must commute for all objects formula_6. Here formula_7 is the associativity isomorphism coming from the monoidal structure on formula_1: Properties. Coherence. It can be shown that the natural isomorphism formula_5 along with the maps formula_8 coming from the monoidal structure on the category formula_1, satisfy various coherence conditions, which state that various compositions of structure maps are equal. In particular: formula_10 as maps formula_11. Here we have left out the associator maps. Variations. There are several variants of braided monoidal categories that are used in various contexts. See, for example, the expository paper of Savage (2009) for an explanation of symmetric and coboundary monoidal categories, and the book by Chari and Pressley (1995) for ribbon categories. Symmetric monoidal categories. A braided monoidal category is called symmetric if formula_5 also satisfies formula_12 for all pairs of objects formula_13 and formula_14. In this case the action of formula_5 on an formula_9-fold tensor product factors through the symmetric group. Ribbon categories. A braided monoidal category is a "ribbon category" if it is rigid, and it may preserve quantum trace and co-quantum trace. Ribbon categories are particularly useful in constructing knot invariants. Coboundary monoidal categories. A coboundary or “cactus” monoidal category is a monoidal category formula_15 together with a family of natural isomorphisms formula_16 with the following properties: The first property shows us that formula_18, thus allowing us to omit the analog to the second defining diagram of a braided monoidal category and ignore the associator maps as implied.
[ { "math_id": 0, "text": " \\gamma " }, { "math_id": 1, "text": "\\mathcal{C}" }, { "math_id": 2, "text": " \\gamma_{A,B} : A\\otimes B \\rightarrow B\\otimes A" }, { "math_id": 3, "text": " A \\otimes B \\cong B \\otimes A " }, { "math_id": 4, "text": " A,B \\in \\mathcal{C}" }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": "A,B,C \\in \\mathcal{C}" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": " \\alpha, \\lambda, \\rho " }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": " (\\gamma_{B,C} \\otimes \\text{Id}) \\circ (\\text{Id} \\otimes \\gamma_{A, C}) \\circ (\\gamma_{A,B} \\otimes \\text{Id}) =\n(\\text{Id} \\otimes \\gamma_{A,B}) \\circ (\\gamma_{A,C} \\otimes \\text{Id}) \\circ (\\text{Id} \\otimes \\gamma_{B, C})\n" }, { "math_id": 11, "text": " A \\otimes B \\otimes C \\rightarrow C \\otimes B \\otimes A" }, { "math_id": 12, "text": " \\gamma_{B,A} \\circ \\gamma_{A,B} = \\text{Id}" }, { "math_id": 13, "text": "A" }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": " (C, \\otimes, \\text{Id}) " }, { "math_id": 16, "text": " \\gamma_{A,B}: A\\otimes B \\to B\\otimes A " }, { "math_id": 17, "text": " \\gamma_{B \\otimes A, C} \\circ (\\gamma_{A,B} \\otimes \\text{Id}) = \\gamma_{A, C \\otimes B} \\circ (\\text{Id} \\otimes \\gamma_{B,C})" }, { "math_id": 18, "text": " \\gamma^{-1}_{A,B} = \\gamma_{B,A} " }, { "math_id": 19, "text": " \\gamma (v \\otimes w) = w \\otimes v " }, { "math_id": 20, "text": " U_q(\\mathfrak{g})" } ]
https://en.wikipedia.org/wiki?curid=1470466
1470583
Aperture (antenna)
In electromagnetics and antenna theory, the aperture of an antenna is defined as "A surface, near or on an antenna, on which it is convenient to make assumptions regarding the field values for the purpose of computing fields at external points. The aperture is often taken as that portion of a plane surface near the antenna, perpendicular to the direction of maximum radiation, through which the major part of the radiation passes." Effective area. The effective area of an antenna is defined as "In a given direction, the ratio of the available power at the terminals of a receiving antenna to the power flux density of a plane wave incident on the antenna from that direction, the wave being polarization matched to the antenna." Of particular note in this definition is that both effective area and power flux density are functions of incident angle of a plane wave. Assume a plane wave from a particular direction formula_0, which are the azimuth and elevation angles relative to the array normal, has a "power flux density" formula_1; this is the amount of power passing through a unit area normal to the direction of the plane wave of one square meter. By definition, if an antenna delivers formula_2 watts to the transmission line connected to its output terminals when irradiated by a uniform field of power density formula_3 watts per square meter, the antenna's effective area formula_4 for the direction of that plane wave is given by formula_5 The power formula_2 accepted by the antenna (the power at the antenna terminals) is less than the power formula_6 received by an antenna by the radiation efficiency formula_7 of the antenna. formula_6 is equal to the power density of the electromagnetic energy formula_8, where formula_9 is the unit vector normal to the array aperture, multiplied by the physical aperture area formula_10. The incoming radiation is assumed to have the same polarization as the antenna. Therefore, formula_11 and formula_12 The effective area of an antenna or aperture is based upon a "receiving" antenna. However, due to reciprocity, an antenna's directivity in receiving and transmitting are identical, so the power transmitted by an antenna in different directions (the radiation pattern) is also proportional to the effective area formula_13. When no direction is specified, formula_13 is understood to refer to its maximal value. Effective length. Most antenna designs are not defined by a physical area but consist of wires or thin rods; then the effective aperture bears no clear relation to the size or area of the antenna. An alternate measure of antenna response that has a greater relationship to the physical length of such antennas is effective length formula_14 measured in metres, which is defined for a receiving antenna as formula_15 where formula_16 is the open-circuit voltage appearing across the antenna's terminals, formula_17 is the electric field strength of the radio signal, in volts per metre, at the antenna. The longer the effective length, the greater is the voltage appearing at its terminals. However, the actual power implied by that voltage depends on the antenna's feedpoint impedance, so this cannot be directly related to antenna gain, which "is" a measure of received power (but does not directly specify voltage or current). For instance, a half-wave dipole has a much longer effective length than a short dipole. 
However the effective area of the short dipole is almost as great as it is for the half-wave antenna, since (ideally), given an ideal impedance-matching network, it can receive almost as much power from that wave. Note that for a given antenna feedpoint impedance, an antenna's gain or formula_18 increases according to the "square" of formula_14, so that the effective length for an antenna relative to different wave directions follows the "square root" of the gain in those directions. But since changing the physical size of an antenna inevitably changes the impedance (often by a great factor), the effective length is not by itself a useful figure of merit for describing an antenna's peak directivity and is more of theoretical importance. In practice, the effective length of a particular antenna is often combined with its impedance and loss to become the realized effective length. Aperture efficiency. In general, the aperture of an antenna cannot be directly inferred from its physical size. However so-called "aperture antennas" such as parabolic dishes and horn antennas, have a large (relative to the wavelength) physical area formula_19 which is opaque to such radiation, essentially casting a shadow from a plane wave and thus removing an amount of power formula_20 from the original beam. That power removed from the plane wave can be actually received by the antenna (converted into electrical power), reflected or otherwise scattered, or absorbed (converted to heat). In this case the "effective aperture" formula_13 is always less than (or equal to) the area of the antenna's physical aperture formula_19, as it accounts only for the portion of that wave actually received as electrical power. An aperture antenna's "aperture efficiency" formula_21 is defined as the ratio of these two areas: formula_22 The aperture efficiency is a dimensionless parameter between 0 and 1 that measures how close the antenna comes to using all the radio wave power intersecting its physical aperture. If the aperture efficiency were 100%, then all the wave's power falling on its physical aperture would be converted to electrical power delivered to the load attached to its output terminals, so these two areas would be equal: formula_23. But due to nonuniform illumination by a parabolic dish's feed, as well as other scattering or loss mechanisms, this is not achieved in practice. Since a parabolic antenna's cost and wind load increase with the "physical" aperture size, there may be a strong motivation to reduce these (while achieving a specified antenna gain) by maximizing the aperture efficiency. Aperture efficiencies of typical aperture antennas vary from 0.35 to well over 0.70. Note that when one simply speaks of an antenna's "efficiency", what is most often meant is the "radiation efficiency", a measure which applies to all antennas (not just aperture antennas) and accounts only for the gain reduction due to losses. Outside of aperture antennas, most antennas consist of thin wires or rods with a small physical cross-sectional area (generally much smaller than formula_4) for which "aperture efficiency" is not even defined. Aperture and gain. The "directivity" of an antenna, its ability to direct radio waves preferentially in one direction or receive preferentially from a given direction, is expressed by a parameter formula_24 called "antenna gain". 
This is most commonly defined as the ratio of the power formula_25 received by that antenna from waves in a given direction to the power formula_26 that would be received by an ideal isotropic antenna, that is, a hypothetical antenna that receives power equally well from all directions. It can be seen that (for antennas at a given frequency) gain is also equal to the ratio of the apertures of these antennas: formula_27 As shown below, the aperture of a lossless isotropic antenna, which by this definition has unity gain, is formula_28 where formula_29 is the wavelength of the radio waves. Thus formula_30 So antennas with large effective apertures are considered high-gain antennas (or "beam antennas"), which have relatively small angular beam widths. As receiving antennas, they are much more sensitive to radio waves coming from a preferred direction compared to waves coming from other directions (which would be considered interference). As transmitting antennas, most of their power is radiated in a particular direction at the expense of other directions. Although antenna gain and effective aperture are functions of direction, when no direction is specified, these are understood to refer to their maximal values, that is, in the direction(s) of the antenna's intended use (also referred to as the antenna's main lobe or boresight). Friis transmission formula. The fraction of the power delivered to a transmitting antenna that is received by a receiving antenna is proportional to the product of the apertures of both the antennas and inversely proportional to the squared values of the distance between the antennas and the wavelength. This is given by a form of the Friis transmission formula: formula_31 where formula_32 is the power fed into the transmitting antenna input terminals, formula_33 is the power available at receiving antenna output terminals, formula_34 is the effective area of the receiving antenna, formula_35 is the effective area of the transmitting antenna, formula_36 is the distance between antennas (the formula is only valid for formula_36 large enough to ensure a plane wave front at the receive antenna, sufficiently approximated by formula_37, where formula_38 is the largest linear dimension of either of the antennas), formula_29 is the wavelength of the radio frequency. Derivation of antenna aperture from thermodynamic considerations. The aperture of an isotropic antenna, the basis of the definition of gain above, can be derived on the basis of consistency with thermodynamics. Suppose that an ideal isotropic antenna "A" with a driving-point impedance of "R" sits within a closed system CA in thermodynamic equilibrium at temperature "T". We connect the antenna terminals to a resistor also of resistance "R" inside a second closed system CR, also at temperature "T". In between may be inserted an arbitrary lossless electronic filter "Fν" passing only some frequency components. Each cavity is in thermal equilibrium and thus filled with black-body radiation due to temperature "T". 
The resistor, due to that temperature, will generate Johnson–Nyquist noise with an open-circuit voltage whose mean-squared spectral density is given by formula_39 where formula_40 is a quantum-mechanical factor applying to frequency "f"; at normal temperatures and electronic frequencies formula_41, but in general is given by formula_42 The amount of power supplied by an electrical source of impedance "R" into a matched load (that is, something with an impedance of "R", such as the antenna in CA) whose rms open-circuit voltage is "v"rms is given by formula_43 The mean-squared voltage formula_44 can be found by integrating the above equation for the spectral density of mean-squared noise voltage over frequencies passed by the filter "Fν". For simplicity, let us just consider "Fν" as a narrowband filter of bandwidth "B"1 around central frequency "f"1, in which case that integral simplifies as follows: formula_45 formula_46 This power due to Johnson noise from the resistor is received by the antenna, which radiates it into the closed system CA. The same antenna, being bathed in black-body radiation of temperature "T", receives a spectral radiance (power per unit area per unit frequency per unit solid angle) given by Planck's law: formula_47 using the notation formula_40 defined above. However, that radiation is unpolarized, whereas the antenna is only sensitive to one polarization, reducing it by a factor of 2. To find the total power from black-body radiation accepted by the antenna, we must integrate that quantity times the assumed cross-sectional area "A"eff of the antenna over all solid angles Ω and over all frequencies "f": formula_48 Since we have assumed an isotropic radiator, "A"eff is independent of angle, so the integration over solid angles is trivial, introducing a factor of 4π. And again we can take the simple case of a narrowband electronic filter function "Fν" which only passes power of bandwidth "B"1 around frequency "f"1. The double integral then simplifies to formula_49 where formula_50 is the free-space wavelength corresponding to the frequency "f"1. Since each system is in thermodynamic equilibrium at the same temperature, we expect no net transfer of power between the cavities. Otherwise one cavity would heat up and the other would cool down in violation of the second law of thermodynamics. Therefore, the power flows in both directions must be equal: formula_51 We can then solve for "A"eff, the cross-sectional area intercepted by the isotropic antenna: formula_52 formula_53 We thus find that for a hypothetical isotropic antenna, thermodynamics demands that the effective cross-section of the receiving antenna to have an area of λ2/4π. This result could be further generalized if we allow the integral over frequency to be more general. Then we find that "A"eff for the same antenna must vary with frequency according to that same formula, using λ = "c"/"f". Moreover, the integral over solid angle can be generalized for an antenna that is "not" isotropic (that is, any real antenna). Since the angle of arriving electromagnetic radiation only enters into "A"eff in the above integral, we arrive at the simple but powerful result that the "average" of the effective cross-section "A"eff over all angles at wavelength λ must also be given by formula_54 Although the above is sufficient proof, we can note that the condition of the antenna's impedance being "R", the same as the resistor, can also be relaxed. 
In principle, any antenna impedance (that isn't totally reactive) can be impedance-matched to the resistor "R" by inserting a suitable (lossless) matching network. Since that network is lossless, the powers "P"A and "P"R will still flow in opposite directions, even though the voltage and currents seen at the antenna and resistor's terminals will differ. The spectral density of the power flow in either direction will still be given by formula_55, and in fact this is the very thermal-noise power spectral density associated with one electromagnetic mode, be it in free-space or transmitted electrically. Since there is only a single connection to the resistor, the resistor itself represents a single mode. And an antenna, also having a single electrical connection, couples to one mode of the electromagnetic field according to its average effective cross-section of formula_56. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\theta, \\phi)" }, { "math_id": 1, "text": "\\|\\vec{S}\\|" }, { "math_id": 2, "text": "P_\\text{O}" }, { "math_id": 3, "text": "|S(\\theta, \\phi)|" }, { "math_id": 4, "text": "A_\\text{e}" }, { "math_id": 5, "text": "A_\\text{e}(\\theta, \\phi) = \\frac{P_O}{\\|\\vec{S}(\\theta, \\phi)\\|}." }, { "math_id": 6, "text": "P_\\text{R}" }, { "math_id": 7, "text": "\\eta" }, { "math_id": 8, "text": "|S(\\theta, \\phi)| = |\\vec{S} \\cdot \\hat{a}|" }, { "math_id": 9, "text": "\\hat{a}" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "P_\\text{O} = \\eta P_\\text{R} = \\eta A |\\vec{S} \\cdot \\hat{a}| = \\eta A \\|\\vec{S}(\\theta, \\phi)\\| \\cos\\theta \\cos\\phi," }, { "math_id": 12, "text": "A_\\text{e}(\\theta, \\phi) = \\eta A \\cos\\theta \\cos\\phi." }, { "math_id": 13, "text": "A_e" }, { "math_id": 14, "text": "l_\\text{eff}" }, { "math_id": 15, "text": "l_\\text{eff} = V_0 / E_\\text{s}," }, { "math_id": 16, "text": "V_0" }, { "math_id": 17, "text": "E_s" }, { "math_id": 18, "text": "A_\\text{eff}" }, { "math_id": 19, "text": "A_\\text{phys}" }, { "math_id": 20, "text": "A_\\text{phys} S" }, { "math_id": 21, "text": "e_\\text{a}" }, { "math_id": 22, "text": "e_\\text{a} = \\frac{A_e}{A_\\text{phys}}." }, { "math_id": 23, "text": "A_\\text{e} = A_\\text{phys}" }, { "math_id": 24, "text": "G" }, { "math_id": 25, "text": "P_\\text{o}" }, { "math_id": 26, "text": "P_\\text{iso}" }, { "math_id": 27, "text": "G = \\frac{P_\\text{o}}{P_\\text{iso}} = \\frac{A_\\text{e}}{A_\\text{iso}}." }, { "math_id": 28, "text": "A_\\text{iso} = \\frac{\\lambda^2}{4\\pi}," }, { "math_id": 29, "text": "\\lambda" }, { "math_id": 30, "text": "G = \\frac{A_\\text{e}}{A_\\text{iso}} = \\frac{4\\pi A_\\text{e}}{\\lambda^2}." }, { "math_id": 31, "text": "\\frac{P_\\text{r}}{P_\\text{t}} = \\frac{A_\\text{r} A_\\text{t}}{d^2 \\lambda^2}," }, { "math_id": 32, "text": "P_\\text{t}" }, { "math_id": 33, "text": "P_\\text{r}" }, { "math_id": 34, "text": "A_\\text{r}" }, { "math_id": 35, "text": "A_\\text{t}" }, { "math_id": 36, "text": "d" }, { "math_id": 37, "text": "d \\gtrsim 2a^2/\\lambda" }, { "math_id": 38, "text": "a" }, { "math_id": 39, "text": "\\overline{v_n^2} = 4 k_\\text{B} T R \\, \\eta(f)," }, { "math_id": 40, "text": "\\eta(f)" }, { "math_id": 41, "text": "\\eta(f) = 1" }, { "math_id": 42, "text": "\\eta(f) = \\frac{hf/k_\\text{B} T}{e^{hf/k_\\text{B} T} - 1}." }, { "math_id": 43, "text": "P = \\frac{\\text{v}_\\text{rms}^2}{4\\text{R}}." }, { "math_id": 44, "text": "\\overline{v_n^2} = \\text{v}_\\text{rms}^2" }, { "math_id": 45, "text": "P_R = \\frac{\\int_0^\\infty 4 k_\\text{B} T R \\, \\eta(f) \\, F_\\nu(f) \\, df}{4\\text{R}}" }, { "math_id": 46, "text": "\\qquad = \\frac{4 k_\\text{B} T R \\, \\eta(f_1) \\, B_1}{4\\text{R}} = k_\\text{B} T \\, \\eta(f_1) \\, B_1." }, { "math_id": 47, "text": "\\text{P}_{f,A,\\Omega}(f) = \\frac{2hf^3}{c^2} \\frac{1}{e^{hf / k_\\text{B} T} - 1}\n = \\frac{2f^2}{c^2} \\, k_\\text{B} T \\, \\eta(f)," }, { "math_id": 48, "text": "P_A = \\int_0^\\infty \\int_{4\\pi} \\, \\frac{P_{f,A,\\Omega}(f)}{2} A_\\text{eff}(\\Omega, f) \\, F_\\nu(f) \\, d\\Omega \\, df." }, { "math_id": 49, "text": "P_A = 2\\pi P_{f,A,\\Omega}(f) A_\\text{eff} \\, B_1 \n = \\frac{4\\pi \\, k_\\text{B} T \\, \\eta(f_1)}{\\lambda_1^2} A_\\text{eff} B_1," }, { "math_id": 50, "text": "\\lambda_1 = c/f_1" }, { "math_id": 51, "text": "P_A = P_R." 
}, { "math_id": 52, "text": "\\frac{4 \\pi \\, k_\\text{B} T \\, \\eta(f_1)}{\\lambda_1^2} A_\\text{eff} B_1\n = k_\\text{B} T \\, \\eta(f_1) \\, B_1," }, { "math_id": 53, "text": "A_\\text{eff} = \\frac{\\lambda_1^2}{4\\pi}." }, { "math_id": 54, "text": "\\overline{A_\\text{eff}} = \\frac{\\lambda^2}{4\\pi}." }, { "math_id": 55, "text": "k_\\text{B} T \\, \\eta(f)" }, { "math_id": 56, "text": "\\lambda_1^2/(4\\pi)" } ]
https://en.wikipedia.org/wiki?curid=1470583
1470603
Root test
Criterion for the convergence of an infinite series In mathematics, the root test is a criterion for the convergence (a convergence test) of an infinite series. It depends on the quantity formula_0 where formula_1 are the terms of the series, and states that the series converges absolutely if this quantity is less than one, but diverges if it is greater than one. It is particularly useful in connection with power series. Root test explanation. The root test was developed first by Augustin-Louis Cauchy who published it in his textbook Cours d'analyse (1821). Thus, it is sometimes known as the Cauchy root test or Cauchy's radical test. For a series formula_2 the root test uses the number formula_3 where "lim sup" denotes the limit superior, possibly +∞. Note that if formula_4 converges then it equals "C" and may be used in the root test instead. The root test states that: There are some series for which "C" = 1 and the series converges, e.g. formula_5, and there are others for which "C" = 1 and the series diverges, e.g. formula_6. Application to power series. This test can be used with a power series formula_7 where the coefficients "c""n", and the center "p" are complex numbers and the argument "z" is a complex variable. The terms of this series would then be given by "a""n" = "c""n"("z" − "p")"n". One then applies the root test to the "a""n" as above. Note that sometimes a series like this is called a power series "around "p"", because the radius of convergence is the radius "R" of the largest interval or disc centred at "p" such that the series will converge for all points "z" strictly in the interior (convergence on the boundary of the interval or disc generally has to be checked separately). A corollary of the root test applied to a power series is the Cauchy–Hadamard theorem: the radius of convergence is exactly formula_8 taking care that we really mean ∞ if the denominator is 0. Proof. The proof of the convergence of a series Σ"a""n" is an application of the comparison test. If for all "n" ≥ "N" ("N" some fixed natural number) we have formula_9, then formula_10. Since the geometric series formula_11 converges so does formula_12 by the comparison test. Hence Σ"a""n" converges absolutely. If formula_13 for infinitely many "n", then "a""n" fails to converge to 0, hence the series is divergent. Proof of corollary: For a power series Σ"a""n" = Σ"c""n"("z" − "p")"n", we see by the above that the series converges if there exists an "N" such that for all "n" ≥ "N" we have formula_14 equivalent to formula_15 for all "n" ≥ "N", which implies that in order for the series to converge we must have formula_16 for all sufficiently large "n". This is equivalent to saying formula_17 so formula_18 Now the only other place where convergence is possible is when formula_19 (since points &gt; 1 will diverge) and this will not change the radius of convergence since these are just the points lying on the boundary of the interval or disc, so formula_20 Examples. "Example 1:" formula_21 Applying the root test and using the fact that formula_22 formula_23 Since formula_24 the series diverges. "Example 2:" formula_25 The root test shows convergence because formula_26 This example shows how the root test is stronger than the ratio test. The ratio test is inconclusive for this series as if formula_27 is even, formula_28 while if formula_27 is odd, formula_29, therefore the limit formula_30 does not exist. Root tests hierarchy. 
Root tests hierarchy is built similarly to the ratio tests hierarchy (see Section 4.1 of ratio test, and more specifically Subsection 4.1.4 there). For a series formula_2 with positive terms we have the following tests for convergence/divergence. Let formula_31 be an integer, and let formula_32 denote the formula_33th iterate of natural logarithm, i.e. formula_34 and for any formula_35, formula_36. Suppose that formula_37, when formula_27 is large, can be presented in the form formula_38 Then the series converges if formula_39, and the series diverges if formula_40. Proof. Since formula_41, we have formula_42 From this, formula_43 From Taylor's expansion applied to the right-hand side, we obtain: formula_44 Hence, formula_45 The final result follows from the integral test for convergence. References. "This article incorporates material from Proof of Cauchy's root test on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License."
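The behaviour of the quantity C in the two worked examples above can be checked numerically. The sketch below evaluates |a_n|^(1/n) at a few large indices; this is only a finite approximation of the limit superior, so it illustrates rather than proves the convergence claims.

```python
import math

def root_test_estimate(a, n):
    """Approximate |a_n|**(1/n) at a single large index n."""
    return abs(a(n)) ** (1.0 / n)

# Example 1: a_n = 2**n / n**9   (the estimate approaches C = 2, so the series diverges)
a1 = lambda n: 2.0 ** n / n ** 9
# Example 2: a_n = 1 / 2**floor(n/2)   (the estimate approaches 1/sqrt(2) < 1, so it converges)
a2 = lambda n: 1.0 / 2.0 ** (n // 2)

for n in (10, 100, 1000):
    print(f"n = {n:5d}   example 1: {root_test_estimate(a1, n):.4f}   "
          f"example 2: {root_test_estimate(a2, n):.4f}")
print("1/sqrt(2) =", 1 / math.sqrt(2))
```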
[ { "math_id": 0, "text": "\\limsup_{n\\rightarrow\\infty}\\sqrt[n]{|a_n|}," }, { "math_id": 1, "text": "a_n" }, { "math_id": 2, "text": "\\sum_{n=1}^\\infty a_n" }, { "math_id": 3, "text": "C = \\limsup_{n\\rightarrow\\infty}\\sqrt[n]{|a_n|}," }, { "math_id": 4, "text": "\\lim_{n\\rightarrow\\infty}\\sqrt[n]{|a_n|}," }, { "math_id": 5, "text": "\\textstyle \\sum 1/{n^2}" }, { "math_id": 6, "text": "\\textstyle\\sum 1/n" }, { "math_id": 7, "text": "f(z) = \\sum_{n=0}^\\infty c_n (z-p)^n" }, { "math_id": 8, "text": "1/\\limsup_{n \\rightarrow \\infty}{\\sqrt[n]{|c_n|}}," }, { "math_id": 9, "text": "\\sqrt[n]{|a_n|} \\le k < 1" }, { "math_id": 10, "text": "|a_n| \\le k^n < 1" }, { "math_id": 11, "text": "\\sum_{n=N}^\\infty k^n" }, { "math_id": 12, "text": "\\sum_{n=N}^\\infty |a_n|" }, { "math_id": 13, "text": "\\sqrt[n]{|a_n|} > 1" }, { "math_id": 14, "text": "\\sqrt[n]{|a_n|} = \\sqrt[n]{|c_n(z - p)^n|} < 1," }, { "math_id": 15, "text": "\\sqrt[n]{|c_n|}\\cdot|z - p| < 1" }, { "math_id": 16, "text": "|z - p| < 1/\\sqrt[n]{|c_n|}" }, { "math_id": 17, "text": "|z - p| < 1/\\limsup_{n \\rightarrow \\infty}{\\sqrt[n]{|c_n|}}," }, { "math_id": 18, "text": "R \\le 1/\\limsup_{n \\rightarrow \\infty}{\\sqrt[n]{|c_n|}}." }, { "math_id": 19, "text": "\\sqrt[n]{|a_n|} = \\sqrt[n]{|c_n(z - p)^n|} = 1," }, { "math_id": 20, "text": "R = 1/\\limsup_{n \\rightarrow \\infty}{\\sqrt[n]{|c_n|}}." }, { "math_id": 21, "text": " \\sum_{i=1}^\\infty \\frac{2^i}{i^9} " }, { "math_id": 22, "text": " \\lim_{n \\to \\infty} n^{1/n}=1," }, { "math_id": 23, "text": " C = \\lim_{n \\to \\infty}\\sqrt[n]{\\left|\\frac{2^n}{n^9}\\right|}= \\lim_{n \\to \\infty}\\frac{ \\sqrt[n]{2^n} } { \\sqrt[n]{n^9} } = \\lim_{n \\to \\infty}\\frac{ 2 } {(n^{1/n})^9 } = 2 " }, { "math_id": 24, "text": " C=2>1," }, { "math_id": 25, "text": "\\sum_{n=0}^\\infty \\frac{1}{2^{\\lfloor n/2 \\rfloor}}= 1 + 1 + \\frac12 + \\frac12 + \\frac14 + \\frac14 + \\frac18 + \\frac18 + \\ldots " }, { "math_id": 26, "text": "r= \\limsup_{n\\to\\infty}\\sqrt[n]{|a_n|} = \\limsup_{n\\to\\infty}\\sqrt[2n]{|a_{2n}|} = \\limsup_{n\\to\\infty}\\sqrt[2n]{|1/2^n|}=\\frac1\\sqrt{2}<1." }, { "math_id": 27, "text": "n" }, { "math_id": 28, "text": "a_{n+1}/a_n = 1" }, { "math_id": 29, "text": "a_{n+1}/a_n = 1/2" }, { "math_id": 30, "text": "\\lim_{n\\to\\infty} |a_{n+1}/a_n|" }, { "math_id": 31, "text": "K\\geq1" }, { "math_id": 32, "text": "\\ln_{(K)}(x)" }, { "math_id": 33, "text": "K" }, { "math_id": 34, "text": "\\ln_{(1)}(x)=\\ln (x)" }, { "math_id": 35, "text": "2\\leq k\\leq K" }, { "math_id": 36, "text": "\\ln_{(k)}(x)=\\ln_{(k-1)}(\\ln (x))" }, { "math_id": 37, "text": "\\sqrt[-n]{a_n}" }, { "math_id": 38, "text": "\\sqrt[-n]{a_n}=1+\\frac{1}{n}+\\frac{1}{n}\\sum_{i=1}^{K-1}\\frac{1}{\\prod_{k=1}^i\\ln_{(k)}(n)}+\\frac{\\rho_n}{n\\prod_{k=1}^K\\ln_{(k)}(n)}." }, { "math_id": 39, "text": "\\liminf_{n\\to\\infty}\\rho_n>1" }, { "math_id": 40, "text": "\\limsup_{n\\to\\infty}\\rho_n<1" }, { "math_id": 41, "text": "\\sqrt[-n]{a_n}=\\mathrm{e}^{-\\frac{1}{n}\\ln a_n}" }, { "math_id": 42, "text": "\\mathrm{e}^{-\\frac{1}{n}\\ln a_n}=1+\\frac{1}{n}+\\frac{1}{n}\\sum_{i=1}^{K-1}\\frac{1}{\\prod_{k=1}^i\\ln_{(k)}(n)}+\\frac{\\rho_n}{n\\prod_{k=1}^K\\ln_{(k)}(n)}." }, { "math_id": 43, "text": " \\ln a_n=-n\\ln\\left(1+\\frac{1}{n}+\\frac{1}{n}\\sum_{i=1}^{K-1}\\frac{1}{\\prod_{k=1}^i\\ln_{(k)}(n)}+\\frac{\\rho_n}{n\\prod_{k=1}^K\\ln_{(k)}(n)}\\right)." 
}, { "math_id": 44, "text": " \\ln a_n=-1-\\sum_{i=1}^{K-1}\\frac{1}{\\prod_{k=1}^i\\ln_{(k)}(n)}-\\frac{\\rho_n}{\\prod_{k=1}^K\\ln_{(k)}(n)}+O\\left(\\frac{1}{n}\\right)." }, { "math_id": 45, "text": "a_n=\\begin{cases}\\mathrm{e}^{-1+O(1/n)}\\frac{1}{(n\\prod_{k=1}^{K-2}\\ln_{(k)}n)\\ln^{\\rho_n}_{(K-1)}n}, &K\\geq2,\\\\\n\\mathrm{e}^{-1+O(1/n)}\\frac{1}{n^{\\rho_n}}, &K=1.\n\\end{cases}\n" } ]
https://en.wikipedia.org/wiki?curid=1470603
1470637
Quadrupole
Arrangement that creates a quadrupole field of some sort A quadrupole or quadrapole is one of a sequence of configurations of things like electric charge or current, or gravitational mass that can exist in ideal form, but it is usually just part of a multipole expansion of a more complex structure reflecting various orders of complexity. Mathematical definition. The quadrupole moment tensor "Q" is a rank-two tensor—3×3 matrix. There are several definitions, but it is normally stated in the traceless form (i.e. formula_0). The quadrupole moment tensor has thus nine components, but because of transposition symmetry and zero-trace property, in this form only five of these are independent. For a discrete system of formula_1 point charges or masses in the case of a gravitational quadrupole, each with charge formula_2, or mass formula_3, and position formula_4 relative to the coordinate system origin, the components of the "Q" matrix are defined by: formula_5 The indices formula_6 run over the Cartesian coordinates formula_7 and formula_8 is the Kronecker delta. This means that formula_7 must be equal, up to sign, to distances from the point to formula_9 mutually perpendicular hyperplanes for the Kronecker delta to equal 1. In the non-traceless form, the quadrupole moment is sometimes stated as: formula_10 with this form seeing some usage in the literature regarding the fast multipole method. Conversion between these two forms can be easily achieved using a detracing operator. For a continuous system with charge density, or mass density, formula_11, the components of Q are defined by integral over the Cartesian space r: formula_12 As with any multipole moment, if a lower-order moment, monopole or dipole in this case, is non-zero, then the value of the quadrupole moment depends on the choice of the coordinate origin. For example, a dipole of two opposite-sign, same-strength point charges, which has no monopole moment, can have a nonzero quadrupole moment if the origin is shifted away from the center of the configuration exactly between the two charges; or the quadrupole moment can be reduced to zero with the origin at the center. In contrast, if the monopole and dipole moments vanish, but the quadrupole moment does not, e.g. four same-strength charges, arranged in a square, with alternating signs, then the quadrupole moment is coordinate independent. If each charge is the source of a "formula_13 potential" field, like the electric or gravitational field, the contribution to the field's potential from the quadrupole moment is: formula_14 where R is a vector with origin in the system of charges and R̂ is the unit vector in the direction of R. That is to say, formula_15 for formula_16 are the Cartesian components of the unit vector pointing from the origin to the field point. Here, formula_17 is a constant that depends on the type of field, and the units being used. Electric quadrupole. A simple example of an electric quadrupole consists of alternating positive and negative charges, arranged on the corners of a square. The monopole moment (just the total charge) of this arrangement is zero. Similarly, the dipole moment is zero, regardless of the coordinate origin that has been chosen. But the quadrupole moment of the arrangement in the diagram cannot be reduced to zero, regardless of where we place the coordinate origin. The electric potential of an electric charge quadrupole is given by formula_18 where formula_19 is the electric permittivity, and formula_20 follows the definition above. 
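As a numerical illustration of the definition above, the following sketch computes the monopole moment (total charge), the dipole moment, and the traceless quadrupole tensor for the four alternating charges on the corners of a square; the charge magnitude and side length are arbitrary values chosen only for the example.

```python
import numpy as np

# Four alternating point charges on the corners of a square of side a, centred at the
# origin in the xy-plane (illustrative values only).
q, a = 1.0, 2.0
charges = np.array([+q, -q, +q, -q])
positions = np.array([[+a / 2, +a / 2, 0.0],
                      [-a / 2, +a / 2, 0.0],
                      [-a / 2, -a / 2, 0.0],
                      [+a / 2, -a / 2, 0.0]])

monopole = charges.sum()
dipole = (charges[:, None] * positions).sum(axis=0)

# Traceless quadrupole tensor Q_ij = sum_l q_l (3 r_i r_j - |r|^2 delta_ij)
Q = np.zeros((3, 3))
for q_l, r in zip(charges, positions):
    Q += q_l * (3.0 * np.outer(r, r) - np.dot(r, r) * np.eye(3))

print("total charge :", monopole)          # 0 -- the charges cancel
print("dipole moment:", dipole)            # [0. 0. 0.]
print("Q =\n", Q)                          # only the xy (and yx) components are nonzero
print("trace(Q) =", np.trace(Q))           # 0 by construction
```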
Alternatively, other sources include the factor of one half in the formula_20 tensor itself, such that: formula_21 , and formula_22 which makes more explicit the connection to Legendre polynomials which result from the multipole expansion, namely here formula_23 Generalization: higher multipoles. An extreme generalization ("point octopole") would be: Eight alternating point charges at the eight corners of a parallelepiped, e.g., of a cube with edge length "a". The "octopole moment" of this arrangement would correspond, in the "octopole limit" formula_24 to a nonzero diagonal tensor of order three. Still higher multipoles, e.g. of order formula_25, would be obtained by dipolar (quadrupolar, octopolar, ...) arrangements of point dipoles (quadrupoles, octopoles, ...), not point monopoles, of lower order, e.g., formula_26. Magnetic quadrupole. All known magnetic sources give dipole fields. However, it is possible to make a magnetic quadrupole by placing four identical bar magnets perpendicular to each other such that the north pole of one is next to the south of the other. Such a configuration cancels the dipole moment and gives a quadrupole moment, and its field will decrease at large distances faster than that of a dipole. An example of a magnetic quadrupole, involving permanent magnets, is depicted on the right. Electromagnets of similar conceptual design (called quadrupole magnets) are commonly used to focus beams of charged particles in particle accelerators and beam transport lines, a method known as strong focusing. There are four steel pole tips, two opposing magnetic north poles and two opposing magnetic south poles. The steel is magnetized by a large electric current that flows in the coils of tubing wrapped around the poles. A changing magnetic quadrupole moment produces electromagnetic radiation. Gravitational quadrupole. The mass quadrupole is analogous to the electric charge quadrupole, where the charge density is simply replaced by the mass density and a negative sign is added because the masses are always positive and the force is attractive. The gravitational potential is then expressed as: formula_27 For example, because the Earth is rotating, it is oblate (flattened at the poles). This gives it a nonzero quadrupole moment. While the contribution to the Earth's gravitational field from this quadrupole is extremely important for artificial satellites close to Earth, it is less important for the Moon because the formula_28 term falls quickly. The mass quadrupole moment is also important in general relativity because, if it changes in time, it can produce gravitational radiation, similar to the electromagnetic radiation produced by oscillating electric or magnetic dipoles and higher multipoles. However, only quadrupole and higher moments can radiate gravitationally. The mass monopole represents the total mass-energy in a system, which is conserved—thus it gives off no radiation. Similarly, the mass dipole corresponds to the center of mass of a system and its first derivative represents momentum which is also a conserved quantity so the mass dipole also emits no radiation. The mass quadrupole, however, can change in time, and is the lowest-order contribution to gravitational radiation. The simplest and most important example of a radiating system is a pair of mass points with equal masses orbiting each other on a circular orbit, an approximation to e.g. special case of binary black holes. 
Since the dipole moment is constant, we can for convenience place the coordinate origin right between the two points. Then the dipole moment will be zero, and if we also scale the coordinates so that the points are at unit distance from the center, in opposite directions, the system's quadrupole moment will then simply be formula_29 where M is the mass of each point, and formula_30 are components of the (unit) position vector of one of the points. As they orbit, this x-vector will rotate, which means that it will have a non-zero first, and also a non-zero second time derivative (this is of course true regardless of the choice of the coordinate system). Therefore, the system will radiate gravitational waves. Energy lost in this way was first observed in the changing period of the Hulse–Taylor binary, a pulsar in orbit with another neutron star of similar mass. Just as electric charge and current multipoles contribute to the electromagnetic field, mass and mass-current multipoles contribute to the gravitational field in general relativity, causing the so-called gravitomagnetic effects. Changing mass-current multipoles can also give off gravitational radiation. However, contributions from the current multipoles will typically be much smaller than that of the mass quadrupole. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Q_{xx} + Q_{yy} + Q_{zz} = 0" }, { "math_id": 1, "text": "\\ell" }, { "math_id": 2, "text": "q_\\ell" }, { "math_id": 3, "text": "m_\\ell" }, { "math_id": 4, "text": "\\mathbf{r}_\\ell = \\left(r_{x\\ell}, r_{y\\ell}, r_{z\\ell}\\right)" }, { "math_id": 5, "text": "Q_{ij} = \\sum_\\ell q_\\ell\\left(3r_{i\\ell} r_{j\\ell} - \\left\\|\\mathbf{r}_\\ell \\right\\|^2\\delta_{ij}\\right)." }, { "math_id": 6, "text": "i,j" }, { "math_id": 7, "text": "x,y,z" }, { "math_id": 8, "text": "\\delta_{ij}" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "Q_{ij} = \\sum_\\ell q_\\ell r_{i\\ell} r_{j\\ell}" }, { "math_id": 11, "text": "\\rho(x, y, z)" }, { "math_id": 12, "text": "Q_{ij} = \\int\\, \\rho(\\mathbf{r})\\left(3r_i r_j - \\left\\|\\mathbf{r}\\right\\|^2\\delta_{ij}\\right)\\, d^3\\mathbf{r}" }, { "math_id": 13, "text": "1/r" }, { "math_id": 14, "text": "V_\\text{q}(\\mathbf{R}) = \\frac{k}{|\\mathbf{R}|^3} \\sum_{i,j} \\frac{1}{2} Q_{ij}\\, \\hat{R}_i \\hat{R}_j\\ ," }, { "math_id": 15, "text": "\\hat{R}_i" }, { "math_id": 16, "text": "i=x,y,z" }, { "math_id": 17, "text": "k" }, { "math_id": 18, "text": "V_\\text{q}(\\mathbf{R}) = \\frac{1}{4\\pi \\varepsilon_0} \\frac{1}{|\\mathbf{R}|^3} \\sum_{i,j} \\frac{1}{2} Q_{ij}\\, \\hat{R}_i \\hat{R}_j\\ ," }, { "math_id": 19, "text": "\\varepsilon_0" }, { "math_id": 20, "text": "Q_{ij}" }, { "math_id": 21, "text": "Q_{ij} = \\int\\, \\rho(\\mathbf{r})\\left(\\frac{3}{2}r_i r_j - \\frac{1}{2}\\left\\|\\mathbf{r}\\right\\|^2\\delta_{ij}\\right)\\, d^3\\mathbf{r}" }, { "math_id": 22, "text": "V_\\text{q}(\\mathbf{R}) = \\frac{1}{4\\pi \\varepsilon_0} \\frac{1}{|\\mathbf{R}|^3} \\sum_{i,j} Q_{ij}\\, \\hat{R}_i \\hat{R}_j\\ ," }, { "math_id": 23, "text": "P_2(x) = \\frac{3}{2}x^2 - \\frac{1}{2}." }, { "math_id": 24, "text": "\\lim_{a\\to 0} {a^3 \\cdot Q} \\to \\text{const. } " }, { "math_id": 25, "text": "2^{\\ell}" }, { "math_id": 26, "text": "2^{\\ell-1}" }, { "math_id": 27, "text": "V_\\text{q}(\\mathbf{R}) = -\\frac{G}{2|\\mathbf{R}|^3} \\sum_{i,j} Q_{ij}\\, \\hat{R}_i \\hat{R}_j\\ ." }, { "math_id": 28, "text": "{1}/{|\\mathbf{R}|^3}" }, { "math_id": 29, "text": "Q_{ij} = M\\left(3x_i x_j - |\\mathbf{x}|^2 \\delta_{ij}\\right) " }, { "math_id": 30, "text": "x_i" } ]
https://en.wikipedia.org/wiki?curid=1470637
1470657
Linear discriminant analysis
Method used in statistics, pattern recognition, and other fields Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification. LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable ("i.e." the class label). Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method. LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made. LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis. Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure. In simple terms, discriminant function analysis is classification - the act of distributing things into groups, classes or categories of the same type. History. The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936. It is different from an ANOVA or MANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership. LDA for two classes. Consider a set of observations formula_0 (also called features, attributes, variables or measurements) for each sample of an object or event with known class formula_1. This set of samples is called the training set in a supervised learning context. The classification problem is then to find a good predictor for the class formula_1 of any sample of the same distribution (not necessarily from the training set) given only an observation formula_2. 
LDA approaches the problem by assuming that the conditional probability density functions formula_3 and formula_4 are both the normal distribution with mean and covariance parameters formula_5 and formula_6, respectively. Under this assumption, the Bayes-optimal solution is to predict points as being from the second class if the log of the likelihood ratios is bigger than some threshold T, so that: formula_7 Without any further assumptions, the resulting classifier is referred to as quadratic discriminant analysis (QDA). LDA instead makes the additional simplifying homoscedasticity assumption ("i.e." that the class covariances are identical, so formula_8) and that the covariances have full rank. In this case, several terms cancel: formula_9 formula_10 because formula_11 is Hermitian and the above decision criterion becomes a threshold on the dot product formula_12 for some threshold constant "c", where formula_13 formula_14 This means that the criterion of an input formula_15 being in a class formula_1 is purely a function of this linear combination of the known observations. It is often useful to see this conclusion in geometrical terms: the criterion of an input formula_15 being in a class formula_1 is purely a function of projection of multidimensional-space point formula_15 onto vector formula_16 (thus, we only consider its direction). In other words, the observation belongs to formula_1 if corresponding formula_15 is located on a certain side of a hyperplane perpendicular to formula_16. The location of the plane is defined by the threshold formula_17. Assumptions. The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables. It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions, and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated). Discriminant functions. Discriminant analysis works by creating one or more linear combinations of predictors, creating a new latent variable for each function. These functions are called discriminant functions. The number of functions possible is either formula_18 where formula_19 = number of groups, or formula_20 (the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function maximizes differences on that function, but also must not be correlated with the previous function. This continues with subsequent functions with the requirement that the new function not be correlated with any of the previous functions. Given group formula_21, with formula_22 sets of sample space, there is a discriminant rule such that if formula_23, then formula_24. Discriminant analysis then, finds “good” regions of formula_22 to minimize classification error, therefore leading to a high percent correct classified in the classification table. Each function is given a discriminant score to determine how well it predicts group placement. Eigenvalues. An eigenvalue in discriminant analysis is the characteristic root of each function. It is an indication of how well that function differentiates the groups, where the larger the eigenvalue, the better the function differentiates. This however, should be interpreted with caution, as eigenvalues have no upper limit. 
The eigenvalue can be viewed as a ratio of "SS"between and "SS"within as in ANOVA when the dependent variable is the discriminant function, and the groups are the levels of the IV. This means that the largest eigenvalue is associated with the first function, the second largest with the second, etc.. Effect size. Some suggest the use of eigenvalues as effect size measures, however, this is generally not supported. Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of "SS"between and "SS"total. It is the correlation between groups and the function. Another popular measure of effect size is the percent of variance for each function. This is calculated by: ("λx/Σλi") X 100 where "λx" is the eigenvalue for the function and Σ"λi" is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others. Percent correctly classified can also be analyzed as an effect size. The kappa value can describe this while correcting for chance agreement. Canonical discriminant analysis for "k" classes. Canonical discriminant analysis (CDA) finds axes ("k" − 1 canonical coordinates, "k" being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal "k" − 1 space through the "n"-dimensional cloud of data that best separates (the projections in that space of) the "k" groups. See “Multiclass LDA” for details below. Fisher's linear discriminant. The terms "Fisher's linear discriminant" and "LDA" are often used interchangeably, although Fisher's original article actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances. Suppose two classes of observations have means formula_28 and covariances formula_29. Then the linear combination of features formula_30 will have means formula_31 and variances formula_32 for formula_33. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes: formula_34 This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs when formula_35 When the assumptions of LDA are satisfied, the above equation is equivalent to LDA. Be sure to note that the vector formula_36 is the normal to the discriminant hyperplane. As an example, in a two dimensional problem, the line that best divides the two groups is perpendicular to formula_36. Generally, the data points to be discriminated are projected onto formula_36; then the threshold that best separates the data is chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold. However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane between projections of the two means, formula_37 and formula_38. In this case the parameter c in threshold condition formula_39 can be found explicitly: formula_40. Otsu's method is related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance within/between grayscales assigned to black and white pixel classes. Multiclass LDA. 
In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability. This generalization is due to C. R. Rao. Suppose that each of C classes has a mean formula_41 and the same covariance formula_42. Then the scatter between class variability may be defined by the sample covariance of the class means formula_43 where formula_44 is the mean of the class means. The class separation in a direction formula_45 in this case will be given by formula_46 This means that when formula_45 is an eigenvector of formula_47 the separation will be equal to the corresponding eigenvalue. If formula_47 is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to the "C" − 1 largest eigenvalues (since formula_48 is of rank "C" − 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section. If classification is required, instead of dimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest" where the points from one class are put in one group, and everything else in the other, and then LDA applied. This will result in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving "C"("C" − 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification. Incremental LDA. The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an "incremental LDA algorithm", and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features. In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and the Hebbian learning rules. Later, Aliyari "et a"l. derived fast incremental algorithms to update the LDA features by observing the new samples. Practical use. In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. 
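As a sketch of this estimation step, the code below fits a two-class LDA on synthetic data (all class parameters and sample sizes are made up for illustration): it estimates the class means and a pooled covariance from a training set, forms the discriminant direction and threshold given earlier, and classifies new points by comparing the dot product with the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two Gaussian classes sharing one covariance (illustrative only)
mu0_true, mu1_true = np.array([0.0, 0.0]), np.array([2.0, 1.0])
cov_true = np.array([[1.0, 0.3],
                     [0.3, 0.7]])
X0 = rng.multivariate_normal(mu0_true, cov_true, size=200)
X1 = rng.multivariate_normal(mu1_true, cov_true, size=200)

# Estimate the class means and the pooled (shared) covariance from the training set
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
n0, n1 = len(X0), len(X1)
Sigma = ((n0 - 1) * np.cov(X0, rowvar=False) +
         (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)

# Discriminant direction and threshold:  w = Sigma^-1 (mu1 - mu0),  c = w.(mu0 + mu1)/2
w = np.linalg.solve(Sigma, mu1 - mu0)
c = 0.5 * w @ (mu0 + mu1)

def classify(x):
    """Assign class 1 if w.x > c, otherwise class 0."""
    return int(w @ x > c)

test_points = np.vstack([X0[:3], X1[:3]])
print("w =", np.round(w, 3), "  c =", np.round(c, 3))
print("predictions:", [classify(x) for x in test_points])
```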
Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct. Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class. In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use a pseudo inverse instead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned by formula_48. Another strategy to deal with small sample size is to use a shrinkage estimator of the covariance matrix, which can be expressed mathematically as formula_49 where formula_50 is the identity matrix, and formula_51 is the "shrinkage intensity" or "regularisation parameter". This leads to the framework of regularized discriminant analysis or shrinkage discriminant analysis. Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is the kernel Fisher discriminant. LDA can be generalized to multiple discriminant analysis, where "c" becomes a categorical variable with "N" possible states, instead of only two. Analogously, if the class-conditional densities formula_52 are normal with shared covariances, the sufficient statistic for formula_53 are the values of "N" projections, which are the subspace spanned by the "N" means, affine projected by the inverse covariance matrix. These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix. See “Multiclass LDA” above for details. Applications. In addition to the examples given below, LDA is applied in positioning and product management. Bankruptcy prediction. In bankruptcy prediction based on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy vs. survived. Despite limitations including known nonconformance of accounting ratios to the normal distribution assumptions of LDA, Edward Altman's 1968 model is still a leading model in practical applications. Face recognition. In computerised face recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called "Fisher faces", while those obtained using the related principal component analysis are called "eigenfaces". Marketing. 
In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing proceeds through the usual stages of formulating the problem and gathering data, estimating the discriminant function coefficients, assessing their significance, interpreting the results, and validating the analysis. Biomedical studies. The main application of discriminant analysis in medicine is the assessment of the severity of a patient's condition and the prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease – mild, moderate, and severe form. Then results of clinical and laboratory analyses are studied to reveal statistically different variables in these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, Linear Discriminant Analysis (LDA) can help select more discriminative samples for data augmentation, improving classification performance. In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra, to detect the animal source of "Escherichia coli" by studying its virulence factors, etc. Earth science. This method can be used to separate alteration zones. For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively. Comparison to logistic regression. Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions. Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis’ assumptions are met, it is more powerful than logistic regression. Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met. Linear discriminant in high dimensions. Geometric anomalies in higher dimensions lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier. An important case of these "blessing of dimensionality" phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples. These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions. In particular, such theorems are proven for log-concave distributions including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures) and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for artificial intelligence systems in high dimension. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
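To complement the two-class sketch above, the following illustration (again with made-up data) carries out the multiclass construction from the "Multiclass LDA" section: it forms the between-class scatter of the class means and takes the leading eigenvectors of the matrix described there, which span the discriminant subspace of dimension at most C − 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: three classes in four dimensions with a shared covariance (illustration only)
true_means = [np.array([0.0, 0.0, 0.0, 0.0]),
              np.array([3.0, 1.0, 0.0, 0.0]),
              np.array([0.0, 2.0, 2.0, 0.0])]
shared_cov = np.eye(4)
classes = [rng.multivariate_normal(m, shared_cov, size=100) for m in true_means]

# Pooled within-class covariance and between-class scatter of the class means
class_means = np.array([X.mean(axis=0) for X in classes])
grand_mean = class_means.mean(axis=0)
Sigma = sum(np.cov(X, rowvar=False) for X in classes) / len(classes)
Sigma_b = sum(np.outer(m - grand_mean, m - grand_mean) for m in class_means) / len(classes)

# Directions of maximal class separation: eigenvectors of Sigma^-1 Sigma_b
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sigma, Sigma_b))
order = np.argsort(eigvals.real)[::-1]
W = eigvecs.real[:, order[:len(classes) - 1]]   # keep the C - 1 leading eigenvectors

print("separation eigenvalues:", np.round(eigvals.real[order], 3))  # the trailing ones are ~0
print("discriminant subspace basis (4 x 2):\n", np.round(W, 3))
```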
[ { "math_id": 0, "text": " { \\vec x } " }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": " \\vec x " }, { "math_id": 3, "text": "p(\\vec x|y=0)" }, { "math_id": 4, "text": "p(\\vec x|y=1)" }, { "math_id": 5, "text": "\\left(\\vec \\mu_0, \\Sigma_0\\right)" }, { "math_id": 6, "text": "\\left(\\vec \\mu_1, \\Sigma_1\\right)" }, { "math_id": 7, "text": " \\frac{1}{2} (\\vec x - \\vec \\mu_0)^\\mathrm{T} \\Sigma_0^{-1} ( \\vec x - \\vec \\mu_0) + \\frac{1}{2} \\ln|\\Sigma_0| - \\frac{1}{2} (\\vec x - \\vec \\mu_1)^\\mathrm{T} \\Sigma_1^{-1} ( \\vec x - \\vec \\mu_1) - \\frac{1}{2} \\ln|\\Sigma_1| \\ > \\ T " }, { "math_id": 8, "text": "\\Sigma_0 = \\Sigma_1 = \\Sigma" }, { "math_id": 9, "text": " {\\vec x}^\\mathrm{T} \\Sigma_0^{-1} \\vec x = {\\vec x}^\\mathrm{T} \\Sigma_1^{-1} \\vec x" }, { "math_id": 10, "text": "{\\vec x}^\\mathrm{T} {\\Sigma_i}^{-1} \\vec{\\mu}_i = {\\vec{\\mu}_i}^\\mathrm{T}{\\Sigma_i}^{-1} \\vec x" }, { "math_id": 11, "text": "\\Sigma_i" }, { "math_id": 12, "text": " {\\vec w}^\\mathrm{T} \\vec x > c " }, { "math_id": 13, "text": "\\vec w = \\Sigma^{-1} (\\vec \\mu_1 - \\vec \\mu_0)" }, { "math_id": 14, "text": " c = \\frac12 \\, {\\vec w}^\\mathrm{T} (\\vec \\mu_1 + \\vec \\mu_0)" }, { "math_id": 15, "text": " \\vec{ x }" }, { "math_id": 16, "text": " \\vec{ w }" }, { "math_id": 17, "text": "c" }, { "math_id": 18, "text": "N_g-1" }, { "math_id": 19, "text": "N_g" }, { "math_id": 20, "text": "p" }, { "math_id": 21, "text": "j" }, { "math_id": 22, "text": "\\mathbb{R}_j" }, { "math_id": 23, "text": "x \\in\\mathbb{R}_j" }, { "math_id": 24, "text": "x\\in j" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": "\\pi_i f_i(x)" }, { "math_id": 27, "text": "f_i(x)" }, { "math_id": 28, "text": " \\vec \\mu_0, \\vec \\mu_1 " }, { "math_id": 29, "text": "\\Sigma_0,\\Sigma_1 " }, { "math_id": 30, "text": " {\\vec w}^\\mathrm{T} \\vec x " }, { "math_id": 31, "text": " {\\vec w}^\\mathrm{T} \\vec \\mu_i " }, { "math_id": 32, "text": " {\\vec w}^\\mathrm{T} \\Sigma_i \\vec w " }, { "math_id": 33, "text": " i=0,1 " }, { "math_id": 34, "text": "S=\\frac{\\sigma_{\\text{between}}^2}{\\sigma_{\\text{within}}^2}= \\frac{(\\vec w \\cdot \\vec \\mu_1 - \\vec w \\cdot \\vec \\mu_0)^2}{{\\vec w}^\\mathrm{T} \\Sigma_1 \\vec w + {\\vec w}^\\mathrm{T} \\Sigma_0 \\vec w} = \\frac{(\\vec w \\cdot (\\vec \\mu_1 - \\vec \\mu_0))^2}{{\\vec w}^\\mathrm{T} (\\Sigma_0+\\Sigma_1) \\vec w} " }, { "math_id": 35, "text": " \\vec w \\propto (\\Sigma_0+\\Sigma_1)^{-1}(\\vec \\mu_1 - \\vec \\mu_0) " }, { "math_id": 36, "text": "\\vec w" }, { "math_id": 37, "text": "\\vec w \\cdot \\vec \\mu_0 " }, { "math_id": 38, "text": "\\vec w \\cdot \\vec \\mu_1 " }, { "math_id": 39, "text": " \\vec w \\cdot \\vec x > c " }, { "math_id": 40, "text": " c = \\vec w \\cdot \\frac12 (\\vec \\mu_0 + \\vec \\mu_1) = \\frac{1}{2} \\vec\\mu_1^\\mathrm{T} \\Sigma^{-1}_{1} \\vec\\mu_1 - \\frac{1}{2} \\vec\\mu_0^\\mathrm{T} \\Sigma^{-1}_{0} \\vec\\mu_0 " }, { "math_id": 41, "text": " \\mu_i " }, { "math_id": 42, "text": " \\Sigma " }, { "math_id": 43, "text": " \\Sigma_b = \\frac{1}{C} \\sum_{i=1}^C (\\mu_i-\\mu) (\\mu_i-\\mu)^\\mathrm{T} " }, { "math_id": 44, "text": " \\mu " }, { "math_id": 45, "text": " \\vec w " }, { "math_id": 46, "text": " S = \\frac{{\\vec w}^\\mathrm{T} \\Sigma_b \\vec w}{{\\vec w}^\\mathrm{T} \\Sigma \\vec w} " }, { "math_id": 47, "text": " \\Sigma^{-1} \\Sigma_b " }, { "math_id": 48, "text": " \\Sigma_b " }, { "math_id": 49, "text": " \\Sigma = (1-\\lambda) \\Sigma+\\lambda I\\," }, { 
"math_id": 50, "text": " I " }, { "math_id": 51, "text": " \\lambda " }, { "math_id": 52, "text": "p(\\vec x\\mid c=i)" }, { "math_id": 53, "text": "P(c\\mid\\vec x)" } ]
https://en.wikipedia.org/wiki?curid=1470657
1470767
Linear complementarity problem
In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968. Formulation. Given a real matrix "M" and vector "q", the linear complementarity problem LCP("q", "M") seeks vectors "z" and "w" which satisfy the following constraints: formula_0 (that is, each component of these two vectors is non-negative); formula_1 or, equivalently, formula_2 (this is the complementarity condition, since it implies that, for all formula_3, at most one of formula_4 and formula_5 can be positive); and formula_6. A sufficient condition for existence and uniqueness of a solution to this problem is that "M" be symmetric positive-definite. If "M" is such that LCP("q", "M") has a solution for every "q", then "M" is a Q-matrix. If "M" is such that LCP("q", "M") has a unique solution for every "q", then "M" is a P-matrix. Both of these characterizations are sufficient and necessary. The vector "w" is a slack variable, and so is generally discarded after "z" is found. As such, the problem can also be formulated as: formula_7, formula_8, and formula_9 (the complementarity condition). Convex quadratic-minimization: Minimum conditions. Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function formula_10 subject to the constraints formula_11 formula_8 These constraints ensure that "f" is always non-negative. The minimum of "f" is 0 at "z" if and only if "z" solves the linear complementarity problem. If "M" is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice. Also, a quadratic-programming problem stated as minimize formula_12 subject to formula_13 as well as formula_14 with "Q" symmetric is the same as solving the LCP with formula_15 This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as: formula_16 with "v" the Lagrange multipliers on the non-negativity constraints, "λ" the multipliers on the inequality constraints, and "s" the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables ("x", "s") with its set of KKT vectors (optimal Lagrange multipliers) being ("v", "λ"). In that case, formula_17 If the non-negativity constraint on the "x" is relaxed, the dimensionality of the LCP problem can be reduced to the number of the inequalities, as long as "Q" is non-singular (which is guaranteed if it is positive definite). The multipliers "v" are no longer present, and the first KKT conditions can be rewritten as: formula_18 or: formula_19 pre-multiplying the two sides by "A" and subtracting "b", we obtain: formula_20 The left side, due to the second KKT condition, is "s". Substituting and reordering: formula_21 Calling now formula_22 we have an LCP, due to the relation of complementarity between the slack variables "s" and their Lagrange multipliers "λ". Once we solve it, we may obtain the value of "x" from "λ" through the first KKT condition. Finally, it is also possible to handle additional equality constraints: formula_23 This introduces a vector of Lagrange multipliers "μ", with the same dimension as formula_24.
It is easy to verify that the "M" and "Q" for the LCP system formula_25 are now expressed as: formula_26 From "λ" we can now recover the values of both "x" and the Lagrange multiplier of equalities "μ": formula_27 In fact, most QP solvers work on the LCP formulation, including the interior point method, principal / complementarity pivoting, and active set methods. LCP problems can also be solved by the criss-cross algorithm; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive. Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
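The block construction of M and q from a quadratic program, and the complementarity conditions themselves, can be illustrated on a tiny instance. The sketch below uses invented data and solves the resulting LCP by brute-force enumeration of complementary index sets; this is only a toy approach for very small problems, not the pivoting or interior-point methods mentioned above.

```python
import numpy as np
from itertools import combinations

def solve_lcp_bruteforce(M, q, tol=1e-9):
    """Find z >= 0 with w = M z + q >= 0 and z.w = 0 by trying every complementary set."""
    n = len(q)
    for k in range(n + 1):
        for subset in combinations(range(n), k):   # indices allowed to have z_i > 0
            S = list(subset)
            z = np.zeros(n)
            if S:
                try:
                    z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if (z >= -tol).all() and (w >= -tol).all():
                return z, w
    return None

# Invented convex QP:  minimize c^T x + 0.5 x^T Q x  subject to  A x >= b,  x >= 0
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
A = np.array([[-1.0, -1.0]])            # encodes x1 + x2 <= 2
b = np.array([-2.0])

# LCP data from the block construction:  M = [[Q, -A^T], [A, 0]],  q = (c, -b)
M = np.block([[Q, -A.T], [A, np.zeros((1, 1))]])
q = np.concatenate([c, -b])

z, w = solve_lcp_bruteforce(M, q)
x, lam = z[:2], z[2:]
print("x =", x, "  inequality multiplier =", lam)
print("complementarity z.w =", z @ w)   # essentially zero
```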
[ { "math_id": 0, "text": "w, z \\geqslant 0," }, { "math_id": 1, "text": "z^Tw = 0" }, { "math_id": 2, "text": "\\sum\\nolimits_i w_i z_i = 0." }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "w_i" }, { "math_id": 5, "text": "z_i" }, { "math_id": 6, "text": "w = Mz + q" }, { "math_id": 7, "text": "Mz+q \\geqslant 0" }, { "math_id": 8, "text": "z \\geqslant 0" }, { "math_id": 9, "text": "z^{\\mathrm{T}}(Mz+q) = 0" }, { "math_id": 10, "text": "f(z) = z^T(Mz+q)" }, { "math_id": 11, "text": "{Mz}+q \\geqslant 0" }, { "math_id": 12, "text": "f(x)=c^Tx+\\tfrac{1}{2} x^T Qx" }, { "math_id": 13, "text": "Ax \\geqslant b" }, { "math_id": 14, "text": "x \\geqslant 0" }, { "math_id": 15, "text": "q = \\begin{bmatrix} c \\\\ -b \\end{bmatrix}, \\qquad M = \\begin{bmatrix} Q & -A^T \\\\ A & 0 \\end{bmatrix}" }, { "math_id": 16, "text": "\\begin{cases} \nv = Q x - A^T {\\lambda} + c \\\\ \ns = A x - b \\\\ \nx, {\\lambda}, v, s \\geqslant 0 \\\\ \nx^{T} v+ {\\lambda}^T s = 0\n\\end{cases}" }, { "math_id": 17, "text": "z = \\begin{bmatrix} x \\\\ \\lambda \\end{bmatrix}, \\qquad w = \\begin{bmatrix} v \\\\ s \\end{bmatrix}" }, { "math_id": 18, "text": "Q x = A^{T} {\\lambda} - c" }, { "math_id": 19, "text": " x = Q^{-1}(A^{T} {\\lambda} - c)" }, { "math_id": 20, "text": " A x - b = A Q^{-1}(A^{T} {\\lambda} - c) -b \\," }, { "math_id": 21, "text": " s = (A Q^{-1} A^{T}) {\\lambda} + (- A Q^{-1} c - b )\\," }, { "math_id": 22, "text": "\\begin{align}\nM &:= (A Q^{-1} A^{T}) \\\\\nq &:= (- A Q^{-1} c - b)\n\\end{align}" }, { "math_id": 23, "text": "A_{eq}x = b_{eq}" }, { "math_id": 24, "text": "b_{eq}" }, { "math_id": 25, "text": " s = M {\\lambda} + Q" }, { "math_id": 26, "text": "\\begin{align}\nM &:= \\begin{bmatrix} A & 0 \\end{bmatrix} \\begin{bmatrix} Q & A_{eq}^{T} \\\\ -A_{eq} & 0 \\end{bmatrix}^{-1} \\begin{bmatrix} A^T \\\\ 0 \\end{bmatrix} \\\\\nq &:= - \\begin{bmatrix} A & 0 \\end{bmatrix} \\begin{bmatrix} Q & A_{eq}^{T} \\\\ -A_{eq} & 0 \\end{bmatrix}^{-1} \\begin{bmatrix} c \\\\ b_{eq} \\end{bmatrix} - b\n\\end{align}" }, { "math_id": 27, "text": "\\begin{bmatrix} x \\\\ \\mu \\end{bmatrix} = \\begin{bmatrix} Q & A_{eq}^{T} \\\\ -A_{eq} & 0 \\end{bmatrix}^{-1} \\begin{bmatrix} A^T \\lambda - c \\\\ -b_{eq} \\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=1470767
14708063
Enthalpy–entropy compensation
Concept in thermodynamics In thermodynamics, enthalpy–entropy compensation is a specific example of the compensation effect. The compensation effect refers to the behavior of a series of closely related chemical reactions (e.g., reactants in different solvents or reactants differing only in a single substituent), which exhibit a linear relationship between one of the following kinetic or thermodynamic parameters for describing the reactions: (i) the logarithm of the pre-exponential factors and the activation energies, (ii) the enthalpies and entropies of activation, or (iii) the enthalpy and entropy changes of the reactions. When the activation energy is varied in the first instance, we may observe a related change in pre-exponential factors. An increase in A tends to compensate for an increase in Ea,i, which is why we call this phenomenon a compensation effect. Similarly, for the second and third instances, in accordance with the Gibbs free energy equation, from which these relations can be derived, Δ"H" scales proportionately with Δ"S". The enthalpy and entropy compensate for each other because of their opposite algebraic signs in the Gibbs equation. A correlation between enthalpy and entropy has been observed for a wide variety of reactions. The correlation is significant because, for linear free-energy relationships (LFERs) to hold, one of three conditions for the relationship between enthalpy and entropy for a series of reactions must be met, with the most commonly encountered scenario being that which describes enthalpy–entropy compensation. The empirical relations above were noticed by several investigators beginning in the 1920s, since which the compensatory effects they govern have been identified under different aliases. Related terms. Many of the more popular terms used in discussing the compensation effect are specific to their field or phenomena. In these contexts, the unambiguous terms are preferred. The misapplication of and frequent crosstalk between fields on this matter have, however, often led to the use of inappropriate terms and a confusing picture. For the purposes of this entry, different terms may refer to what may seem to be the same effect, because either a term is being used as a shorthand (isokinetic and isoequilibrium relationships are different, yet are often grouped together synecdochically as isokinetic relationships for the sake of brevity) or it is the correct term in context. This section should aid in resolving any uncertainties. ("see" Criticism "section for more on the variety of terms") compensation effect/rule : umbrella term for the observed linear relationship between: (i) the logarithm of the preexponential factors and the activation energies, (ii) enthalpies and entropies of activation, or (iii) between the enthalpy and entropy changes of a series of similar reactions. enthalpy-entropy compensation : the linear relationship between either the enthalpies and entropies of activation or the enthalpy and entropy changes of a series of similar reactions. isoequilibrium relation (IER), isoequilibrium effect : On a Van 't Hoff plot, there exists a common intersection point describing the thermodynamics of the reactions. At the isoequilibrium temperature β, all the reactions in the series should have the same equilibrium constant (Ki) formula_3 isokinetic relation (IKR), isokinetic effect : On an Arrhenius plot, there exists a common intersection point describing the kinetics of the reactions.
At the isokinetic temperature β, all the reactions in the series should have the same rate constant (ki) formula_4 isoequilibrium temperature : used for thermodynamic LFERs; refers to β in the equations where it possesses dimensions of temperature isokinetic temperature : used for kinetic LFERs; refers to β in the equations where it possesses dimensions of temperature kinetic compensation : an increase in the preexponential factors tends to compensate for the increase in activation energy: formula_5 Meyer-Neldel rule (MNR) : primarily used in materials science and condensed matter physics; the MNR is often stated as the plot of the logarithm of the preexponential factor against activation energy is linear: formula_6 where ln "σ"0 is the preexponential factor, Ea is the activation energy, σ is the conductivity, "k"B is Boltzmann's constant, and T is temperature. Mathematics. Enthalpy–entropy compensation as a requirement for LFERs. Linear free-energy relationships (LFERs) exist when the relative influence of changing substituents on one reactant is similar to the effect on another reactant, and include linear Hammett plots, Swain–Scott plots, and Brønsted plots. LFERs are not always found to hold, and to see when one can expect them to, we examine the relationship between the free-energy differences for the two reactions under comparison. The extent to which the free energy of the new reaction is changed, via a change in substituent, is proportional to the extent to which the reference reaction was changed by the same substitution. A ratio of the free-energy differences is the reaction quotient or constant Q. formula_7 The above equation may be rewritten as the difference (δ) in free-energy changes (Δ"G"): formula_8 Substituting the Gibbs free-energy equation (Δ"G" = Δ"H" – "T"Δ"S") into the equation above yields a form that makes clear the requirements for LFERs to hold. formula_9 One should expect LFERs to hold if one of three conditions is met: (1) the enthalpy changes are the same for the reactions under comparison and the entropy changes are proportional to one another, (2) the entropy changes are the same and the enthalpy changes are proportional, or (3) the enthalpy and entropy changes are linearly related to each other. The third condition describes the enthalpy–entropy effect and is the condition most commonly met. Isokinetic and isoequilibrium temperature. For most reactions the activation enthalpy and activation entropy are unknown, but, if these parameters have been measured and a linear relationship is found to exist (meaning an LFER was found to hold), the following equation describes the relationship between Δ"H" and Δ"S": formula_10 Inserting the Gibbs free-energy equation and combining like terms produces the following equation: formula_11 where Δ"H" is constant regardless of substituents and Δ"S"‡ is different for each substituent. In this form, β has the dimension of temperature and is referred to as the isokinetic (or isoequilibrium) temperature. Alternatively, the isokinetic (or isoequilibrium) temperature may be reached by observing that, if a linear relationship is found, then the difference between the Δ"H"‡'s for any closely related reactants will be related to the difference between Δ"S"‡'s for the same reactants: formula_12 Using the Gibbs free-energy equation, formula_13 In both forms, it is apparent that the difference in Gibbs free-energies of activation ("δ"Δ"G"‡) will be zero when the temperature is at the isokinetic (or isoequilibrium) temperature and hence identical for all members of the reaction set at that temperature.
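As a sketch of how β is extracted in practice, the code below fits a straight line to a set of activation enthalpy/entropy pairs; the slope of ΔH‡ against ΔS‡ is the isokinetic temperature, and the spread of ΔG‡ across the series nearly collapses at that temperature. All numerical values are invented for illustration and constructed around β = 350 K.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented activation parameters for a series of related reactions (illustration only),
# constructed to satisfy dH ~ dH0 + beta * dS with beta = 350 K plus a little scatter.
beta_true = 350.0                                            # K
dS = rng.uniform(-120.0, -40.0, 8)                           # J/(mol K)
dH = 80_000.0 + beta_true * dS + rng.normal(0.0, 200.0, 8)   # J/mol

# Linear fit dH = intercept + beta * dS; the slope is the isokinetic temperature
beta_fit, intercept = np.polyfit(dS, dH, 1)
print(f"fitted isokinetic temperature: {beta_fit:.0f} K")

# Near T = beta the differences in dG = dH - T dS across the series almost vanish
for T in (250.0, beta_fit, 450.0):
    dG = dH - T * dS
    print(f"T = {T:6.1f} K   spread of dG over the series = {dG.max() - dG.min():8.1f} J/mol")
```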
Beginning with the Arrhenius equation and assuming kinetic compensation (obeying ln "A" = ln "A"0 + "α"Δ"E"), the isokinetic temperature may also be given by formula_14 The reactions will have approximately the same value of their rate constant k at an isokinetic temperature. History. In a 1925 paper, F.H. Constable described the linear relationship observed for the reaction parameters of the catalytic dehydrogenation of primary alcohols with copper-chromium oxide. Phenomenon explained. The foundations of the compensation effect are still not fully understood though many theories have been brought forward. Compensation of Arrhenius processes in solid-state materials and devices can be explained quite generally from the statistical physics of aggregating fundamental excitations from the thermal bath to surmount a barrier whose activation energy is significantly larger than the characteristic energy of the excitations used (e.g., optical phonons). To rationalize the occurrences of enthalpy-entropy compensation in protein folding and enzymatic reactions, a Carnot-cycle model in which a micro-phase transition plays a crucial role was proposed. In drug receptor binding, it has been suggested that enthalpy-entropy compensation arises due to an intrinsic property of hydrogen bonds. A mechanical basis for solvent-induced enthalpy-entropy compensation has been put forward and tested at the dilute gas limit. There is some evidence of enthalpy-entropy compensation in biochemical or metabolic networks particularly in the context of intermediate-free coupled reactions or processes. However, a single general statistical mechanical explanation applicable to all compensated processes has not yet been developed. Criticism. Kinetic relations have been observed in many systems and, since their conception, have gone by many terms, among which are the Meyer-Neldel effect or rule, the Barclay-Butler rule, the theta rule, and the Smith-Topley effect. Generally, chemists will talk about the isokinetic relation (IKR), from the importance of the isokinetic (or isoequilibrium) temperature, condensed matter physicists and material scientists use the Meyer-Neldel rule, and biologists will use the compensation effect or rule. An interesting homework problem appears following Chapter 7: Structure-Reactivity Relationships in Kenneth Connors's textbook "Chemical Kinetics: The Study of Reaction Rates": From the last four digits of the office telephone numbers of the faculty in your department, systematically construct pairs of "rate constants" as two-digit numbers times 10−5 s−1 at temperatures 300 K and 315 K (obviously the larger rate constant of each pair to be associated with the higher temperature). Make a two-point Arrhenius plot for each faculty member, evaluating Δ"H"‡ and Δ"S"‡. Examine the plot of Δ"H"‡ against Δ"S"‡ for evidence of an isokinetic relationship. The existence of any real compensation effect has been widely derided in recent years and attributed to the analysis of interdependent factors and chance. Because the physical roots remain to be fully understood, it has been called into question whether compensation is a truly physical phenomenon or a coincidence due to trivial mathematical connections between parameters. The compensation effect has been criticized in other respects, namely for being the result of random experimental and systematic errors producing the appearance of compensation. 
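The homework exercise quoted above is easy to simulate. The sketch below is not from the textbook or from this article: it replaces the telephone digits with random two-digit numbers, treats them as rate constants at 300 K and 315 K, and extracts Δ"H"‡ and Δ"S"‡ from a two-point Arrhenius/Eyring analysis. Although the "data" are pure noise, the resulting Δ"H"‡ versus Δ"S"‡ plot comes out almost perfectly linear with a slope near the experimental temperature, which is precisely the statistical artifact the critics describe.

```python
# Sketch of the Connors-style exercise (random two-digit "rate constants" stand in
# for the telephone digits): unrelated data still give a nearly linear
# Delta_H vs Delta_S plot, i.e. an apparent compensation effect.
import numpy as np

rng = np.random.default_rng(0)
R, kB, h = 8.314, 1.380649e-23, 6.62607015e-34
T1, T2 = 300.0, 315.0

k1 = rng.integers(10, 100, size=25) * 1e-5         # s^-1, "measured" at T1
k2 = k1 * rng.uniform(1.0, 10.0, size=25)          # larger constant assigned to T2

Ea  = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)  # two-point Arrhenius slope
dH  = Ea - R * T1                                  # approximate Eyring relation
dG1 = R * T1 * np.log(kB * T1 / (h * k1))          # Eyring free energy of activation at T1
dS  = (dH - dG1) / T1

slope, intercept = np.polyfit(dS, dH, 1)
r = np.corrcoef(dS, dH)[0, 1]
print(f"apparent isokinetic temperature = {slope:.0f} K, correlation r = {r:.3f}")
```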
The principal complaint lodged states that compensation is an artifact of data from a limited temperature range or from a limited range for the free energies. In response to the criticisms, investigators have stressed that compensatory phenomena are real, but appropriate and in-depth data analysis is always needed. The "F"-test has been used to this end: the mean deviation of the points from lines constrained to pass through a common isokinetic temperature is compared with the mean deviation of the points from the unconstrained lines. Appropriate statistical tests should be performed as well. W. Linert wrote in a 1983 paper: There are few topics in chemistry in which so many misunderstandings and controversies have arisen as in connection with the so-called isokinetic relationship (IKR) or compensation law. Up to date, a great many chemists appear to be inclined to dismiss the IKR as being accidental. The crucial problem is that the activation parameters are mutually dependent because of their determination from the experimental data. Therefore, it has been stressed repeatedly, the isokinetic plot (i.e., Δ"H"‡ against Δ"S"‡) is unfit in principle to substantiate a claim of an isokinetic relationship. At the same time, however, it is a fatal error to dismiss the IKR because of that fallacy. Common among all defenders is the agreement that stringent criteria for the assignment of true compensation effects must be adhered to. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\ln A_i = \\alpha + \\frac{E_{a,i}}{R\\beta}\n" }, { "math_id": 1, "text": "\n\\Delta H^\\ddagger_i = \\alpha + \\beta \\Delta S^\\ddagger_i\n" }, { "math_id": 2, "text": "\n\\Delta H_i = \\alpha + \\beta \\Delta S_i\n" }, { "math_id": 3, "text": "\\Delta G_i(\\beta) = \\alpha" }, { "math_id": 4, "text": "k_i(\\beta) = e^\\alpha" }, { "math_id": 5, "text": "\\ln A = \\ln A_0 + \\alpha \\Delta E_0" }, { "math_id": 6, "text": "\\sigma(T) = \\sigma_0 \\exp\\left(-\\frac{E_a}{k_{\\rm B}T}\\right)" }, { "math_id": 7, "text": "(\\Delta G'_0 - \\Delta G'_x) = Q(\\Delta G_0 - \\Delta G_x)" }, { "math_id": 8, "text": "\\delta \\Delta G = Q \\delta \\Delta G" }, { "math_id": 9, "text": "(\\Delta H' - T\\Delta S') = Q(\\Delta H - T \\Delta S)" }, { "math_id": 10, "text": "\\Delta H^\\ddagger = \\beta \\Delta S^\\ddagger + \\Delta H^\\ddagger_0" }, { "math_id": 11, "text": "\\Delta G^\\ddagger = \\Delta H^\\ddagger_0 - (T - \\beta)\\Delta S^\\ddagger" }, { "math_id": 12, "text": "\\delta \\Delta H^\\ddagger = \\beta \\delta \\Delta S^\\ddagger" }, { "math_id": 13, "text": "\\delta \\Delta G^\\ddagger = \\left(1 - \\frac{T}{\\beta}\\right) \\delta \\Delta S^\\ddagger" }, { "math_id": 14, "text": "k_{\\rm B} \\beta = \\tfrac{1}{\\alpha}." } ]
https://en.wikipedia.org/wiki?curid=14708063
1470808
AZE
AZE may refer to: the formula_0 notation for a nuclide, in which E is the chemical element symbol, A the mass number and Z the atomic number. See also. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This disambiguation page lists articles associated with the title AZE.
[ { "math_id": 0, "text": "^A_Z E" } ]
https://en.wikipedia.org/wiki?curid=1470808
14708522
Diketene
Organic compound with formula (CH2CO)2 &lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Diketene is an organic compound with the molecular formula C4H4O2, which is sometimes written as (CH2CO)2. It is formed by dimerization of ketene, H2C=C=O. Diketene is a member of the oxetane family. It is used as a reagent in organic chemistry. It is a colorless liquid. Production. Diketene is produced on a commercial scale by dimerization of ketene. Reactions. Heating or irradiation with UV light regenerates the ketene monomer: &lt;chem&gt;(C2H2O)2 &lt;=&gt; 2 CH2CO&lt;/chem&gt; Alkylated ketenes also dimerize with ease and form substituted diketenes. Diketene readily hydrolyzes in water, forming acetoacetic acid. Its half-life in water is approximately 45 min at 25 °C for 2 &lt; pH &lt; 7. Certain diketenes with two aliphatic chains, such as alkyl ketene dimers (AKDs), are used industrially to improve hydrophobicity in paper. At one time acetic anhydride was prepared by the reaction of ketene with acetic acid: &lt;chem&gt;H2C=C=O + CH3COOH -&gt; (CH3CO)2O&lt;/chem&gt; formula_0 Acetoacetylation. Diketene also reacts with alcohols and amines to give the corresponding acetoacetic acid derivatives. The process is sometimes called acetoacetylation. An example is the reaction with 2-aminoindane: Diketene is an important industrial intermediate used for the production of acetoacetate esters and amides as well as substituted 1-phenyl-3-methylpyrazolones. The latter are used in the manufacture of dyestuffs and pigments. A typical reaction is: &lt;chem&gt;ArNH2 + (CH2CO)2 -&gt; ArNHC(O)CH2C(O)CH3&lt;/chem&gt; These acetoacetamides are precursors to arylide yellow and diarylide pigments. Use. Diketenes with two alkyl chains are used in papermaking for the sizing of paper in order to improve its printability (by hydrophobization). Besides the rosin resins, which account for about 60% of world consumption, long-chain diketenes known as alkyl ketene dimers (AKDs) are, with a 16% share, the most important synthetic paper sizes; they are usually used in concentrations of about 0.15%, i.e. 1.5 kg of solid AKD per tonne of paper. AKD is prepared by chlorination of long-chain fatty acids (such as stearic acid, using chlorinating agents such as thionyl chloride) to give the corresponding acid chlorides, followed by elimination of HCl with amines (for example triethylamine) in toluene or other solvents: Furthermore, diketenes are used as intermediates in the manufacture of pharmaceuticals, insecticides and dyes. For example, pyrazolones are formed from substituted phenylhydrazines; they were used as analgesics but are now largely obsolete. With methylamine, diketene reacts to give "N","N"'-dimethylacetoacetamide, which is chlorinated with sulfuryl chloride and reacted with trimethyl phosphite to give the highly toxic insecticide monocrotophos (especially toxic to bees). Diketenes react with substituted aromatic amines to give acetoacetanilides, which are important precursors for mostly yellow, orange or red azo dyes and azo pigments. An example of the synthesis of arylides by the reaction of diketene with aromatic amines is: Aromatic diazonium coupling with arylides forms azo dyes, such as Pigment Yellow 74: The industrial synthesis of the sweetener acesulfame K is based on the reaction of diketene with sulfamic acid and cyclization with sulfur trioxide (SO3). Drugs made from diketene include: Safety.
Despite its high reactivity as an alkylating agent, and unlike the analogous β-lactones propiolactone and β-butyrolactone, diketene is inactive as a carcinogen, possibly due to the instability of its DNA adducts. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\qquad \\Delta H = \\text{−63 kJ/mol}" } ]
https://en.wikipedia.org/wiki?curid=14708522
1471037
Graphical timeline of the Big Bang
Logarithmic chronology of the event that began the Universe This timeline of the Big Bang shows a sequence of events as currently theorized by scientists. It is a logarithmic scale that shows formula_0 "second" instead of "second". For example, one microsecond corresponds to formula_1. To convert a value of −30 read from the scale back into seconds, calculate formula_2 second = one millisecond. On a logarithmic time scale a step lasts ten times longer than the previous step.
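As a quick check of the conversion rule described above, the following small Python sketch (not part of the original timeline) maps a time in seconds onto the formula_0 scale and back again.

```python
# Convert between a time t in seconds and its value on the 10*log10 chart scale.
import math

def to_scale(t_seconds: float) -> float:
    return 10.0 * math.log10(t_seconds)

def from_scale(value: float) -> float:
    return 10.0 ** (value / 10.0)

print(to_scale(1e-6))    # one microsecond  -> -60.0 on the scale
print(from_scale(-30))   # -30 on the scale -> 0.001 s, i.e. one millisecond
```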
[ { "math_id": 0, "text": "10 \\cdot \\log_{10}" }, { "math_id": 1, "text": "10 \\cdot \\log_{10} 0.000 001 = 10 \\cdot (-6) = -60" }, { "math_id": 2, "text": "10^{-\\frac{30}{10}} = 10^{-3} = 0.001" } ]
https://en.wikipedia.org/wiki?curid=1471037
1471192
Nome (mathematics)
Special mathematical function In mathematics, specifically the theory of elliptic functions, the nome is a special function that belongs to the non-elementary functions. This function is of great importance in the description of the elliptic functions, especially in the description of the modular identity of the Jacobi theta function, the Hermite elliptic transcendents and the Weber modular functions, that are used for solving equations of higher degrees. Definition. The nome function is given by formula_0 where formula_1 and formula_2 are the quarter periods, and formula_3 and formula_4 are the fundamental pair of periods, and formula_5 is the half-period ratio. The nome can be taken to be a function of any one of these quantities; conversely, any one of these quantities can be taken as functions of the nome. Each of them uniquely determines the others when formula_6. That is, when formula_6, the mappings between these various symbols are both 1-to-1 and onto, and so can be inverted: the quarter periods, the half-periods and the half-period ratio can be explicitly written as functions of the nome. For general formula_7 with formula_8, formula_9 is not a single-valued function of formula_10. Explicit expressions for the quarter periods, in terms of the nome, are given in the linked article. Notationally, the quarter periods formula_1 and formula_2 are usually used only in the context of the Jacobian elliptic functions, whereas the half-periods formula_3 and formula_4 are usually used only in the context of Weierstrass elliptic functions. Some authors, notably Apostol, use formula_3 and formula_4 to denote whole periods rather than half-periods. The nome is frequently used as a value with which elliptic functions and modular forms can be described; on the other hand, it can also be thought of as function, because the quarter periods are functions of the elliptic modulus formula_11: formula_12. The complementary nome formula_13 is given by formula_14 Sometimes the notation formula_15 is used for the "square" of the nome. The mentioned functions formula_1 and formula_16 are called complete elliptic integrals of the first kind. They are defined as follows: formula_17 formula_18 Applications. The nome solves the following equation: formula_19 This analogon is valid for the Pythagorean complementary modulus: formula_20 where formula_21 are the complete Jacobi theta functions and formula_22 is the complete elliptic integral of the first kind with modulus formula_11 shown in the formula above. For the complete theta functions these definitions introduced by Sir Edmund Taylor Whittaker and George Neville Watson are valid: formula_23 formula_24 formula_25 These three definition formulas are written down in the fourth edition of the book "A Course in Modern Analysis" written by Whittaker and Watson on the pages 469 and 470. The nome is commonly used as the starting point for the construction of Lambert series, the q-series and more generally the q-analogs. That is, the half-period ratio formula_9 is commonly used as a coordinate on the complex upper half-plane, typically endowed with the Poincaré metric to obtain the Poincaré half-plane model. The nome then serves as a coordinate on a punctured disk of unit radius; it is punctured because formula_26 is not part of the disk (or rather, formula_26 corresponds to formula_27). This endows the punctured disk with the Poincaré metric. 
The upper half-plane (and the Poincaré disk, and the punctured disk) can thus be tiled with the fundamental domain, which is the region of values of the half-period ratio formula_9 (or of formula_10, or of formula_1 and formula_2 etc.) that uniquely determine a tiling of the plane by parallelograms. The tiling is referred to as the modular symmetry given by the modular group. Some functions that are periodic on the upper half-plane are called to as modular functions; the nome, the half-periods, the quarter-periods or the half-period ratio all provide different parameterizations for these periodic functions. The prototypical modular function is Klein's j-invariant. It can be written as a function of either the half-period ratio τ or as a function of the nome formula_10. The series expansion in terms of the nome or the square of the nome (the "q"-expansion) is famously connected to the Fisher-Griess monster by means of monstrous moonshine. Euler's function arises as the prototype for "q"-series in general. The nome, as the formula_10 of "q"-series then arises in the theory of affine Lie algebras, essentially because (to put it poetically, but not factually) those algebras describe the symmetries and isometries of Riemann surfaces. Curve sketching. Every real value formula_28 of the interval formula_29 is assigned to a real number between inclusive zero and inclusive one in the nome function formula_30. The elliptic nome function is axial symmetric to the ordinate axis. Thus: formula_31. The functional curve of the nome passes through the origin of coordinates with the slope zero and curvature plus one eighth. For the real valued interval formula_32 the nome function formula_30 is strictly left-curved. Derivatives. The Legendre's relation is defined that way: formula_33 And as described above, the elliptic nome function formula_30 has this original definition: formula_34 Furthermore, these are the derivatives of the two complete elliptic integrals: formula_35 formula_36 Therefore, the derivative of the nome function has the following expression: formula_37 The second derivative can be expressed this way: formula_38 And that is the third derivative: formula_39 The complete elliptic integral of the second kind is defined as follows: formula_40 The following equation follows from these equations by eliminating the complete elliptic integral of the second kind: formula_41 Thus, the following third-order quartic differential equation is valid: formula_42 MacLaurin series and integer sequences. Kneser sequence. Given is the derivative of the Elliptic Nome mentioned above: formula_37 The outer factor with the K-integral in the denominator shown in this equation is the derivative of the elliptic period ratio. The elliptic period ratio is the quotient of the K-integral of the Pythagorean complementary modulus divided by the K-integral of the modulus itself. And the integer number sequence in MacLaurin series of that elliptic period ratio leads to the integer sequence of the series of the elliptic nome directly. The German mathematician Adolf Kneser researched on the integer sequence of the elliptic period ratio in his essay "Neue Untersuchung einer Reihe aus der Theorie der elliptischen Funktionen" and showed that the generating function of this sequence is an elliptic function. 
Also a further mathematician with the name Robert Fricke analyzed this integer sequence in his essay "Die elliptischen Funktionen und ihre Anwendungen" and described the accurate computing methods by using this mentioned sequence. The Kneser integer sequence Kn(n) can be constructed in this way: Executed examples: The Kneser sequence appears in the Taylor series of the period ratio (half period ratio): formula_43 formula_44 The derivative of this equation after formula_45 leads to this equation that shows the generating function of the Kneser number sequence: formula_46 formula_47 This result appears because of the Legendre's relation formula_33 in the numerator. Schellbach Schwarz sequence. The mathematician Karl Heinrich Schellbach discovered the integer number sequence that appears in the MacLaurin series of the fourth root of the quotient Elliptic Nome function divided by the square function. The construction of this sequence is detailed in his work "Die Lehre von den Elliptischen Integralen und den Thetafunktionen". The sequence was also constructed by the Silesian German mathematician Hermann Amandus Schwarz in "Formeln und Lehrsätze zum Gebrauche der elliptischen Funktionen" (pages 54–56, chapter "Berechnung der Grösse k"). This Schellbach Schwarz number sequence Sc(n) was also analyzed by the mathematicians Karl Theodor Wilhelm Weierstrass and Louis Melville Milne-Thomson in the 20th century. The mathematician Adolf Kneser determined a construction for this sequence based on the following pattern: formula_48 The Schellbach Schwarz sequence Sc(n) appears in the On-Line Encyclopedia of Integer Sequences under the number and the Kneser sequence Kn(n) appears under the number . The following table contains the Kneser numbers and the Schellbach Schwarz numbers: And this sequence creates the MacLaurin series of the elliptic nome in exactly this way: formula_49 formula_50 In the following, it will be shown as an example how the Schellbach Schwarz numbers are built up successively. For this, the examples with the numbers Sc(4) = 150, Sc(5) = 1707 and Sc(6) = 20910 are used: formula_51 formula_52 formula_53 formula_54 formula_55 formula_56 Kotěšovec sequence. The MacLaurin series of the nome function formula_30 has even exponents and positive coefficients at all positions: formula_57 And the sum with the same absolute values of the coefficients but with alternating signs generates this function: formula_58 The radius of convergence of this Maclaurin series is 1. Here formula_59 (OEIS A005797) is a sequence of exclusively natural numbers formula_60 for all natural numbers formula_61 and this integer number sequence is not elementary. This sequence of numbers formula_59 was researched by the Czech mathematician and fairy chess composer Václav Kotěšovec, born in 1956. Two ways of constructing this integer sequence shall be shown in the next section. Construction method with Kneser numbers. The Kotěšovec numbers are generated in the same way as the Schellbach Schwarz numbers are constructed: The only difference consists in the fact that this time the factor before the sum in this corresponding analogous formula is not formula_62 anymore, but formula_63 instead of that: formula_64 Following table contains the Schellbach Schwarz numbers and the Kneser numbers and the Apéry numbers: In the following, it will be shown as an example how the Schellbach Schwarz numbers are built up successively. 
For this, the examples with the numbers Kt(4) = 992, Kt(5) = 12514 and Kt(6) = 164688 are used: formula_65 formula_66 formula_67 formula_68 formula_69 formula_70 So the MacLaurin series of the direct Elliptic Nome can be generated: formula_71 formula_72 Construction method with Apéry numbers. By adding a further integer number sequence formula_73 that denotes a specially modified Apéry sequence (OEIS A036917), the sequence of the Kotěšovec numbers formula_59 can be generated. The starting value of the sequence formula_59 is the value formula_74 and the following values of this sequence are generated with those two formulas that are valid for all numbers formula_61: formula_75 formula_76 This formula creates the Kotěšovec sequence too, but it only creates the sequence numbers of even indices: formula_77 The Apéry sequence formula_73 was researched especially by the mathematicians Sun Zhi-Hong and Reinhard Zumkeller. And that sequence generates the square of the complete elliptic integral of the first kind: formula_78 The first numerical values of the central binomial coefficients and the two numerical sequences described are listed in the following table: Václav Kotěšovec wrote down the number sequence formula_59 on the Online Encyclopedia of Integer Sequences up to the seven hundredth sequence number. Here one example of the Kotěšovec sequence is computed: Function values. The two following lists contain many function values of the nome function: The first list shows pairs of values with mutually Pythagorean complementary modules: formula_79 formula_80 formula_81 formula_82 formula_83 formula_84 formula_85 formula_86 formula_87 formula_88 formula_89 formula_90 formula_91 The second list shows pairs of values with mutually tangentially complementary modules: formula_92 formula_93 formula_94 formula_95 formula_96 formula_97 formula_98 formula_99 formula_100 formula_101 formula_102 formula_103 formula_104 Related quartets of values are shown below: Sums and products. Sum series. The elliptic nome was explored by Richard Dedekind and this function is the fundament in the theory of eta functions and their related functions. The elliptic nome is the initial point of the construction of the Lambert series. In the theta function by Carl Gustav Jacobi the nome as an abscissa is assigned to algebraic combinations of the Arithmetic geometric mean and also the complete elliptic integral of the first kind. Many infinite series can be described easily in terms of the elliptic nome: formula_105 formula_106 formula_107 formula_108 formula_109 formula_110 formula_111 The quadrangle represents the square number of index "n", because in this way of notation the two in the exponent of the exponent would appear to small. So this formula is valid: formula_112 The letter formula_113 describes the complete elliptic integral of the second kind, which is the quarter periphery of an ellipse in relation to the bigger half axis of the ellipse with the numerical eccentricity formula_114 as abscissa value. Product series. The two most important theta functions can be defined by following product series: formula_115 formula_116 Furthermore, these two Pochhammer products have those two relations: formula_117 formula_118 The Pochhammer products have an important role in the pentagonal number theorem and its derivation. Relation to other functions. Complete elliptic integrals. 
The nome function can be used for the definition of the complete elliptic integrals of the first and second kind: formula_119 formula_120 In this case the dash in the exponent position stands for the derivative of the so-called theta zero value function: formula_121 Definitions of Jacobi functions. The elliptic functions Zeta Amplitudinis and Delta Amplitudinis can easily be defined with the elliptic nome function: formula_122 formula_123 Using the fourth root of the quotient of the nome divided by the square function, as mentioned above, the following product series definitions can be set up for the Amplitude Sine, the Counter Amplitude Sine and the Amplitude Cosine: formula_124 formula_125 formula_126 These five formulas are valid for all values of k from −1 to +1. The following successive definitions of the other Jacobi functions are then possible: formula_127 formula_128 formula_129 formula_130 The product definition of the amplitude sine was written down in the essay "π and the AGM" by the Borwein brothers on page 60, and this formula is based on the theta function definition of Whittaker and Watson. Identities of Jacobi Amplitude functions. In combination with the theta functions, the nome gives the values of many Jacobi amplitude functions: formula_131 formula_132 formula_133 formula_134 formula_135 formula_136 formula_137 The abbreviation sc describes the quotient of the amplitude sine divided by the amplitude cosine. Theorems and Identities. Derivation of the nome square theorem. The law for the square of the elliptic nome involves forming the Landen daughter modulus: The Landen daughter modulus is also the tangential counterpart of the Pythagorean counterpart of the mother modulus. Examples for the nome square theorem. The Landen daughter modulus is identical to the tangential opposite of the Pythagorean opposite of the mother modulus. Three examples shall be shown in the following: Trigonometrically displayed examples: formula_138 formula_139 formula_140 formula_141 Hyperbolically displayed examples: formula_142 formula_143 formula_144 formula_145 formula_146 formula_147 formula_148 formula_149 Derivation of the parametrized nome cube theorem. Not only the law for the square but also the law for the cube of the elliptic nome leads to an elementary modulus transformation. This parameterized formula for the cube of the elliptic nome is valid for all values −1 &lt; u &lt; 1. It is not stated directly in terms of the mother modulus formula_150, because that formulation would be considerably longer; in the form shown with the parameter formula_151, a greatly simplified formula emerges. Derivation of the direct nome cube theorem.
On the basis of the proof just completed, a direct formula for the nome cube theorem in terms of the modulus formula_150 and in combination with the Jacobi amplitude sine shall now be derived: The works "Analytic Solutions to Algebraic Equations" by Johansson and "Evaluation of Fifth Degree Elliptic Singular Moduli" by Bagis showed that the Jacobi amplitude sine of the third part of the complete integral of the first kind K solves the following quartic equation: formula_152 formula_153 Now the parametrization mentioned above is inserted into this equation: formula_154 formula_155 This is the real solution of the pattern formula_156 of that quartic equation: formula_157 Therefore, the following formula is valid: formula_158 The parametrized nome cube formula has the form mentioned above: formula_159 The same formula can be written in this alternative way: formula_160 So this result appears as the direct nome cube theorem: formula_161 Examples for the nome cube theorem. Alternatively, this formula can be set up: The formula just presented is used for simplified computations, because the value formula_162 can be determined from the given elliptic modulus in an easy way. The value formula_162 can be obtained by taking the tangent duplication of the modulus and then taking the cube root of that, in order to get the parameterization value formula_162 directly. Two examples are treated below: In the first example, the value formula_163 is inserted: formula_164 formula_165 In the second example, the value formula_166 is inserted: formula_167 formula_168 formula_169 The constant formula_170 represents the Golden ratio number formula_171 exactly. Indeed, the formula for the cube of the nome involves a modulus transformation that really contains elementary cube roots, because it involves the solution of a regular quartic equation. However, the laws for the fifth power and the seventh power of the elliptic nome do not lead to an elementary nome transformation, but to a non-elementary transformation. This was proven by the Abel–Ruffini theorem and by Galois theory too. Exponentiation theorems with Jacobi amplitude functions. Every power of a nome of a positive algebraic number as base and a positive rational number as exponent is equal to a nome value of a positive algebraic number: formula_172 These are the most important examples of the general exponentiation theorem: formula_173 formula_174 formula_175 formula_176 formula_177 formula_178 formula_179 formula_180 The abbreviation formula_181 stands for the Jacobi elliptic function amplitude sine. For algebraic formula_28 values in the real interval formula_29, the shown amplitude sine expressions are always algebraic. These are the general exponentiation theorems: formula_182 formula_183 That theorem is valid for all natural numbers "n". Important computation clues: The following Jacobi amplitude sine expressions solve the subsequent equations: Examples for the exponentiation theorems. For these nome power theorems, important examples shall be formulated: Given is the fifth power theorem: formula_176 Lemniscatic example for the fifth power theorem: A further example for the fifth power theorem: Reflection theorems.
If two positive numbers formula_184 and formula_185 are Pythagorean opposites of each other, so that the equation formula_186 holds, then this relation is valid: formula_187 If two positive numbers formula_188 and formula_189 are tangential opposites of each other, so that the equation formula_190 holds, then that relation is valid: formula_191 Therefore, these representations are valid for all real numbers "x": Pythagorean opposites: formula_192 formula_193 Tangential opposites: formula_194 formula_195 Derivations of the nome values. Direct results of mentioned theorems. The following examples are used to determine the nome values: Example 1: Given is the formula of the Pythagorean counterparts: For x = 0, this formula gives this equation: formula_196 formula_197 Example 2: Given is the formula of the tangential counterparts: For x = 0, the formula for the tangential counterparts gives the following equation: formula_198 formula_199 Combinations of two theorems each. Example 1: Equianharmonic case The formula of the Pythagorean counterparts is used again: For formula_200, this equation results from this formula: formula_201 In a previous section this theorem was stated: From this theorem for cubing, the following equation results for formula_202: formula_203 The solution to the system of equations with two unknowns then reads as follows: formula_204 formula_205 Example 2: A further case with the cube formula The formula of the tangential counterparts is used again: For formula_206 this formula results in the following equation: formula_207 The theorem for cubing is also used here: From the previously mentioned theorem for cubing, the following equation results for formula_208: formula_209 The solution to the system of equations with two unknowns then reads as follows: formula_210 formula_211 Investigations about incomplete integrals. With the incomplete elliptic integrals of the first kind, the values of the elliptic nome function can be derived directly. These direct derivations are carried out explicitly in the following examples: First example: Second example: Third example: First derivative of the theta function. Derivation of the derivative. The first derivative of the principal theta function among the Jacobi theta functions can be derived in the following way using the chain rule and the derivative formula of the elliptic nome: formula_212 formula_213 For the derivation step just mentioned, this identity is the foundation: formula_214 Therefore, this equation results: formula_215 The complete elliptic integrals of the second kind satisfy this identity: formula_216 Along with this modular identity, the following transformation of the formula can be made: formula_217 Furthermore, this identity is valid: formula_218 By using the theta function expressions ϑ00(x) and ϑ01(x), the following representation is possible: formula_219 This is the final result: formula_220 Related first derivatives. In a similar way, the following other first derivatives of theta functions and their combinations can also be derived: formula_221 formula_222 formula_223 formula_224 formula_225 Important definition: formula_226 formula_227 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "q\n=\\mathrm{e}^{-{\\pi K'/K}}\n=\\mathrm{e}^{{\\rm{i}} \\pi\\omega_2/\\omega_1}\n=\\mathrm{e}^{{\\rm{i}} \\pi \\tau}\n\\, \n" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "iK'" }, { "math_id": 3, "text": "\\omega_1" }, { "math_id": 4, "text": "\\omega_2" }, { "math_id": 5, "text": "\\tau=\\frac{iK'}{K}=\\frac{\\omega_2}{\\omega_1}" }, { "math_id": 6, "text": "0<q<1" }, { "math_id": 7, "text": "q\\in\\mathbb{C}" }, { "math_id": 8, "text": "0<|q|<1" }, { "math_id": 9, "text": "\\tau" }, { "math_id": 10, "text": "q" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "q(k) = \\mathrm{e}^{-\\pi K'(k)/K(k)}" }, { "math_id": 13, "text": "q_1" }, { "math_id": 14, "text": "q_1(k) = \\mathrm{e}^{-\\pi K(k)/K'(k)}. \\, " }, { "math_id": 15, "text": "q=\\mathrm{e}^{{2\\rm{i}} \\pi \\tau}" }, { "math_id": 16, "text": "K'" }, { "math_id": 17, "text": "K(x) = \\int_0^{\\pi/2} \\frac{1}{\\sqrt{1 - x^2\\sin(\\varphi)^2}} \\,\\mathrm{d}\\varphi = \\int_0^1 \\frac{2}{\\sqrt{(y^2+1)^2 - 4x^2y^2}} \\mathrm{d}y " }, { "math_id": 18, "text": "K'(x) = K(\\sqrt{1 - x^2}) = \\int_0^{\\pi/2} \\frac{1}{\\sqrt{1 - (1 - x^2)\\sin(\\varphi)^2}} \\,\\mathrm{d}\\varphi " }, { "math_id": 19, "text": "|k| = \\frac{\\vartheta_{10}^2[0,q(k)]}{\\vartheta_{00}^2[0,q(k)]}\\rightarrow q(k) = \\mathrm{e}^{-\\pi K'(k)/K(k)}" }, { "math_id": 20, "text": "k' = \\sqrt{1 - k^2} = \\frac{\\vartheta_{01}^2[0,q(k)]}{\\vartheta_{00}^2[0,q(k)]}\\rightarrow q(k) = \\mathrm{e}^{-\\pi K'(k)/K(k)}" }, { "math_id": 21, "text": "\\vartheta_{10},\\theta_{00}" }, { "math_id": 22, "text": "K(k)" }, { "math_id": 23, "text": "\\vartheta_{00}(v;w) = \\prod_{n = 1}^\\infty (1-w^{2n})[1+2\\cos(2v)w^{2n-1}+w^{4n-2}]" }, { "math_id": 24, "text": "\\vartheta_{01}(v;w) = \\prod_{n = 1}^\\infty (1-w^{2n})[1-2\\cos(2v)w^{2n-1}+w^{4n-2}]" }, { "math_id": 25, "text": "\\vartheta_{10}(v;w) = 2 w^{1/4}\\cos(v)\\prod_{n = 1}^\\infty (1-w^{2n})[1+2\\cos(2v)w^{2n}+w^{4n}]" }, { "math_id": 26, "text": "q=0" }, { "math_id": 27, "text": "\\tau \\to \\infty" }, { "math_id": 28, "text": "x" }, { "math_id": 29, "text": "[-1,1]" }, { "math_id": 30, "text": "q(x)" }, { "math_id": 31, "text": "q(x) = q(-x)" }, { "math_id": 32, "text": "(-1,1)" }, { "math_id": 33, "text": " K\\,E' + E\\,K' - K\\,K' = \\tfrac{1}{2}\\pi " }, { "math_id": 34, "text": " q(x) = \\exp\\left[-\\pi \\,\\frac{K(\\sqrt{1 - x^2})}{K(x)}\\right] " }, { "math_id": 35, "text": " \\frac{\\mathrm{d}}{\\mathrm{d}x} K(x) = \\frac{1}{x(1 - x^2)}\\bigl[E(x) - (1 - x^2)K(x)\\bigr] " }, { "math_id": 36, "text": " \\frac{\\mathrm{d}}{\\mathrm{d}x} E(x) = -\\frac{1}{x}\\bigl[K(x) - E(x)\\bigr] " }, { "math_id": 37, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,q(x) = \\frac{\\pi^2}{2x(1-x^2)K(x)^2} \\,q(x) " }, { "math_id": 38, "text": "\\frac{\\mathrm{d}^2}{\\mathrm{d}x^2} \\,q(x) = \\frac{\\pi^4 + 2\\pi^2 (1+x^2)K(x)^2 - 4\\pi^2 K(x)E(x)}{4x^2(1-x^2)^2 K(x)^4} \\,q(x) " }, { "math_id": 39, "text": "\\frac{\\mathrm{d}^3}{\\mathrm{d}x^3} \\,q(x) = \\frac{\\pi^6 + 6\\pi^4 (1+x^2)K(x)^2 - 12\\pi^4 K(x)E(x) + 8\\pi^2 (1+x^2)^2 K(x)^4 - 24\\pi^2 (1+x^2)K(x)^3 E(x) + 24\\pi^2 K(x)^2 E(x)^2}{8x^3(1-x^2)^3 K(x)^6} \\,q(x) " }, { "math_id": 40, "text": "E(x) = \\int_0^{\\pi/2} \\sqrt{1-x^2\\sin(\\varphi)^2} \\,\\mathrm{d}\\varphi = 2\\int_0^1 \\frac{\\sqrt{(y^2+1)^2 - 4x^2y^2}}{(y^2+1)^2} \\,\\mathrm{d}y " }, { "math_id": 41, "text": "3\\biggl[\\frac{\\mathrm{d}^2}{\\mathrm{d}x^2} q(x)\\biggr]^2 - 2\\biggl[\\frac{\\mathrm{d}}{\\mathrm{d}x} 
q(x)\\biggr]\\biggl[\\frac{\\mathrm{d}^3}{\\mathrm{d}x^3} q(x)\\biggr] = \\frac{\\pi^8 - 4\\pi^4 (1+x^2)^2 K(x)^4}{16x^4(1-x^2)^4 K(x)^8} q(x)^2 " }, { "math_id": 42, "text": "x^2 (1-x^2)^2 [2q(x)^2 q'(x)q'''(x) - 3q(x)^2 q''(x)^2 + q'(x)^4] = (1+x^2)^2 q(x)^2 q'(x)^2 " }, { "math_id": 43, "text": "\\frac{1}{4}\\ln\\bigl(\\frac{16}{x^2}\\bigr) - \\frac{\\pi \\,K'(x)}{4 \\,K(x)} = \\sum_{n = 1}^{\\infty} \\frac{\\text{Kn}(n)}{2^{4n - 1}n}\\,x^{2n} " }, { "math_id": 44, "text": "{\\color{limegreen}\\frac{1}{4}\\ln\\bigl(\\frac{16}{x^2}\\bigr) - \\frac{\\pi \\,K'(x)}{4 \\,K(x)} = \\frac{{\\color{cornflowerblue}1}}{8}x^2 + \\frac{{\\color{cornflowerblue}13}}{256}x^4 + \\frac{{\\color{cornflowerblue}184}}{6144}x^6 + \\frac{{\\color{cornflowerblue}2701}}{131072}x^8 + \\frac{{\\color{cornflowerblue}40456}}{2621440}x^{10} + \\ldots} " }, { "math_id": 45, "text": " x " }, { "math_id": 46, "text": "\\frac{\\pi^2}{8x(1 - x^2)K(x)^2} - \\frac{1}{2x} = \\sum_{n = 1}^{\\infty} \\frac{\\text{Kn}(n)}{2^{4n - 2}}x^{2n - 1} " }, { "math_id": 47, "text": "{\\color{limegreen}\\frac{\\pi^2}{8x(1 - x^2)K(x)^2} - \\frac{1}{2x} = \\frac{{\\color{cornflowerblue}1}}{4}x + \\frac{{\\color{cornflowerblue}13}}{64}x^3 + \\frac{{\\color{cornflowerblue}184}}{1024}x^5 + \\frac{{\\color{cornflowerblue}2701}}{16384}x^7 + \\frac{{\\color{cornflowerblue}40456}}{262144}x^9 + \\ldots} " }, { "math_id": 48, "text": "\\text{Sc}(n+1) = \\frac{2}{n}\\sum _{m = 1}^{n} \\text{Sc}(m)\\,\\text{Kn}(n + 1 - m) " }, { "math_id": 49, "text": "q(x) = \\sum_{n = 1}^{\\infty} \\frac{\\text{Sc}(n)}{2^{4n - 3}} \\biggl(\\frac{1 - \\sqrt[4]{1 - x^2}}{1 + \\sqrt[4]{1 - x^2}}\\biggr)^{4n - 3} = x^2\\biggl\\{\\frac{1}{2} + \\biggl[\\sum_{n = 1}^{\\infty} \\frac{\\text{Sc}(n + 1)}{2^{4n + 1}} x^{2n}\\biggr]\\biggr\\}^4" }, { "math_id": 50, "text": "q(x) = x^2\\bigl({\\color{limegreen}\\frac{{\\color{navy}1}}{2} + \\frac{{\\color{navy}2}}{32}x^2 + \\frac{{\\color{navy}15}}{512}x^4 + \\frac{{\\color{navy}150}}{8192}x^6 + \\frac{{\\color{navy}1707}}{131072}x^8 + \\ldots}\\bigr)^4" }, { "math_id": 51, "text": "\\mathrm{Sc}(4) = \\frac{2}{3}\\sum _{m = 1}^{3} \\mathrm{Sc}(m) \\,\\mathrm{Kn}(4 - m) = \\frac{2}{3} \\bigl[{\\color{navy}\\mathrm{Sc}(1)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(3)} + {\\color{navy}\\mathrm{Sc}(2)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(2)} + {\\color{navy}\\mathrm{Sc}(3)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(1)} \\bigr] " }, { "math_id": 52, "text": "{\\color{navy}\\mathrm{Sc}(4)} = \\frac{2}{3} \\bigl({\\color{navy}1} \\times {\\color{cornflowerblue}184} + {\\color{navy}2} \\times {\\color{cornflowerblue}13} + {\\color{navy}15} \\times {\\color{cornflowerblue}1} \\bigr) = {\\color{navy}150}" }, { "math_id": 53, "text": "\\mathrm{Sc}(5) = \\frac{2}{4}\\sum _{m = 1}^{4} \\mathrm{Sc}(m) \\,\\mathrm{Kn}(5 - m) = \\frac{2}{4} \\bigl[{\\color{navy}\\mathrm{Sc}(1)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(4)} + {\\color{navy}\\mathrm{Sc}(2)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(3)} + {\\color{navy}\\mathrm{Sc}(3)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(2)} + {\\color{navy}\\mathrm{Sc}(4)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(1)} \\bigr] " }, { "math_id": 54, "text": "{\\color{navy}\\mathrm{Sc}(5)} = \\frac{2}{4} \\bigl({\\color{navy}1} \\times {\\color{cornflowerblue}2701} + {\\color{navy}2} \\times {\\color{cornflowerblue}184} + {\\color{navy}15} \\times {\\color{cornflowerblue}13} + {\\color{navy}150} \\times {\\color{cornflowerblue}1} \\bigr) = {\\color{navy}1707}" }, { "math_id": 55, "text": "\\mathrm{Sc}(6) = 
\\frac{2}{5}\\sum _{m = 1}^{5} \\mathrm{Sc}(m) \\,\\mathrm{Kn}(6 - m) = \\frac{2}{5} \\bigl[{\\color{navy}\\mathrm{Sc}(1)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(5)} + {\\color{navy}\\mathrm{Sc}(2)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(4)} + {\\color{navy}\\mathrm{Sc}(3)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(3)} + {\\color{navy}\\mathrm{Sc}(4)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(2)} + {\\color{navy}\\mathrm{Sc}(5)}\\, {\\color{cornflowerblue}\\mathrm{Kn}(1)} \\bigr] " }, { "math_id": 56, "text": "{\\color{navy}\\mathrm{Sc}(6)} = \\frac{2}{5} \\bigl({\\color{navy}1} \\times {\\color{cornflowerblue}40456} + {\\color{navy}2} \\times {\\color{cornflowerblue}2701} + {\\color{navy}15} \\times {\\color{cornflowerblue}184} + {\\color{navy}150} \\times {\\color{cornflowerblue}13} + {\\color{navy}1707} \\times {\\color{cornflowerblue}1} \\bigr) = {\\color{navy}20910}" }, { "math_id": 57, "text": " q(x) = \\sum_{n = 1}^{\\infty} \\frac{\\operatorname{Kt}(n)}{16^n}\\,x^{2n} " }, { "math_id": 58, "text": "q\\bigl[x(x^2+1)^{-1/2}\\bigr] = \\sum_{n = 1}^{\\infty} \\frac{(-1)^{n+1}\\operatorname{Kt}(n)}{16^n}\\,x^{2n} " }, { "math_id": 59, "text": "\\operatorname{Kt}(n)" }, { "math_id": 60, "text": "\\operatorname{Kt}(n) \\isin \\mathbb{N}" }, { "math_id": 61, "text": "n \\isin \\mathbb{N}" }, { "math_id": 62, "text": " \\frac{2}{n} " }, { "math_id": 63, "text": " \\frac{8}{n} " }, { "math_id": 64, "text": "\\text{Kt}(n+1) = \\frac{8}{n}\\sum _{m = 1}^{n} \\text{Kt}(m)\\,\\text{Kn}(n + 1 - m) " }, { "math_id": 65, "text": "\\mathrm{Kt}(4) = \\frac{8}{3}\\sum _{m = 1}^{3} \\mathrm{Kt}(m) \\,\\mathrm{Kn}(4 - m) = \\frac{8}{3} \\bigl[{\\color{ForestGreen}\\mathrm{Kt}(1)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(3)} + {\\color{ForestGreen}\\mathrm{Kt}(2)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(2)} + {\\color{ForestGreen}\\mathrm{Kt}(3)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(1)} \\bigr] " }, { "math_id": 66, "text": "{\\color{ForestGreen}\\mathrm{Kt}(4)} = \\frac{8}{3} \\bigl({\\color{ForestGreen}1} \\times {\\color{cornflowerblue}184} + {\\color{ForestGreen}8} \\times {\\color{cornflowerblue}13} + {\\color{ForestGreen}84} \\times {\\color{cornflowerblue}1} \\bigr) = {\\color{ForestGreen}992}" }, { "math_id": 67, "text": "\\mathrm{Kt}(5) = \\frac{8}{4}\\sum _{m = 1}^{4} \\mathrm{Kt}(m) \\,\\mathrm{Kn}(5 - m) = \\frac{8}{4} \\bigl[{\\color{ForestGreen}\\mathrm{Kt}(1)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(4)} + {\\color{ForestGreen}\\mathrm{Kt}(2)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(3)} + {\\color{ForestGreen}\\mathrm{Kt}(3)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(2)} + {\\color{ForestGreen}\\mathrm{Kt}(4)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(1)} \\bigr] " }, { "math_id": 68, "text": "{\\color{ForestGreen}\\mathrm{Kt}(5)} = \\frac{8}{4} \\bigl({\\color{ForestGreen}1} \\times {\\color{cornflowerblue}2701} + {\\color{ForestGreen}8} \\times {\\color{cornflowerblue}184} + {\\color{ForestGreen}84} \\times {\\color{cornflowerblue}13} + {\\color{ForestGreen}992} \\times {\\color{cornflowerblue}1} \\bigr) = {\\color{ForestGreen}12514}" }, { "math_id": 69, "text": "\\mathrm{Kt}(6) = \\frac{8}{5}\\sum _{m = 1}^{5} \\mathrm{Kt}(m) \\,\\mathrm{Kn}(6 - m) = \\frac{8}{5} \\bigl[{\\color{ForestGreen}\\mathrm{Kt}(1)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(5)} + {\\color{ForestGreen}\\mathrm{Kt}(2)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(4)} + {\\color{ForestGreen}\\mathrm{Kt}(3)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(3)} + {\\color{ForestGreen}\\mathrm{Kt}(4)}\\,{\\color{cornflowerblue}\\mathrm{Kn}(2)} + 
{\\color{ForestGreen}\\mathrm{Kt}(5)}\\, {\\color{cornflowerblue}\\mathrm{Kn}(1)} \\bigr] " }, { "math_id": 70, "text": "{\\color{ForestGreen}\\mathrm{Kt}(6)} = \\frac{8}{5} \\bigl({\\color{ForestGreen}1} \\times {\\color{cornflowerblue}40456} + {\\color{ForestGreen}8} \\times {\\color{cornflowerblue}2701} + {\\color{ForestGreen}84} \\times {\\color{cornflowerblue}184} + {\\color{ForestGreen}992} \\times {\\color{cornflowerblue}13} + {\\color{ForestGreen}12514} \\times {\\color{cornflowerblue}1} \\bigr) = {\\color{ForestGreen}164688}" }, { "math_id": 71, "text": "q(x) = \\sum_{n = 1}^{\\infty} \\frac{\\text{Kt}(n)}{16^{n}} \\,x^{2n}" }, { "math_id": 72, "text": "q(x) = {\\color{limegreen}\\frac{{\\color{ForestGreen}1}}{16}x^2 + \\frac{{\\color{ForestGreen}8}}{256}x^4 + \\frac{{\\color{ForestGreen}84}}{4096}x^6 + \\frac{{\\color{ForestGreen}992}}{65536}x^8 + \\frac{{\\color{ForestGreen}12514}}{1048576}x^{10} + \\ldots}" }, { "math_id": 73, "text": "\\operatorname{Ap}(n)" }, { "math_id": 74, "text": "\\operatorname{Kt}(1)=1" }, { "math_id": 75, "text": " \\operatorname{Kt}(n+1) = \\frac{1}{n} \\sum_{m = 1}^n m\\operatorname{Kt}(m)[16\\operatorname{Ap}(n+1-m) - \\operatorname{Ap}(n+2-m)] " }, { "math_id": 76, "text": " \\operatorname{Ap}(n) = \\sum_{a = 0}^{n-1} \\binom{2a}{a}^2 \\binom{2n-2-2a}{n-1-a}^2 " }, { "math_id": 77, "text": " \\operatorname{Kt}(2n) = \\frac{1}{2} \\sum_{m = 1}^{2n-1} (-1)^{2n - m + 1}16^{2n - m}\\binom{2n-1}{m - 1} \\operatorname{Kt}(m) " }, { "math_id": 78, "text": " 4\\pi^{-2}K(x)^2 = 1 + \\sum_{n = 1}^\\infty \\frac{\\operatorname{Ap}(n + 1)x^{2n}}{16^n} " }, { "math_id": 79, "text": "q(\\tfrac{1}{2}\\sqrt{2}) = \\exp(-\\pi)" }, { "math_id": 80, "text": "q[\\tfrac{1}{4}(\\sqrt{6} - \\sqrt{2})] = \\exp(-\\sqrt{3}\\,\\pi)" }, { "math_id": 81, "text": "q[\\tfrac{1}{4}(\\sqrt{6} + \\sqrt{2})] = \\exp(-\\tfrac{1}{3}\\sqrt{3}\\,\\pi)" }, { "math_id": 82, "text": "q\\bigl\\{\\sin\\bigl[\\tfrac{1}{2}\\arcsin(\\sqrt{5} - 2)\\bigr]\\bigr\\} = \\exp(-\\sqrt{5}\\,\\pi)" }, { "math_id": 83, "text": "q\\bigl\\{\\cos\\bigl[\\tfrac{1}{2}\\arcsin(\\sqrt{5} - 2)\\bigr]\\bigr\\} = \\exp(-\\tfrac{1}{5}\\sqrt{5}\\,\\pi)" }, { "math_id": 84, "text": "q[\\tfrac{1}{8}(3\\sqrt{2} - \\sqrt{14})] = \\exp(-\\sqrt{7}\\,\\pi)" }, { "math_id": 85, "text": "q[\\tfrac{1}{8}(3\\sqrt{2} + \\sqrt{14})] = \\exp(-\\tfrac{1}{7}\\sqrt{7}\\,\\pi)" }, { "math_id": 86, "text": "q[\\tfrac{1}{2}(\\sqrt{3} - 1)(\\sqrt{2} - \\sqrt[4]{3})] = \\exp(-3\\pi)" }, { "math_id": 87, "text": "q[\\tfrac{1}{2}(\\sqrt{3} - 1)(\\sqrt{2} + \\sqrt[4]{3})] = \\exp(-\\tfrac{1}{3}\\pi)" }, { "math_id": 88, "text": "q\\bigl[\\tfrac{1}{16}\\bigl(\\sqrt{22} + 3\\sqrt{2}\\bigr)\\bigl(\\tfrac{1}{3}\\sqrt[3]{6\\sqrt{3} + 2\\sqrt{11}} - \\tfrac{1}{3}\\sqrt[3]{6\\sqrt{3} - 2\\sqrt{11}} + \\tfrac{1}{3}\\sqrt{11} - 1\\bigr)^4\\bigr] = \\exp(-\\sqrt{11}\\,\\pi)" }, { "math_id": 89, "text": "q\\bigl[\\tfrac{1}{16}\\bigl(\\sqrt{22} - 3\\sqrt{2}\\bigr)\\bigl(\\tfrac{1}{3}\\sqrt[3]{6\\sqrt{3} + 2\\sqrt{11}} - \\tfrac{1}{3}\\sqrt[3]{6\\sqrt{3} - 2\\sqrt{11}} + \\tfrac{1}{3}\\sqrt{11} + 1\\bigr)^4\\bigr] = \\exp(-\\tfrac{1}{11}\\sqrt{11}\\,\\pi)" }, { "math_id": 90, "text": "q\\bigl\\{\\sin\\bigl[\\tfrac{1}{2}\\arcsin(5\\sqrt{13} - 18)\\bigr]\\bigr\\} = \\exp(-\\sqrt{13}\\,\\pi)" }, { "math_id": 91, "text": "q\\bigl\\{\\cos\\bigl[\\tfrac{1}{2}\\arcsin(5\\sqrt{13} - 18)\\bigr]\\bigr\\} = \\exp(-\\tfrac{1}{13}\\sqrt{13}\\,\\pi)" }, { "math_id": 92, "text": "q(\\sqrt{2} - 1) = \\exp(-\\sqrt{2}\\,\\pi)" }, { "math_id": 93, "text": "q[(2 - 
\\sqrt{3})(\\sqrt{3} - \\sqrt{2})] = \\exp(-\\sqrt{6}\\,\\pi)" }, { "math_id": 94, "text": "q[(2 - \\sqrt{3})(\\sqrt{3} + \\sqrt{2})] = \\exp(-\\tfrac{1}{3}\\sqrt{6}\\,\\pi)" }, { "math_id": 95, "text": "q[(\\sqrt{10} - 3)(\\sqrt{2} - 1)^2] = \\exp(-\\sqrt{10}\\,\\pi)" }, { "math_id": 96, "text": "q[(\\sqrt{10} - 3)(\\sqrt{2} + 1)^2] = \\exp(-\\tfrac{1}{5}\\sqrt{10}\\,\\pi)" }, { "math_id": 97, "text": "q\\bigl[\\tfrac{1}{16}\\sqrt{2\\sqrt{2} - \\sqrt{7}}\\,(3\\sqrt{2} - \\sqrt{14})(\\sqrt{2\\sqrt{2} + 1} - 1)^4\\bigr] = \\exp(-\\sqrt{14}\\,\\pi)" }, { "math_id": 98, "text": "q\\bigl[\\tfrac{1}{16}\\sqrt{2\\sqrt{2} + \\sqrt{7}}\\,(3\\sqrt{2} + \\sqrt{14})(\\sqrt{2\\sqrt{2} + 1} - 1)^4\\bigr] = \\exp(-\\tfrac{1}{7}\\sqrt{14}\\,\\pi)" }, { "math_id": 99, "text": "q[(2 - \\sqrt{3})^2 (\\sqrt{2} - 1)^3] = \\exp(-3\\sqrt{2}\\,\\pi)" }, { "math_id": 100, "text": "q[(2 + \\sqrt{3})^2 (\\sqrt{2} - 1)^3] = \\exp(-\\tfrac{1}{3}\\sqrt{2}\\,\\pi)" }, { "math_id": 101, "text": "q[(10 - 3\\sqrt{11})(3\\sqrt{11} - 7\\sqrt{2})] = \\exp(-\\sqrt{22}\\,\\pi)" }, { "math_id": 102, "text": "q[(10 - 3\\sqrt{11})(3\\sqrt{11} + 7\\sqrt{2})] = \\exp(-\\tfrac{1}{11}\\sqrt{22}\\,\\pi)" }, { "math_id": 103, "text": "q\\bigl\\{(\\sqrt{26}+5)(\\sqrt{2}-1)^2 \\tan\\bigl[\\tfrac{1}{4}\\pi-\\arctan(\\tfrac{1}{3}\\sqrt[3]{3\\sqrt{3}+\\sqrt{26}}-\\tfrac{1}{3}\\sqrt[3]{3\\sqrt{3}-\\sqrt{26}}+\\tfrac{1}{6}\\sqrt{26}-\\tfrac{1}{2}\\sqrt{2})\\bigr]^4\\bigr\\} = \\exp(-\\sqrt{26}\\,\\pi)" }, { "math_id": 104, "text": "q\\bigl\\{(\\sqrt{26}+5)(\\sqrt{2}+1)^2 \\tan\\bigl[\\arctan(\\tfrac{1}{3}\\sqrt[3]{3\\sqrt{3}+\\sqrt{26}}-\\tfrac{1}{3}\\sqrt[3]{3\\sqrt{3}-\\sqrt{26}}+\\tfrac{1}{6}\\sqrt{26}+\\tfrac{1}{2}\\sqrt{2})-\\tfrac{1}{4}\\pi\\bigr]^4\\bigr\\} = \\exp(-\\tfrac{1}{13}\\sqrt{26}\\,\\pi)" }, { "math_id": 105, "text": "\\sum_{n = 1}^{\\infty} q(x)^{\\Box(n)} = \\tfrac{1}{2}\\vartheta_{00}[q(x)] - \\tfrac{1}{2} = \\tfrac{1}{2}\\sqrt{2\\pi^{-1}K(x)} - \\tfrac{1}{2} = \\tfrac{1}{2}\\operatorname{agm}(1-x;1+x)^{-1/2} - \\tfrac{1}{2} " }, { "math_id": 106, "text": "\\sum_{n = 1}^{\\infty} q(x)^{\\Box(2n-1)} = \\tfrac{1}{4}\\vartheta_{00}[q(x)] - \\tfrac{1}{4}\\vartheta_{01}[q(x)] = \\tfrac{1}{4}(1-\\sqrt[4]{1-x^2})\\sqrt{2\\pi^{-1}K(x)} " }, { "math_id": 107, "text": "\\sum_{n = 1}^{\\infty} \\frac{2q(x)^{n}}{q(x)^{2n} + 1} = \\tfrac{1}{2}\\vartheta_{00}[q(x)]^2 - \\tfrac{1}{2} = \\pi^{-1}K(x) - \\tfrac{1}{2} " }, { "math_id": 108, "text": "\\sum_{n = 1}^{\\infty} \\frac{2q(x)^{2n-1}}{q(x)^{4n-2} + 1} = \\tfrac{1}{4}\\vartheta_{00}[q(x)]^2 - \\tfrac{1}{4}\\vartheta_{01}[q(x)]^2 = \\tfrac{1}{2}(1-\\sqrt{1-x^2})\\pi^{-1}K(x) " }, { "math_id": 109, "text": "\\sum_{n = 1}^{\\infty} \\Box(n) q(x)^{\\Box(n)} = 2^{-1/2}\\pi^{-5/2}K(x)^{3/2}[E(x)-(1-x^2)K(x)] " }, { "math_id": 110, "text": "\\sum_{n = 1}^{\\infty} \\biggl[\\frac{2q(x)^{n}}{1 + q(x)^{2n}}\\biggr]^2 = 2\\pi^{-2}E(x)K(x) - \\tfrac{1}{2} " }, { "math_id": 111, "text": "\\sum_{n = 1}^{\\infty} \\biggl[\\frac{2q(x)^{n}}{1 - q(x)^{2n}}\\biggr]^2 = \\tfrac{2}{3}\\pi^{-2}(2 - x^2)K(x)^2 - 2\\pi^{-2}K(x)E(x) + \\tfrac{1}{6} " }, { "math_id": 112, "text": "\\Box(n)=n^2" }, { "math_id": 113, "text": "\\operatorname{E}(\\varepsilon)" }, { "math_id": 114, "text": "\\varepsilon" }, { "math_id": 115, "text": "\\prod_{n = 1}^{\\infty} [1-q(x)^{2n}][1+q(x)^{2n-1}]^2 = \\vartheta_{00}[q(x)] = \\sqrt{2\\pi^{-1}K(x)} " }, { "math_id": 116, "text": "\\prod_{n = 1}^{\\infty} [1-q(x)^{2n}][1-q(x)^{2n-1}]^2 = \\vartheta_{01}[q(x)] = \\sqrt[4]{1-x^2}\\sqrt{2\\pi^{-1}K(x)} " }, { "math_id": 117, 
"text": "q(\\varepsilon)[q(\\varepsilon);q(\\varepsilon)]_{\\infty}^{24} = 256\\,\\varepsilon^2 (1 - \\varepsilon^2)^4 \\pi^{-{12}}K(\\varepsilon)^{12} " }, { "math_id": 118, "text": "\\varepsilon^2 [q(\\varepsilon);q(\\varepsilon)^2]_{\\infty}^{24} = 16\\,(1 - \\varepsilon^2)^2 q(\\varepsilon) " }, { "math_id": 119, "text": "K(\\varepsilon) = \\tfrac{1}{2}\\pi\\,\\vartheta_{00}[q(\\varepsilon)]^2" }, { "math_id": 120, "text": "E(\\varepsilon) = 2\\pi q(\\varepsilon)\\,\\vartheta_{00}'[q(\\varepsilon)]\\vartheta_{00}[q(\\varepsilon)]^{-3} + \\tfrac{1}{2}\\pi(1 - \\varepsilon^2)\\,\\vartheta_{00}[q(\\varepsilon)]^2" }, { "math_id": 121, "text": "\\vartheta_{00}'(x) = \\frac{\\mathrm{d}}{\\mathrm{d}x}\\,\\vartheta_{00}(x) = 2 + \\sum_{n = 1}^{\\infty} 2(n + 1)^2 x^{n(n+2)}" }, { "math_id": 122, "text": "\\operatorname{zn}(x;k) = \\sum_{n = 1}^{\\infty} \\frac{2\\pi K(k)^{-1}\\sin[\\pi K(k)^{-1}x]q(k)^{2n-1}}{1-2\\cos[\\pi K(k)^{-1}x]q(k)^{2n-1}+q(k)^{4n-2}}" }, { "math_id": 123, "text": "\\operatorname{dn}(x;k) = \\sqrt[4]{1-k^2}\\prod_{n = 1}^{\\infty} \\frac{1+2\\cos[\\pi K(k)^{-1}x]q(k)^{2n-1}+q(k)^{4n-2}}{1-2\\cos[\\pi K(k)^{-1}x]q(k)^{2n-1}+q(k)^{4n-2}} " }, { "math_id": 124, "text": "\\operatorname{sn}(x;k) = 2\\sqrt[4]{k^{-2}q(k)}\\,\\sin[\\tfrac{1}{2}\\pi K(k)^{-1}x]\\prod_{n = 1}^{\\infty} \\frac{1 - 2q(k)^{2n}\\cos[\\pi K(k)^{-1}x] + q(k)^{4n}}{1 - 2q(k)^{2n - 1}\\cos[\\pi K(k)^{-1}x] + q(k)^{4n - 2}}" }, { "math_id": 125, "text": "\\operatorname{cd}(x;k) = 2\\sqrt[4]{k^{-2}q(k)}\\,\\cos[\\tfrac{1}{2}\\pi K(k)^{-1}x]\\prod_{n = 1}^{\\infty} \\frac{1 + 2q(k)^{2n}\\cos[\\pi K(k)^{-1}x] + q(k)^{4n}}{1 + 2q(k)^{2n - 1}\\cos[\\pi K(k)^{-1}x] + q(k)^{4n - 2}}" }, { "math_id": 126, "text": "\\operatorname{cn}(x;k) = 2\\sqrt[4]{k^{-2}(1 - k^2)\\,q(k)}\\,\\cos[\\tfrac{1}{2}\\pi K(k)^{-1}x]\\prod_{n = 1}^{\\infty} \\frac{1 + 2q(k)^{2n}\\cos[\\pi K(k)^{-1}x] + q(k)^{4n}}{1 - 2q(k)^{2n - 1}\\cos[\\pi K(k)^{-1}x] + q(k)^{4n - 2}}" }, { "math_id": 127, "text": "\\operatorname{sn}(x;k) = \\frac{2\\{\\operatorname{zn}(\\tfrac{1}{2}x;k) + \\operatorname{zn}[K(k)-\\tfrac{1}{2}x;k]\\}}{k^2+\\{\\operatorname{zn}(\\tfrac{1}{2}x;k) + \\operatorname{zn}[K(k)-\\tfrac{1}{2}x;k]\\}^2}" }, { "math_id": 128, "text": "\\operatorname{cd}(x;k) = \\operatorname{sn}[K(k) - x;k]" }, { "math_id": 129, "text": "\\operatorname{cn}(x;k) = \\operatorname{cd}(x;k)\\operatorname{dn}(x;k)" }, { "math_id": 130, "text": "\\operatorname{dn}(x;k) = \\frac{k^2-\\{\\operatorname{zn}(\\tfrac{1}{2}x;k) + \\operatorname{zn}[K(k)-\\tfrac{1}{2}x;k]\\}^2}{k^2+\\{\\operatorname{zn}(\\tfrac{1}{2}x;k) + \\operatorname{zn}[K(k)-\\tfrac{1}{2}x;k]\\}^2}" }, { "math_id": 131, "text": "\\operatorname{sc}[\\tfrac{2}{3}K(k);k] = \\frac{\\sqrt{3}\\,\\vartheta_{01}[q(k)^6]}{\\sqrt{1 - k^2}\\,\\vartheta_{01}[q(k)^2]}" }, { "math_id": 132, "text": "\\operatorname{sn}[\\tfrac{1}{3}K(k);k] = \\frac{2\\vartheta_{00}[q(k)]^2}{3\\vartheta_{00}[q(k)^3]^2 + \\vartheta_{00}[q(k)]^2} = \\frac{3\\vartheta_{01}[q(k)^3]^2 - \\vartheta_{01}[q(k)]^2}{3\\vartheta_{01}[q(k)^3]^2 + \\vartheta_{01}[q(k)]^2}" }, { "math_id": 133, "text": "\\operatorname{cn}[\\tfrac{2}{3}K(k);k] = \\frac{3\\vartheta_{00}[q(k)^3]^2 - \\vartheta_{00}[q(k)]^2}{3\\vartheta_{00}[q(k)^3]^2 + \\vartheta_{00}[q(k)]^2} = \\frac{2\\vartheta_{01}[q(k)]^2}{3\\vartheta_{01}[q(k)^3]^2 + \\vartheta_{01}[q(k)]^2} " }, { "math_id": 134, "text": "\\operatorname{sn}[\\tfrac{1}{5}K(k);k] = \\biggl\\{\\frac{\\sqrt{5}\\,\\vartheta_{01}[q(k)^5]}{\\vartheta_{01}[q(k)]} - 
1\\biggr\\}\\biggl\\{\\frac{5\\vartheta_{01}[q(k)^{10}]^2}{\\vartheta_{01}[q(k)^2]^2} - 1\\biggr\\}^{-1} " }, { "math_id": 135, "text": "\\operatorname{sn}[\\tfrac{3}{5}K(k);k] = \\biggl\\{\\frac{\\sqrt{5}\\,\\vartheta_{01}[q(k)^5]}{\\vartheta_{01}[q(k)]} + 1\\biggr\\}\\biggl\\{\\frac{5\\vartheta_{01}[q(k)^{10}]^2}{\\vartheta_{01}[q(k)^2]^2} - 1\\biggr\\}^{-1} " }, { "math_id": 136, "text": "\\operatorname{cn}[\\tfrac{2}{5}K(k);k] = \\biggl\\{\\frac{\\sqrt{5}\\,\\vartheta_{00}[q(k)^5]}{\\vartheta_{00}[q(k)]} + 1\\biggr\\}\\biggl\\{\\frac{5\\vartheta_{01}[q(k)^{10}]^2}{\\vartheta_{01}[q(k)^2]^2} - 1\\biggr\\}^{-1} " }, { "math_id": 137, "text": "\\operatorname{cn}[\\tfrac{4}{5}K(k);k] = \\biggl\\{\\frac{\\sqrt{5}\\,\\vartheta_{00}[q(k)^5]}{\\vartheta_{00}[q(k)]} - 1\\biggr\\}\\biggl\\{\\frac{5\\vartheta_{01}[q(k)^{10}]^2}{\\vartheta_{01}[q(k)^2]^2} - 1\\biggr\\}^{-1} " }, { "math_id": 138, "text": "\\exp(-2\\sqrt{3}\\,\\pi) = \\exp(-\\sqrt{3}\\,\\pi)^2 = q\\bigl[\\sin(\\tfrac{1}{12}\\pi)\\bigr]^2 = q\\bigl[\\tan(\\tfrac{1}{24}\\pi)^2\\bigr] " }, { "math_id": 139, "text": "\\exp(-2\\sqrt{5}\\,\\pi) = \\exp(-\\sqrt{5}\\,\\pi)^2 = q\\bigl\\{\\sin\\bigl[\\tfrac{1}{2}\\arcsin(\\sqrt{5} - 2)\\bigr]\\bigr\\}^2 = q\\bigl\\{\\tan\\bigl[\\tfrac{1}{4}\\arcsin(\\sqrt{5} - 2)\\bigr]^2\\bigr\\} " }, { "math_id": 140, "text": "\\exp(-2\\sqrt{7}\\,\\pi) = \\exp(-\\sqrt{7}\\,\\pi)^2 = q\\bigl\\{\\sin\\bigl[\\tfrac{1}{2}\\arcsin(\\tfrac{1}{8})\\bigr]\\bigr\\}^2 = q\\bigl\\{\\tan\\bigl[\\tfrac{1}{4}\\arcsin(\\tfrac{1}{8})\\bigr]^2\\bigr\\} " }, { "math_id": 141, "text": "\\exp(-2\\sqrt{13}\\,\\pi) = \\exp(-\\sqrt{13}\\,\\pi)^2 = q\\bigl\\{\\sin\\bigl[\\tfrac{1}{2}\\arcsin(5\\sqrt{13} - 18)\\bigr]\\bigr\\}^2 = q\\bigl\\{\\tan\\bigl[\\tfrac{1}{4}\\arcsin(5\\sqrt{13} - 18)\\bigr]^2\\bigr\\} " }, { "math_id": 142, "text": "\\exp(-2\\sqrt{6}\\,\\pi) = \\exp(-\\sqrt{6}\\,\\pi)^2 =" }, { "math_id": 143, "text": "= q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{2}\\operatorname{arsinh}\\bigl[(\\sqrt{2} - 1)^2\\bigr]\\bigr\\}\\biggr\\rangle^2 = q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{4}\\operatorname{arsinh}\\bigl[(\\sqrt{2} - 1)^2\\bigr]\\bigr\\}^2\\biggr\\rangle " }, { "math_id": 144, "text": "\\exp(-2\\sqrt{10}\\,\\pi) = \\exp(-\\sqrt{10}\\,\\pi)^2 =" }, { "math_id": 145, "text": "= q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{2}\\operatorname{arsinh}\\bigl[(\\sqrt{5} - 2)^2\\bigr]\\bigr\\}\\biggr\\rangle^2 = q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{4}\\operatorname{arsinh}\\bigl[(\\sqrt{5} - 2)^2\\bigr]\\bigr\\}^2\\biggr\\rangle " }, { "math_id": 146, "text": "\\exp(-2\\sqrt{14}\\,\\pi) = \\exp(-\\sqrt{14}\\,\\pi)^2 =" }, { "math_id": 147, "text": "= q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{2}\\operatorname{arsinh}\\bigl[(\\sqrt{2} + \\tfrac{1}{2} - \\tfrac{1}{2}\\sqrt{4\\sqrt{2} + 5})^3\\bigr]\\bigr\\}\\biggr\\rangle^2 = q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{4}\\operatorname{arsinh}\\bigl[(\\sqrt{2} + \\tfrac{1}{2} - \\tfrac{1}{2}\\sqrt{4\\sqrt{2} + 5})^3\\bigr]\\bigr\\}^2\\biggr\\rangle " }, { "math_id": 148, "text": "\\exp(-2\\sqrt{22}\\,\\pi) = \\exp(-\\sqrt{22}\\,\\pi)^2 =" }, { "math_id": 149, "text": "= q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{2}\\operatorname{arsinh}\\bigl[(\\sqrt{2} - 1)^6\\bigr]\\bigr\\}\\biggr\\rangle^2 = q\\biggl\\langle\\operatorname{tanh}\\bigl\\{\\tfrac{1}{4}\\operatorname{arsinh}\\bigl[(\\sqrt{2} - 1)^6\\bigr]\\bigr\\}^2\\biggr\\rangle " }, { "math_id": 150, "text": " \\varepsilon " 
}, { "math_id": 151, "text": " u " }, { "math_id": 152, "text": "\\varepsilon^2 x^4 - 2\\varepsilon^2 x^3 + 2x - 1 = 0 " }, { "math_id": 153, "text": "x = \\text{sn}\\bigl[\\tfrac{1}{3}K(\\varepsilon);\\varepsilon\\bigr] " }, { "math_id": 154, "text": "\\varepsilon = u(\\sqrt{u^4-u^2+1}-u^2+1)" }, { "math_id": 155, "text": "u^2(\\sqrt{u^4-u^2+1}-u^2+1)^2 (x^4 - 2x^3) + 2x - 1 = 0 " }, { "math_id": 156, "text": " \\tfrac{1}{2} < x < 1 \\,\\cap \\,x \\in \\R " }, { "math_id": 157, "text": "x = \\frac{1}{\\sqrt{u^4-u^2+1}-u^2+1} " }, { "math_id": 158, "text": "\\text{sn}\\bigl[\\tfrac{1}{3}K(\\varepsilon);\\varepsilon\\bigr] \\bigl[\\varepsilon = u(\\sqrt{u^4-u^2+1}-u^2+1)\\bigr] = \\frac{1}{\\sqrt{u^4-u^2+1}-u^2+1}" }, { "math_id": 159, "text": "q\\bigl[u(\\sqrt{u^4-u^2+1}-u^2+1)\\bigr]^3 = q\\bigl[u(\\sqrt{u^4-u^2+1}+u^2-1)\\bigr] " }, { "math_id": 160, "text": "q\\bigl[u(\\sqrt{u^4-u^2+1}-u^2+1)\\bigr]^3 = q\\bigl\\{\\bigl[u(\\sqrt{u^4-u^2+1}-u^2+1)\\bigr]^3 \\bigl(\\sqrt{u^4-u^2+1}-u^2+1\\bigr)^{-4} \\bigr\\} " }, { "math_id": 161, "text": "q(\\varepsilon)^3 = q\\bigl\\{\\varepsilon^3 \\text{sn}\\bigl[\\tfrac{1}{3}K(\\varepsilon);\\varepsilon\\bigr]^4 \\bigr\\}" }, { "math_id": 162, "text": "t " }, { "math_id": 163, "text": "t = 1 " }, { "math_id": 164, "text": "{\\color{blue}\\exp(-3\\sqrt{2}\\,\\pi)} = \\exp(-\\sqrt{2}\\,\\pi)^3 = q(\\sqrt{2} - 1)^3 = q\\bigl\\{\\tan\\bigl[\\tfrac{1}{2}\\arctan(1)\\bigr]\\bigr\\}^3 = " }, { "math_id": 165, "text": "= q\\bigl\\{\\tan\\bigl[\\tfrac{1}{2}\\arctan(1)\\bigr]^3 \\tan\\bigl[\\arctan(\\sqrt{3} + \\sqrt{2}) - \\tfrac{1}{4}\\pi\\bigr]^4\\bigr\\} = {\\color{blue}q\\bigl[(\\sqrt{2} - 1)^3 (\\tfrac{1}{2}\\sqrt{6} - \\tfrac{1}{2}\\sqrt{2})^4\\bigr]} " }, { "math_id": 166, "text": "t = \\Phi^{-2} = \\tfrac{1}{2}(3 - \\sqrt{5})" }, { "math_id": 167, "text": "{\\color{blue}\\exp(-3\\sqrt{10}\\,\\pi)} = \\exp(-\\sqrt{10}\\,\\pi)^3 = q\\bigl[(\\sqrt{10} - 3)(\\sqrt{2} - 1)^2\\bigr]^3 = q\\bigl\\{\\tan\\bigl[\\tfrac{1}{2}\\arctan(\\Phi^{-6})\\bigr]\\bigr\\}^3 = " }, { "math_id": 168, "text": "= q\\bigl\\{\\tan\\bigl[\\tfrac{1}{2}\\arctan(\\Phi^{-6})\\bigr]^3 \\tan\\bigl[\\arctan\\bigl(\\sqrt{2\\sqrt{\\Phi^{-8} - \\Phi^{-4} + 1} - \\Phi^{-4} + 2} + \\sqrt{\\Phi^{-4} + 1}\\bigr) - \\tfrac{1}{4}\\pi\\bigr]^4\\bigr\\} = " }, { "math_id": 169, "text": "= {\\color{blue}q\\bigl\\{(\\sqrt{10} - 3)^3(\\sqrt{2} - 1)^6 \\tan\\bigl[\\arctan\\bigl(\\sqrt{2\\sqrt{\\Phi^{-8} - \\Phi^{-4} + 1} - \\Phi^{-4} + 2} + \\sqrt{\\Phi^{-4} + 1}\\bigr) - \\tfrac{1}{4}\\pi\\bigr]^4\\bigr\\}} " }, { "math_id": 170, "text": " \\Phi " }, { "math_id": 171, "text": " \\Phi = \\tfrac{1}{2}(\\sqrt{5} + 1)" }, { "math_id": 172, "text": "q(\\varepsilon_1 \\in \\mathbb{A}^{+})^{w \\in \\mathbb{Q^{+}}} = q(\\varepsilon_2 \\in \\mathbb{A}^{+}) " }, { "math_id": 173, "text": "q(\\varepsilon)^2 = q\\{\\varepsilon^2\\operatorname{sn}[\\tfrac{1}{2}K(\\varepsilon);\\varepsilon]^4\\} = q[\\varepsilon^2(1+\\sqrt{1-\\varepsilon^2})^{-2}] " }, { "math_id": 174, "text": "q(\\varepsilon)^3 = q\\{\\varepsilon^3\\operatorname{sn}[\\tfrac{1}{3}K(\\varepsilon);\\varepsilon]^4\\} " }, { "math_id": 175, "text": "q(\\varepsilon)^4 = q\\{\\varepsilon^4\\operatorname{sn}[\\tfrac{1}{4}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{3}{4}K(\\varepsilon);\\varepsilon]^4\\} = q[(1-\\sqrt[4]{1-\\varepsilon^2})^{2}(1+\\sqrt[4]{1-\\varepsilon^2})^{-2}] " }, { "math_id": 176, "text": "q(\\varepsilon)^5 = 
q\\{\\varepsilon^5\\operatorname{sn}[\\tfrac{1}{5}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{3}{5}K(\\varepsilon);\\varepsilon]^4\\} " }, { "math_id": 177, "text": "q(\\varepsilon)^6 = q\\{\\varepsilon^6\\operatorname{sn}[\\tfrac{1}{6}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{1}{2}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{5}{6}K(\\varepsilon);\\varepsilon]^4\\} " }, { "math_id": 178, "text": "q(\\varepsilon)^7 = q\\{\\varepsilon^7\\operatorname{sn}[\\tfrac{1}{7}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{3}{7}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{5}{7}K(\\varepsilon);\\varepsilon]^4\\} " }, { "math_id": 179, "text": "q(\\varepsilon)^8 = q\\{\\varepsilon^8\\operatorname{sn}[\\tfrac{1}{8}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{3}{8}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{5}{8}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{7}{8}K(\\varepsilon);\\varepsilon]^4\\} " }, { "math_id": 180, "text": "q(\\varepsilon)^9 = q\\{\\varepsilon^9\\operatorname{sn}[\\tfrac{1}{9}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{1}{3}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{5}{9}K(\\varepsilon);\\varepsilon]^4\\operatorname{sn}[\\tfrac{7}{9}K(\\varepsilon);\\varepsilon]^4\\} " }, { "math_id": 181, "text": "\\operatorname{sn}" }, { "math_id": 182, "text": "q(\\varepsilon)^{2n} = q\\biggl\\{\\varepsilon^{2n}\\prod_{k = 1}^{n}\\operatorname{sn}\\bigl[\\tfrac{2k-1}{2n}K(\\varepsilon);\\varepsilon\\bigr]^4\\biggr\\} " }, { "math_id": 183, "text": "q(\\varepsilon)^{2n+1} = q\\biggl\\{\\varepsilon^{2n+1}\\prod_{k = 1}^{n}\\operatorname{sn}\\bigl[\\tfrac{2k-1}{2n+1}K(\\varepsilon);\\varepsilon\\bigr]^4\\biggr\\} " }, { "math_id": 184, "text": "a" }, { "math_id": 185, "text": "b" }, { "math_id": 186, "text": "a^2+b^2=1" }, { "math_id": 187, "text": "\\ln[\\operatorname{q}(a)]\\ln[\\operatorname{q}(b)] = \\pi^2" }, { "math_id": 188, "text": "c" }, { "math_id": 189, "text": "d" }, { "math_id": 190, "text": "(c+1)(d+1)=2" }, { "math_id": 191, "text": "\\ln[\\operatorname{q}(c)]\\ln[\\operatorname{q}(d)] = 2\\pi^2" }, { "math_id": 192, "text": "\\ln\\biggl\\langle q\\bigl\\{\\sin\\bigl[\\tfrac{1}{4}\\pi - \\tfrac{1}{2}\\arctan(x)\\bigr]\\bigr\\}\\biggr\\rangle \\ln\\biggl\\langle q\\bigl\\{\\sin\\bigl[\\tfrac{1}{4}\\pi + \\tfrac{1}{2}\\arctan(x)\\bigr]\\bigr\\}\\biggr\\rangle = \\pi^2 " }, { "math_id": 193, "text": "\\ln\\bigl\\{q\\bigl[\\tfrac{1}{2}\\sqrt{2-2x(x^2+1)^{-1/2}}\\bigr] \\bigr\\} \\ln\\bigl\\{q\\bigl[\\tfrac{1}{2} \\sqrt{2+2x(x^2+1)^{-1/2}}\\bigr]\\bigr\\} = \\pi^2 " }, { "math_id": 194, "text": "\\ln\\biggl\\langle q\\bigl\\{\\tan\\bigl[\\tfrac{1}{8}\\pi - \\tfrac{1}{4}\\arctan(x)\\bigr]\\bigr\\}\\biggr\\rangle \\ln \\biggl\\langle q \\bigl\\{\\tan\\bigl[\\tfrac{1}{8}\\pi + \\tfrac{1}{4}\\arctan(x)\\bigr] \\bigr\\} \\biggr\\rangle = 2\\pi^2 " }, { "math_id": 195, "text": "\\ln\\bigl\\{ q\\bigl[\\sqrt{(\\sqrt{x^2+1}+x)^2+1}-\\sqrt{x^2+1}-x\\bigr]\\bigr\\} \\ln\\bigl\\{ q \\bigl[ \\sqrt{(\\sqrt{x^2+1}-x)^2+1}-\\sqrt{x^2+1}+x\\bigr] \\bigr\\} = 2\\pi^2 " }, { "math_id": 196, "text": "\\ln\\bigl\\{q\\bigl[\\sin(\\tfrac{1}{4}\\pi)\\bigr]\\bigr\\}^2 = \\pi^2 " }, { "math_id": 197, "text": "q\\bigl[\\sin(\\tfrac{1}{4}\\pi)\\bigr] = \\exp(-\\pi) " }, { "math_id": 198, "text": "\\ln\\bigl\\{q\\bigl[\\tan(\\tfrac{1}{8}\\pi)\\bigr]\\bigr\\}^2 = 2\\pi^2 " }, { "math_id": 199, "text": "q\\bigl[\\tan(\\tfrac{1}{8}\\pi)\\bigr] = \\exp(-\\sqrt{2}\\,\\pi) " }, { "math_id": 
200, "text": "x = \\sqrt{3}" }, { "math_id": 201, "text": "\\ln\\bigl\\{q\\bigl[\\sin(\\tfrac{1}{12}\\pi)\\bigr]\\bigr\\}\\ln\\bigl\\{q\\bigl[\\sin(\\tfrac {5}{12}\\pi)\\bigr]\\bigr\\} = \\pi^2 " }, { "math_id": 202, "text": "u = 1/\\sqrt{2}" }, { "math_id": 203, "text": "q\\bigl[\\sin(\\tfrac{5}{12}\\pi)\\bigr]^3 = q\\bigl[\\sin(\\tfrac{1}{12}\\pi)\\bigr] " }, { "math_id": 204, "text": "q\\bigl[\\sin(\\tfrac{1}{12}\\pi)\\bigr] = \\exp(-\\sqrt{3}\\,\\pi) " }, { "math_id": 205, "text": "q\\bigl[\\sin(\\tfrac{5}{12}\\pi)\\bigr] = \\exp(-\\tfrac{1}{3}\\sqrt{3}\\,\\pi) " }, { "math_id": 206, "text": "x = \\sqrt{8}" }, { "math_id": 207, "text": "\\ln\\bigl\\{q\\bigl[(2 - \\sqrt{3})(\\sqrt{3} - \\sqrt{2})\\bigr]\\bigr\\}\\ln\\bigl\\{q \\bigl[(2 - \\sqrt{3})(\\sqrt{3} + \\sqrt{2})\\bigr]\\bigr\\} = 2\\pi^2 " }, { "math_id": 208, "text": "u = (\\sqrt{3} - 1)/\\sqrt{2}" }, { "math_id": 209, "text": "q\\bigl[(2 - \\sqrt{3})(\\sqrt{3} + \\sqrt{2})\\bigr]^3 = q\\bigl[(2 - \\sqrt{3})( \\sqrt{3} - \\sqrt{2})\\bigr] " }, { "math_id": 210, "text": "q\\bigl[(2 - \\sqrt{3})(\\sqrt{3} - \\sqrt{2})\\bigr] = \\exp(-\\sqrt{6}\\,\\pi) " }, { "math_id": 211, "text": "q\\bigl[(2 - \\sqrt{3})(\\sqrt{3} + \\sqrt{2})\\bigr] = \\exp(-\\tfrac{1}{3}\\sqrt{6 }\\,\\pi) " }, { "math_id": 212, "text": "\\frac{\\pi^2}{2\\varepsilon(1-\\varepsilon^2)K(\\varepsilon)^2} \\,q(\\varepsilon)\\,\\biggl\\{\\frac{\\mathrm{d}}{\\mathrm{d}\\,q(\\varepsilon)}\\,\\vartheta_{00}\\bigl[q(\\varepsilon)\\bigr]\\biggr\\} = \\biggl[\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon} \\,q(\\varepsilon)\\biggr]\\biggl\\{\\frac{\\mathrm{d}}{\\mathrm{d}\\,q(\\varepsilon)}\\,\\vartheta_{00}\\bigl[q(\\varepsilon)\\bigr]\\biggr\\} = \\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\,\\vartheta_{00}\\bigl[q(\\varepsilon)\\bigr] = \\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\,\\sqrt{2\\pi^{-1}K(\\varepsilon)} = " }, { "math_id": 213, "text": "= \\frac{1}{2}\\sqrt{2}\\,\\pi^{-1/2}\\,K(\\varepsilon)^{-1/2}\\biggl[\\frac{\\mathrm{d}}{\\mathrm{d}\\varepsilon}\\,K(\\varepsilon)\\biggr] = \\frac{1}{2}\\sqrt{2}\\,\\pi^{-1/2}\\,K(\\varepsilon)^{-1/2}\\,\\frac{E(\\varepsilon) - (1 - \\varepsilon^2)K(\\varepsilon)}{\\varepsilon(1 - \\varepsilon^2)} " }, { "math_id": 214, "text": "\\vartheta_{00}[q(\\varepsilon)] = \\sqrt{2\\pi^{-1} K(\\varepsilon)}" }, { "math_id": 215, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\,q(\\varepsilon)}\\,\\vartheta_{00}\\bigl[q(\\varepsilon)\\bigr] = \\sqrt{2}\\,\\pi^{-5/2}\\,q(\\varepsilon)^{-1}\\,K(\\varepsilon)^{3/2}\\bigl[E(\\varepsilon) - (1 - \\varepsilon^2)K(\\varepsilon)\\bigr] " }, { "math_id": 216, "text": "(1 + \\sqrt{1 - \\varepsilon^2})\\,E\\left(\\frac{1 - \\sqrt{1 - \\varepsilon^2}}{1 + \\sqrt{1 - \\varepsilon^2}}\\right) = E(\\varepsilon) + \\sqrt{1 - \\varepsilon^2}\\,K(\\varepsilon) " }, { "math_id": 217, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\,q(\\varepsilon)}\\,\\vartheta_{00}\\bigl[q(\\varepsilon)\\bigr] = \\sqrt{2}\\,\\pi^{-5/2}\\,q(\\varepsilon)^{-1}\\,K(\\varepsilon)^{3/2}(1 + \\sqrt{1 - \\varepsilon^2})\\left[E\\left(\\frac{1 - \\sqrt{1 - \\varepsilon^2}}{1 + \\sqrt{1 - \\varepsilon^2}}\\right) - \\sqrt{1 - \\varepsilon^2}\\,K(\\varepsilon)\\right] " }, { "math_id": 218, "text": "\\vartheta_{01}[q(\\varepsilon)] = \\sqrt[4]{1 - \\varepsilon^2}\\sqrt{2\\pi^{-1} K(\\varepsilon)}" }, { "math_id": 219, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}\\,q(\\varepsilon)}\\,\\vartheta_{00}\\bigl[q(\\varepsilon)\\bigr] = \\frac{1}{2\\pi}\\,q(\\varepsilon)^{-1}\\vartheta_{00}[q(\\varepsilon)]\\bigl\\{\\vartheta_{00}[q(\\varepsilon)]^2 
+ \\vartheta_{01}[q(\\varepsilon)]^2\\bigr\\}\\biggl\\langle E\\biggl\\{\\frac{\\vartheta_{00}[q(\\varepsilon)]^2 - \\vartheta_{01}[q(\\varepsilon)]^2}{\\vartheta_{00}[q(\\varepsilon)]^2 + \\vartheta_{01}[q(\\varepsilon)]^2}\\biggr\\} - \\frac{\\pi}{2}\\,\\vartheta_{01}\\bigl[q(\\varepsilon)\\bigr]^2\\biggr\\rangle " }, { "math_id": 220, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,\\vartheta_{00}(x) = \\vartheta_{00}(x)\\bigl[\\vartheta_{00}(x)^2+\\vartheta_{01}(x)^2\\bigr]\\biggl\\{\\frac{1}{2\\pi x}E\\biggl[\\frac{\\vartheta_{00}(x)^2-\\vartheta_{01}(x)^2}{\\vartheta_{00}(x)^2+\\vartheta_{01}(x)^2}\\biggr] - \\frac{\\vartheta_{01}(x)^2}{4x}\\biggr\\}" }, { "math_id": 221, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,\\vartheta_{01}(x) = \\vartheta_{01}(x)\\bigl[\\vartheta_{00}(x)^2+\\vartheta_{01}(x)^2\\bigr]\\biggl\\{\\frac{1}{2\\pi x}E\\biggl[\\frac{\\vartheta_{00}(x)^2-\\vartheta_{01}(x)^2}{\\vartheta_{00}(x)^2+\\vartheta_{01}(x)^2}\\biggr] - \\frac{\\vartheta_{00}(x)^2}{4x}\\biggr\\}" }, { "math_id": 222, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,\\vartheta_{10}(x) = \\frac{1}{2\\pi x} \\vartheta_{10}(x)\\vartheta_{00}(x)^2 E\\biggl[\\frac{\\vartheta_{10}(x)^2}{\\vartheta_{00}(x)^2}\\biggr]" }, { "math_id": 223, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,\\frac{\\vartheta_{00}(x)}{\\vartheta_{01}(x)} = \\frac{\\vartheta_{00}(x)^5 - \\vartheta_{00}(x)\\vartheta_{01}(x)^4}{4x\\,\\vartheta_{01}(x)}" }, { "math_id": 224, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,\\frac{\\vartheta_{10}(x)}{\\vartheta_{00}(x)} = \\frac{\\vartheta_{10}(x)\\vartheta_{01}(x)^4}{4x\\,\\vartheta_{00}(x)}" }, { "math_id": 225, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}x} \\,\\frac{\\vartheta_{10}(x)}{\\vartheta_{01}(x)} = \\frac{\\vartheta_{10}(x)\\vartheta_{00}(x)^4}{4x\\,\\vartheta_{01}(x)}" }, { "math_id": 226, "text": "\\vartheta_{10}(x) = 2x^{1/4} + 2x^{1/4}\\sum_{n = 1}^{\\infty} x^{2\\bigtriangleup(n)} " }, { "math_id": 227, "text": "\\bigtriangleup(n) = \\tfrac{1}{2}n(n + 1) " } ]
https://en.wikipedia.org/wiki?curid=1471192
1471280
Quarter period
Special function in the theory of elliptic functions In mathematics, the quarter periods "K"("m") and i"K" ′("m") are special functions that appear in the theory of elliptic functions. The quarter periods "K" and i"K" ′ are given by formula_0 and formula_1 When "m" is a real number, 0 &lt; "m" &lt; 1, then both "K" and "K" ′ are real numbers. By convention, "K" is called the "real quarter period" and i"K" ′ is called the "imaginary quarter period". Any one of the numbers "m", "K", "K" ′, or "K" ′/"K" uniquely determines the others. These functions appear in the theory of Jacobian elliptic functions; they are called "quarter periods" because the elliptic functions formula_2 and formula_3 are periodic functions with periods formula_4 and formula_5 However, the formula_6 function is also periodic with a smaller period (in terms of the absolute value) than formula_7, namely formula_8. Notation. The quarter periods are essentially the elliptic integral of the first kind, by making the substitution formula_9. In this case, one writes formula_10 instead of formula_11, understanding the difference between the two depends notationally on whether formula_12 or formula_13 is used. This notational difference has spawned a terminology to go with it: formula_20 The elliptic modulus can be expressed in terms of the quarter periods as formula_21 and formula_22 where formula_23 and formula_24 are Jacobian elliptic functions. The nome formula_25 is given by formula_26 The complementary nome is given by formula_27 The real quarter period can be expressed as a Lambert series involving the nome: formula_28 Additional expansions and relations can be found on the page for elliptic integrals.
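A short numerical sketch of these quantities may be helpful. It uses SciPy's ellipk function, which takes the parameter "m" (not the modulus "k"); the sample value "m" = 0.5 and the truncation of the series at 200 terms are arbitrary choices for illustration:

```python
# Quarter periods K(m), K'(m) and the nome q for a sample parameter m.
# Note: scipy.special.ellipk expects the parameter m = k^2, not the modulus k.
import numpy as np
from scipy.special import ellipk

m = 0.5                              # arbitrary sample parameter, 0 < m < 1
K = ellipk(m)                        # real quarter period K(m)
K_prime = ellipk(1.0 - m)            # K'(m) = K(1 - m)

q = np.exp(-np.pi * K_prime / K)     # nome
q1 = np.exp(-np.pi * K / K_prime)    # complementary nome

# Check K against the Lambert series given above, truncated at 200 terms.
n = np.arange(1, 201)
K_series = np.pi / 2 + 2 * np.pi * np.sum(q ** n / (1 + q ** (2 * n)))

print(K, K_prime, q, q1)
print(K, K_series)                   # the two values of K should agree closely
```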
[ { "math_id": 0, "text": "K(m)=\\int_0^{\\frac{\\pi}{2}} \\frac{d\\theta}{\\sqrt {1-m \\sin^2 \\theta}}" }, { "math_id": 1, "text": "{\\rm{i}}K'(m) = {\\rm{i}}K(1-m).\\," }, { "math_id": 2, "text": "\\operatorname{sn}u" }, { "math_id": 3, "text": "\\operatorname{cn}u" }, { "math_id": 4, "text": "4K" }, { "math_id": 5, "text": "4{\\rm{i}}K'." }, { "math_id": 6, "text": "\\operatorname{sn}" }, { "math_id": 7, "text": "4\\mathrm iK'" }, { "math_id": 8, "text": "2\\mathrm iK'" }, { "math_id": 9, "text": "k^2=m" }, { "math_id": 10, "text": "K(k)\\," }, { "math_id": 11, "text": "K(m)" }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "m" }, { "math_id": 14, "text": "m_1= 1-m" }, { "math_id": 15, "text": "k'" }, { "math_id": 16, "text": "{k'}^2=m_1" }, { "math_id": 17, "text": "\\alpha" }, { "math_id": 18, "text": "k=\\sin \\alpha," }, { "math_id": 19, "text": "\\frac{\\pi}{2}-\\alpha" }, { "math_id": 20, "text": "m_1=\\sin^2\\left(\\frac{\\pi}{2}-\\alpha\\right)=\\cos^2 \\alpha." }, { "math_id": 21, "text": "k=\\operatorname{ns} (K+{\\rm{i}}K')" }, { "math_id": 22, "text": "k'= \\operatorname{dn} K" }, { "math_id": 23, "text": "\\operatorname{ns}" }, { "math_id": 24, "text": "\\operatorname{dn}" }, { "math_id": 25, "text": "q\\," }, { "math_id": 26, "text": "q=e^{-\\frac{\\pi K'}{K}}." }, { "math_id": 27, "text": "q_1=e^{-\\frac{\\pi K}{K'}}." }, { "math_id": 28, "text": "K=\\frac{\\pi}{2} + 2\\pi\\sum_{n=1}^\\infty \\frac{q^n}{1+q^{2n}}." } ]
https://en.wikipedia.org/wiki?curid=1471280
1471307
Counting problem (complexity)
Type of computational problem In computational complexity theory and computability theory, a counting problem is a type of computational problem. If "R" is a search problem then formula_0 is the corresponding counting function and formula_1 denotes the corresponding decision problem. Note that "cR" is a search problem while #"R" is a decision problem; however, "cR" can be C Cook-reduced to #"R" (for appropriate C) using a binary search (the reason #"R" is defined the way it is, rather than being the graph of "cR", is to make this binary search possible). Counting complexity class. Just as NP has NP-complete problems via many-one reductions, #P has #P-complete problems via parsimonious reductions, problem transformations that preserve the number of solutions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
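To make the binary-search reduction above concrete, here is a small sketch in which the relation "R"("x","y") is ""y" divides "x"" (chosen purely for illustration), so that "cR"("x") is the number of divisors of "x"; the decision oracle for #"R" answers whether "y" ≤ "cR"("x"):

```python
# Recovering the counting function c_R from the decision problem #R by binary search.
# Toy relation R(x, y): "y is a divisor of x", so c_R(x) is the number of divisors of x.

def in_sharp_R(x: int, y: int) -> bool:
    """Decision oracle for #R: accept iff y <= c_R(x)."""
    c = sum(1 for d in range(1, x + 1) if x % d == 0)  # stands in for the oracle
    return y <= c

def c_R_via_binary_search(x: int) -> int:
    lo, hi = 0, x            # 0 <= c_R(x) <= x is a trivial upper bound for this toy R
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if in_sharp_R(x, mid):   # mid <= c_R(x): the answer lies in [mid, hi]
            lo = mid
        else:                    # mid > c_R(x): the answer lies in [lo, mid - 1]
            hi = mid - 1
    return lo

print(c_R_via_binary_search(12))   # 12 has divisors 1, 2, 3, 4, 6, 12 -> prints 6
```

Only logarithmically many oracle queries are needed, which is what makes the Cook reduction efficient.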
[ { "math_id": 0, "text": "c_R(x)=\\vert\\{y\\mid R(x,y)\\}\\vert \\," }, { "math_id": 1, "text": "\\#R=\\{(x,y)\\mid y\\leq c_R(x)\\}" } ]
https://en.wikipedia.org/wiki?curid=1471307
14713923
5-Hydroxyeicosatetraenoic acid
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound 5-Hydroxyeicosatetraenoic acid (5-HETE, 5("S")-HETE, or 5"S"-HETE) is an eicosanoid, i.e. a metabolite of arachidonic acid. It is produced by diverse cell types in humans and other animal species. These cells may then metabolize the formed 5("S")-HETE to 5-oxo-eicosatetraenoic acid (5-oxo-ETE), 5("S"),15("S")-dihydroxyeicosatetraenoic acid (5("S"),15("S")-diHETE), or 5-oxo-15-hydroxyeicosatetraenoic acid (5-oxo-15("S")-HETE). 5("S")-HETE, 5-oxo-ETE, 5("S"),15("S")-diHETE, and 5-oxo-15("S")-HETE, while differing in potencies, share a common mechanism for activating cells and a common set of activities. They are therefore a family of structurally related metabolites. Animal studies and a limited set of human studies suggest that this family of metabolites serve as hormone-like autocrine and paracrine signalling agents that contribute to the up-regulation of acute inflammatory and allergic responses. In this capacity, these metabolites may be members of the innate immune system. "In vitro" studies suggest that 5("S")-HETE and/or other of its family members may also be active in promoting the growth of certain types of cancers, in simulating bone reabsorption, in signaling for the secretion of aldosterone and progesterone, in triggering parturition, and in contributing to other responses in animals and humans. However, the roles of 5("S")-HETE family members in these responses as well as in inflammation and allergy are unproven and will require much further study. Among the 5("S")-HETE family members, 5("S")-HETE takes precedence over the other members of this family because it was the first to be discovered and has been studied far more thoroughly. However, 5-oxo-ETE is the most potent member of this family and therefore may be its critical member with respect to physiology and pathology. 5-OxoETE has gained attention in recent studies. Nomenclature. 5-Hydroxyeicosatetraenoic acid is more properly termed 5("S")-hydroxyicosatetraenoic acid or 5("S")-HETE) to signify the ("S") configuration of its 5-hydroxy residue as opposed to its 5("R")-hydroxyicosatetraenoic acid (i.e., 5("R")-HETE) stereoisomer. Since 5("R")-HETE was rarely considered in the early literature, 5("S")-HETE was frequently termed 5-HETE. This practice occasionally continues. 5("S")-HETE's IUPAC name, (5"S",6"E",8"Z",11"Z",14"Z")-5-hydroxyicosa-6,8,11,14-tetraenoic acid, defines 5("S")-HETE's structure unambiguously by notating not only its "S"-hydroxyl chirality but also the cis–trans isomerism geometry for each of its 4 double bonds; E signifies trans and Z signifies cis double bond geometry. The literature commonly uses an alternate but still unambiguous name for 5("S")-HETE viz., 5("S")-hydroxy-6"E",8"Z",11"Z",14"Z"-eicosatetraenoic acid. History of discovery. The Nobel laureate, Bengt I. Samuelsson, and colleagues first described 5("S")-HETE in 1976 as a metabolite of arachidonic acid made by rabbit neutrophils. Biological activity was linked to it several years later when it was found to stimulate human neutrophil rises in cytosolic calcium, chemotaxis, and increases in their cell surface adhesiveness as indicated by their aggregation to each other. 
Since a previously discovered arachidonic acid metabolite made by neutrophils, leukotriene B4 (LTB4), also stimulates human neutrophil calcium rises, chemotaxis, and auto-aggregation and is structurally similar to 5("S")-HETE in being a 5("S")-hydroxy-eicosateraenoate, it was assumed that 5("S")-HETE stimulated cells through the same cell surface receptors as those used by LTB4 viz., the leukotriene B4 receptors. However, further studies in neutrophils indicated that 5("S")-HETE acts through a receptor distinct from that used by LTB4 as well as various other neutrophil stimuli. This 5("S")-HETE receptor is termed the oxoeicosanoid receptor 1 (abbreviated as OXER1). 5("S")-HETE production. 5("S")-HETE is a product of the cellular metabolism of the n-6 polyunsaturated fatty acid, arachidonic acid (i.e. 5"Z",8"Z",11"Z",14"Z"-eicosatetraenoic acid), by ALOX5 (also termed arachidonate-5-lipoxygenase, 5-lipoxygenase, 5-LO, and 5-LOX). ALOX5 metabolizes arachidonic acid to its hydroperoxide derivative, arachidonic acid 5-hydroperoxide i.e. 5("S")-hydroperoxy-6"E",8"Z",11"Z",14"Z"-eicosatetraenoic acid (5("S")-HpETE). 5("S")-HpETE may then be released and rapidly converted to 5("S")-HETE by ubiquitous cellular peroxidases: Arachidonic acid + O2 → 5("S")-HpETE → 5("S")-HETE Alternatively, 5("S")-HpETE may be further metabolized to its epoxide, 5(6)-oxido-eicosatetraenoic acid viz., leukotriene A4 (i.e. 5"S",6"S"-epoxy-7"E",9"E",11"Z",14"Z"-eicosatetraenoic acid or 5"S"-5,6-oxido-7"E",9"E",11"Z",14"Z"-eicosatetraenoic acid). Leukotriene A4 may then be further metabolized either to leukotriene B4 by leukotriene A4 hydrolase or to leukotriene C4 by leukotriene C4 synthase. Finally, leukotriene C4 may be metabolized to leukotriene D4 and then to leukotriene E4. The relative amounts of these metabolites made by specific cells and tissues depends in large part on the relative content of the appropriate enzymes. The selective synthesis of 5("S")-HETE (i.e. synthesis of 5("S")-HETE without concurrent synthesis of 5("R")-HETE) by cells is dependent on, and generally proportionate to, the presence and levels of its forming enzyme, ALOX5. Human ALOX5 is highly expressed in cells that regulate innate immunity responses, particularly those involved in inflammation and allergy. Examples of such cells include neutrophils, eosinophils, B lymphocytes, monocytes, macrophages, mast cells, dendritic cells, and the monocyte-derived foam cells of atherosclerosis tissues. ALOX5 is also expressed but usually at relatively low levels in many other cell types. The production of 5("S")-HETE by these cells typically serves a physiological function. However, ALOX5 can become overexpressed at high levels in certain types of human cancer cells such as those of the prostate, lung, colon, colorectal and pancreatic as a consequence of their malignant transformation. In these cells, the ALOX5-dependent production of 5("S")-HETE appears to serve a pathological function viz., it promotes the growth and spread of the cancer cells. 5("S")-HETE may also be made in combination with 5("R")-HETE along with numerous other "(S,R)"-hydroxy polyunsaturated fatty acids as a consequence of the non-enzymatic oxidation reactions. Formation of these products can occur in any tissue subjected to oxidative stress. 5("S")-HETE metabolism. In addition to its intrinsic activity, 5("S")-ETE can serve as an intermediate that is converted to other bioactive products. Most importantly, 5-Hydroxyeicosanoid dehydrogenase (i.e. 
5-HEDH) converts the 5-hydroxy residue of 5("S")-HETE to a ketone residue to form 5-oxo-eicosatetraenoic acid (i.e. 5-oxo-6"E",8"Z",11"Z",14"Z"-eicosatetraenoate, abbreviated as 5-oxo-ETE). 5-HEDH is a reversibly acting NADP+/NADPH-dependent enzyme that catalyzes the following reaction: 5("S")-HETE + NADP+ formula_0 5-oxo-ETE + NADPH 5-HEDH acts bi-directionally: it preferentially oxidizes 5("S")-HETE to 5-oxo-ETE in the presence of excess NADP+ but preferentially reduces 5-oxo-ETE back to 5("S")-HETE in the presence of excess NADPH. Since cells typically maintain far higher levels of NADPH than NADP+, they usually make little or no 5-oxo-ETE. When undergoing oxidative stress, however, cells contain higher levels of NADP+ than NADPH and make 5-oxo-ETE preferentially. Additionally, "in vitro" studies indicate that cells can transfer their 5("S")-HETE to cells that contain high levels of 5-HEDH and NADP+ and therefore convert the transferred 5("S")-HETE to 5-oxo-ETE. It is suggested that 5-oxo-ETE forms preferentially "in vivo" under conditions of oxidative stress or conditions where ALOX5-rich cells can transfer their 5("S")-HETE to epithelial, endothelial, dendritic, and certain (e.g. prostate, breast, and lung) cancer cells which display little or no ALOX5 activity but have high levels of 5-HEDH and NADP+. Since 5-oxo-ETE is 30- to 100-fold more potent than 5("S")-HETE, 5-HEDH's main function may be to increase the biological impact of 5-HETE production. Cells metabolize 5("S")-HETE in other ways as well. Alternate pathways that make some of the above products include: a) metabolism of 5("S")-HpETE to 5-oxo-ETE by cytochrome P450 (CYP) enzymes such as CYP1A1, CYP1A2, CYP1B1, and CYP2S1; b) conversion of 5-HETE to 5-oxo-ETE non-enzymatically by heme or other dehydrating agents; c) formation of 5-oxo-15("S")-hydroxy-ETE through 5-HEDH-based oxidation of 5("S"),15("S")-dihydroxyicosatetraenoate; d) formation of 5("S"),15("R")-dihydroxy-eicosatetraenoate by the attack of ALOX5 on 15-hydroxyicosatetraenoic acid (15("S")-HETE); e) formation of 5-oxo-15("S")-hydroxy-eicosatetraenoate ("5-oxo-15("S")-hydroxy-ETE") by the arachidonate 15-lipoxygenase-1-based or arachidonate 15-lipoxygenase-2-based metabolism of 5-oxo-ETE; and f) conversion of 5("S")-HpETE and 5("R")-HpETE to 5-oxo-ETE by the action of a mouse macrophage 50-60 kilodalton cytosolic protein. Mechanism of action. The OXER1 receptor. 5("S")-HETE family members share a common receptor target for stimulating cells that differs from the receptors targeted by the other major products of ALOX5, i.e., leukotriene B4, leukotriene C4, leukotriene D4, leukotriene E4, lipoxin A4, and lipoxin B4. It and other members of the 5("S")-HETE family stimulate cells primarily by binding and thereby activating a dedicated G protein-coupled receptor, the oxoeicosanoid receptor 1 (i.e. OXER1, also termed the OXE, OXE-R, hGPCR48, HGPCR48, or R527 receptor). OXER1 couples to the G protein complex composed of the Gi alpha subunit (Gαi) and G beta-gamma complex (Gβγ); when bound to a 5("S")-HETE family member, OXER1 triggers this G protein complex to dissociate into its Gαi and Gβγ components, with Gβγ appearing to be the component responsible for activating the signal pathways which lead to cellular functional responses.
The cell-activation pathways stimulated by OXER1 include those mobilizing calcium ions and activating MAPK/ERK, p38 mitogen-activated protein kinases, cytosolic phospholipase A2, PI3K/Akt, and protein kinase C beta and epsilon. The relative potencies of 5-oxo-ETE, 5-oxo-15("S")-HETE, 5("S")-HETE, 5("S"),15("S")-diHETE, 5-oxo-20-hydroxy-ETE, 5("S"),20-diHETE, and 5,15-dioxo-ETE in binding to, activating, and thereby stimulating cell responses through the OXER1 receptor are ~100, 30, 5-10, 1-3, 1-3, 1, and &lt;1, respectively. Other receptors. Progress in proving the role of the 5-HETE family of agonists and their OXER1 receptor in human physiology and disease has been made difficult because mice, rats, and the other rodents so far tested lack OXER1. Rodents are the most common "in vivo" models for investigating these issues. OXER1 is expressed in non-human primates, a wide range of other mammals, and various fish species, and a model of allergic airways disease in cats, which express OXER1 and make 5-oxo-ETE, has recently been developed for such studies. In any event, cultured mouse MA-10 Leydig cells, while responding to 5-oxo-ETE, lack OXER1. It is suggested that the responses of these cells, as well as those of mouse and other rodent cells, to 5-oxo-ETE are mediated by a receptor closely related to OXER1, viz., the mouse niacin receptor 1, Niacr1. Niacr1, an ortholog of OXER1, is a G protein-coupled receptor for niacin, and responds to 5-oxo-ETE. It has also been suggested that one or more of the mouse hydroxycarboxylic acid (HCA) family of the G protein-coupled receptors, HCA1 (GPR81), HCA2 (GPR109A), and HCA3 (GPR109B), which are G protein-coupled receptors for fatty acids, may be responsible for rodent responses to 5-oxo-ETE. It is possible that human cellular responses to 5-oxo-ETE and perhaps its analogs may involve, at least in isolated instances, one or more of these receptors. PPARγ. 5-Oxo-15("S")-hydroxy-ETE and to a lesser extent 5-oxo-ETE but not 5("S")-HETE also bind to and activate peroxisome proliferator-activated receptor gamma (PPARγ). Activation of the OXER1 receptor and PPARγ by the oxo analogs can have opposing effects on cells. For example, 5-oxo-ETE-bound OXER1 stimulates while 5-oxo-ETE-bound PPARγ inhibits the proliferation of various types of human cancer cell lines. Other mechanisms. 5("S")-HETE acylated into the phosphatidylethanolamines fraction of human neutrophil membranes is associated with the inhibition of these cells from forming neutrophil extracellular traps, i.e. extracellular DNA scaffolds which contain neutrophil-derived antimicrobial proteins that circulate in blood and have the ability to trap bacteria. It seems unlikely that this inhibition reflects involvement of OXER1. 5-Oxo-ETE relaxes pre-contracted human bronchi by a mechanism that does not appear to involve OXER1 but is otherwise undefined. Clinical significance. Inflammation. 5("S")-HETE and other family members were first detected as products of arachidonic acid made by stimulated human polymorphonuclear neutrophils (PMN), a leukocyte blood cell type involved in host immune defense against infection but also implicated in aberrant pro-inflammatory immune responses such as arthritis; soon thereafter they were found to be active also in stimulating these cells to migrate (i.e. chemotaxis), degranulate (i.e.
release the anti-bacterial and tissue-injuring contents of their granules), produce bacteriocidal and tissue-injuring reactive oxygen species, and mount other pro-defensive as well as pro-inflammatory responses of the innate immune system. For example, the gram-negative bacterium, "Salmonella tryphimurium", and the outer surface of gram-negative bacteria lipopolysaccharide, promote the production of 5("S")-HETE and 5-oxo-ETE by human neutrophils. The family members stimulate another blood cell of the innate immunity system, the human monocyte, acting synergistically with the pro-inflammatory CC chemokines, monocyte chemotactic protein-1 and monocyte chemotactic protein-3, to stimulate monocyte function. 5-Oxo-ETE also stimulates two other cell types that share responsibility with the PMN for regulating inflammation, the human lymphocyte and dendritic cell. And, "in vivo" studies, the injection of 5-oxo-ETE into the skin of human volunteers causes the local accumulation of PMN and monocyte-derived macrophages. Furthermore, the production of one or more 5("S")-HETE family members as well as the expression of orthologs of the human OXER1 receptor occur in various mammalian species including dogs, cats, cows, sheep, elephants, pandas, opossums, and ferrets and in several species of fish; for example, cats undergoing experimentally induced asthma accumulate 5-oxo-ETE in their lung lavage fluid, feline leucocytes make as well as respond to 5-oxo-ETE by an oxer1-dependent mechanism; and an OXER1 ortholog and, apparently, 5-oxo-ETE are necessary for the inflammatory response to tissue damage caused by osmolarity insult in zebrafish. These results given above suggest that members of the 5-oxo-ETE family and the OXER1 receptor or its orthologs may contribute to protection against microbes, the repair of damaged tissues, and pathological inflammatory responses in humans and other animal species. However, an OXER1 ortholog is absent in mice and other rodents; while rodent tissues do exhibit responsiveness to 5-oxo-ETE, the lack of an oxer1 or other clear 5-oxoETE receptor in such valued animal models of diseases as rodents has impeded progress in our understanding of the physiological and pathological roles of 5-oxo-ETE. Allergy. The following human cell types or tissues that are implicated in allergic reactivity produce 5-HETE (stereoisomer typically not defined): alveolar macrophages isolated from asthmatic and non-asthmatic patients, basophils isolated from blood and challenged with anti-IgE antibody, mast cells isolated from lung, cultured pulmonary artery endothelial cells, isolated human pulmonary vasculature, and allergen-sensitized human lung specimens challenged with specific allergen. Additionally, cultured human airway epithelial cell lines, normal bronchial epithelium, and bronchial smooth muscle cells convert 5("S")-HETE to 5-oxo-ETE in a reaction that is greatly increase by oxidative stress, which is a common component in allergic inflammatory reactions. Finally, 5-HETE is found in the bronchoalveolar lavage fluid of asthmatic humans and 5-oxo-ETE is found in the bronchoalveolar lavage fluid of cats undergoing allergen-induced bronchospasm. Among the 5-HETE family of metabolites, 5-oxo-ETE is implicated as the most likely member to contribute to allergic reactions. 
It has exceptionally high potency in stimulating the chemotaxis, release of granule-bound tissue-injuring enzymes, and production of tissue-injuring reactive oxygen species of a cell type involved in allergic reactions, the human eosinophil granulocyte. It is also exceptionally potent in stimulating eosinophils to activate cytosolic phospholipase A2 (PLA2G4A) and possibly thereby to form platelet-activating factor (PAF) as well as metabolites of the 5-HETE family. PAF is itself a proposed mediator of human allergic reactions which commonly forms concurrently with 5-HETE family metabolites in human leukocytes and acts synergistically with these metabolites, particularly 5-oxo-ETE, to stimulate eosinophils. 5-Oxo-ETE also cooperates positively with at least four other potential contributors to allergic reactions, RANTES, eotaxin, granulocyte macrophage colony-stimulating factor, and granulocyte colony-stimulating factor in stimulating human eosinophils and is a powerful stimulator of chemotaxis in another cell type contributing to allergic reactions, the human basophil granulocyte. Finally, 5-oxo-ETE stimulates the infiltration of eosinophils into the skin of humans following its intradermal injection (its actions are more pronounced in asthmatic compared to healthy subjects) and when instilled into the trachea of Brown Norway rats causes eosinophils to infiltrate lung. These results suggest that the 5-oxo-ETE made at the initial tissue site of allergen insult acting through the OXER1 on target cells attracts circulating eosinophils and basophils to lung, nasal passages, skin, and possibly other sites of allergen deposition to contribute to asthma, rhinitis, and dermatitis, and other sites of allergic reactivity. The role of 5-HETE family agonists in the bronchoconstriction of airways (a hallmark of allergen-induced asthma) in humans is currently unclear. 5-HETE stimulates the contraction of isolated human bronchial muscle, enhances the ability of histamine to contract this muscle, and contracts guinea pig lung strips. 5-Oxo-ETE also stimulates contractile responses in fresh bronchi, cultured bronchi, and cultured lung smooth muscle taken from guinea pigs but in direct contrast to these studies is reported to relax bronchi isolated from humans. The latter bronchi contractile responses were blocked by cyclooxygenase-2 inhibition or a thromboxane A2 receptor antagonist and therefore appear mediated by 5-oxo-ETE-induced production of this thromboxane. In all events, the relaxing action of 5-oxo-ETE on human bronchi does not appear to involve OXER1. Cancer. The 5-oxo-ETE family of agonists have also been proposed to contribute to the growth of several types of human cancers. This is based on their ability to stimulate certain cultured human cancer cell lines to proliferate, the presence of OXER1 mRNA and/or protein in these cell lines, the production of 5-oxo-ETE family members by these cell lines, the induction of cell death (i.e. apoptosis) by inhibiting 5-lipoxygenase in these cells, and/or the overexpression of 5-lipoxygenase in tissue taken from the human tumors. Human cancers whose growth has been implicated by these studies as being mediated at least in part by a member(s) of the 5-oxo-ETE family include those of the prostate, breast, lung, ovary, and pancreas. Steroid production. 5("S")-HETE and 5("S")-HpETE stimulate the production of progesterone by cultured rat ovarian glomerulosa cells and enhance the secretion of progesterone and testosterone by cultured rat testicular Leydig cells. 
Both metabolites are made by cyclic adenosine monophosphate-stimulated MA-10 mouse Leydig cells, stimulate these cells to transcribe steroidogenic acute regulatory protein, and in consequence cause them to produce the steroids. The results suggest that trophic hormones (e.g., luteinizing hormone, adrenocorticotropic hormone) stimulate these steroid producing cells to make 5("S")-HETE and 5("S")-HpETE which in turn increase the synthesis of steroidogenic acute regulatory protein; the latter protein promotes the rate-limiting step in steroidogenesis, transfer of cholesterol from the outer to the inner membrane of mitochondria, and thereby acts in conjunction with trophic hormone-induced activation of protein kinase A to make progesterone and testosterone. This pathway may also operate in humans: Human H295R adrenocortical cells do express OXER1 and respond to 5-oxo-ETE by increasing the transcription of steroidogenic acute regulatory protein messenger RNA as well as the production of aldosterone and progesterone by an apparent OXER1-dependent pathway. Rat and mouse cells lack OXER1. It has been suggested that the cited mouse MA-10 cell responses to 5-oxo-ETE are mediated by an ortholog to OXER1, mouse niacin receptor 1, Niacr1, which is a G protein-coupled receptor mediating the activity of niacin, or by one or more of the mouse hydroxycarboxylic acid (HCA) family of the G protein-coupled receptors, HCA1 (GPR81), HCA2 (GPR109A), and HCA3 (GPR109B), which are G protein-coupled receptors for fatty acids. Bone remodeling. In an "in vitro" mixed culture system, 5("S")-HETE is released by monocytes to stimulate, at sub-nanomolar concentrations, osteoclast-dependent bone reabsorption. It also inhibits morphogenetic protein-2 (BMP-2)-induced bone-like nodule formation in mouse calvarial organ cultures. These results allow that 5("S")-HETE and, perhaps more potently, 5-oxo-ETE contribute to the regulation of bone remodeling. Parturition. 5("S")-HETE is: elevated in the human uterus during labor; at 3–150 nM, increases both the rates of spontaneous contractions and overall contractility of myometrial strips obtained at term but prior to labor from human lower uterine segments; and in an "in vitro" system crosses either amnion or intact amnion-chorion-decidua and thereby may along with prostaglandin E2 move from the amnion to uterus during labor in humans. These studies allow that 5("S")-HETE, perhaps in cooperation with the established role of prostaglandin E2, may play a role in the onset of human labor. Other actions. 5("S")-HETE is reported to modulate tubuloglomerular feedback. 5("S")-HpETE is also reported to inhibit the -ATPase activity of synaptosome membrane preparations prepared from rat cerebral cortex and may thereby inhibit synapse-dependent communications between neurons. 5("S")-HETE acylated into phosphatidylethanolamine is reported to increase the stimulated production of superoxide anion and interleukin-8 release by isolated human neutrophils and to inhibit the formation of neutrophil extracellular traps (i.e. NETS); NETS trap blood-circulating bacteria to assist in their neutralization.
5("S")-HETE esterified to phosphatidylcholine and glycerol esters by human endothelial cells is reported to be associated with the inhibition of prostaglandin production. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14713923
1471429
Promise problem
Type of computational problem In computational complexity theory, a promise problem is a generalization of a decision problem where the input is promised to belong to a particular subset of all possible inputs. Unlike decision problems, the "yes" instances (the inputs for which an algorithm must return "yes") and "no" instances do not exhaust the set of all inputs. Intuitively, the algorithm has been "promised" that the input does indeed belong to the set of "yes" instances or "no" instances. There may be inputs which are neither "yes" nor "no". If such an input is given to an algorithm for solving a promise problem, the algorithm is allowed to output anything, and may not even halt. Formal definition. A decision problem can be associated with a language formula_0, where the problem is to accept all inputs in formula_1 and reject all inputs not in formula_1. For a promise problem, there are two languages, formula_2 and formula_3, which must be disjoint, which means formula_4, such that all the inputs in formula_2 are to be accepted and all inputs in formula_3 are to be rejected. The set formula_5 is called the "promise". There are no requirements on the output if the input does not belong to the promise. If the promise equals formula_6, then this is also a decision problem, and the promise is said to be trivial. Examples. Many natural problems are actually promise problems. For instance, consider the following problem: Given a directed acyclic graph, determine if the graph has a path of length 10. The "yes" instances are directed acyclic graphs with a path of length 10, whereas the "no" instances are directed acyclic graphs with no path of length 10. The promise is the set of directed acyclic graphs. In this example, the promise is easy to check. In particular, it is very easy to check if a given graph is cyclic. However, the promised property could be difficult to evaluate. For instance, consider the problem "Given a Hamiltonian graph, determine if the graph has a cycle of size 4." Now the promise is NP-hard to evaluate, yet the promise problem is easy to solve since checking for cycles of size 4 can be done in polynomial time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
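The first example above can be sketched in a few lines: the checker assumes the promise that its input is a directed acyclic graph and decides whether some path has length 10 (ten edges); if the promise is violated (the graph contains a cycle), the memoized recursion breaks down, illustrating that an algorithm's behaviour outside the promise is unconstrained. The adjacency-dictionary encoding is an arbitrary choice for illustration:

```python
# Promise problem sketch: "given a DAG, does it contain a path of length 10 (edges)?"
# The promise (acyclicity) is NOT checked; on a cyclic input this code may fail or loop.
from functools import lru_cache

def has_path_of_length(graph: dict, target_len: int = 10) -> bool:
    @lru_cache(maxsize=None)
    def longest_from(v):
        # Longest path (in edges) starting at v; correct only if the graph is acyclic.
        return max((1 + longest_from(w) for w in graph.get(v, ())), default=0)

    return any(longest_from(v) >= target_len for v in graph)

# A simple chain 0 -> 1 -> ... -> 10 has a path of exactly 10 edges.
chain = {i: [i + 1] for i in range(10)}
print(has_path_of_length(chain))   # True
```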
[ { "math_id": 0, "text": "L \\subseteq \\{0,1\\}^*" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "L_{\\text{YES}}" }, { "math_id": 3, "text": "L_{\\text{NO}}" }, { "math_id": 4, "text": "L_{\\text{YES}} \\cap L_{\\text{NO}} = \\varnothing" }, { "math_id": 5, "text": "L_{\\text{YES}} \\cup L_{\\text{NO}}" }, { "math_id": 6, "text": "\\{0,1\\}^*" } ]
https://en.wikipedia.org/wiki?curid=1471429
14715209
S&amp;P/ASX 300
Australian stock market index The S&amp;P/ASX 300, or simply the ASX 300, is a stock market index of Australian stocks listed on the Australian Securities Exchange (ASX). The index is market-capitalisation weighted, meaning each company's weight is in proportion to its share of the index's total market value, and float-adjusted, meaning the index only considers shares available to public investors. The index measures the performance of the top 300 companies listed on the ASX. The index was formed in April 2000 by Standard and Poor's Dow Jones Indices. It was created to provide broader exposure to the Australian equity market compared to the S&amp;P/ASX 200. The index incorporates the same companies as the S&amp;P/ASX 200, with the inclusion of 100 additional companies based on their market-capitalisation. Index components are reviewed semi-annually by Standard &amp; Poor's. The average annual total return of the index is 19.3% as of 08/04/2020; however, there have been multiple periods where the index fell over 30%. Selection criteria. In order for a company to be included within the ASX 300, it must meet the following selection criteria: Listing. Securities must be listed on the ASX to be included in the index. Domicile. The ASX consists of primary and secondary listings. A primary listing is when the company's equity is listed on a single exchange. A secondary listing (cross-listing) is when the ASX is not the primary exchange and the equity is listed on multiple foreign stock-exchanges. The ASX300 includes both primary and secondary listings. Foreign and domestically domiciled securities can be included within the index. Eligible securities. Securities must be common or equity preferred stocks. Hybrid securities (e.g. convertible stock, bonds, warrants and preferred stock) that provide the holder with a promised fixed return are excluded from the index, as these instruments possess inherent characteristics that differ from standard equity securities. Companies that are in the process of a merger or acquisition are excluded. Market capitalisation. Market capitalisation is the product of price per share multiplied by the total number of shares outstanding. Stocks must meet a minimum threshold of AUD 100 million, based on the average daily market capitalisation of the security over the last 6 months. Liquidity. Strict liquidity requirements ensure the index maintains accurate pricing. Relative liquidity is calculated: formula_0 Stocks require a minimum relative liquidity of 30%. If this drops to 15%, the stock is removed in the next rebalancing. Calculation. The ASX 300 is capitalisation weighted, meaning a share's weight within the index is proportional to its total market value. Market capitalisation is equal to share price multiplied by total shares outstanding, while market value is equal to share price multiplied by the total number of publicly tradable shares. Index level is calculated: formula_1 Where, formula_2 formula_3 The index can be calculated by dividing the sum of each security's market capitalisation by a factor referred to as the "Divisor". For example, if the total adjusted market capitalisation of all included shares were $1 trillion, and the divisor were $1 billion, the index level would be equal to 1000. The divisor is a tool used by the S&amp;P to ensure the index only represents changes in market-driven price movements. When stocks are added or deleted, the divisor is adjusted to maintain the same market value of the index, as in the sketch below.
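A minimal numerical sketch of the index-level formula and of a divisor adjustment after a constituent change; every price, share count and divisor below is invented purely for illustration:

```python
# Index level = sum(price_i * shares_i) / divisor, with the divisor rescaled so the
# level is unchanged when a constituent is added (a non-market-driven change).
prices = {"AAA": 50.0, "BBB": 20.0, "CCC": 10.0}       # hypothetical constituents
float_shares = {"AAA": 4e9, "BBB": 7e9, "CCC": 12e9}   # float-adjusted share counts

def market_value(prices, shares):
    return sum(prices[s] * shares[s] for s in prices)

divisor = 1e8                                    # arbitrary starting divisor
level = market_value(prices, float_shares) / divisor
print(f"index level: {level:.2f}")               # 4600.00

# Add a new constituent; rescale the divisor so the index level is unchanged.
prices["DDD"], float_shares["DDD"] = 5.0, 3e9
new_mv = market_value(prices, float_shares)
divisor = new_mv / level                         # new divisor = new market value / old level
print(f"level after adjustment: {new_mv / divisor:.2f}")   # still 4600.00
```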
Similarly, in the event of non-market driven price movements (e.g. corporate actions, company inclusion or exclusion from the index), the divisor is adjusted to remove the effects of these actions on index value. Divisor Adjustments. An event that causes deviations in total market value of the ASX 300 while each stock price is held constant requires an adjustment of the divisor. The divisor can be calculated:    formula_4 Where, formula_5 In the case where stocks are removed or added, the market value of the index will change, causing the index level to also change. The divisor is adjusted to account for the change in market value in order to maintain a constant index level. Adjusted Divisor can be calculated: formula_6                  Where, formula_7 Float adjustment. The ASX 300 is a float-adjusted index – the number of shares outstanding is reduced in order to remove shares that are not available to public investors. Each stock is assigned an Investable Weight Factor. An IWF can be defined as the percentage of shares freely available to be traded to the total number of shares outstanding. Float-adjusted number of shares (Q) can be calculated: formula_8 Adjustments to share count can be made to reflect foreign ownership restrictions or to adjust the weight of a stock. IWF's can be adjusted downwards by the Standard and Poor's Australian Index Committee to prevent illiquid stocks from being included at a disproportionately high weight. Each company IWF is reviewed annually unless an event occurs causing the float of the company to change by more than 5%. Index maintenance. Rebalancing. Rebalancing is the process of removing or adding stocks in order to achieve a desired risk profile and to meet certain index requirements (e.g. liquidity, weighting). This usually occurs after asset valuations have deviated from initial values over a certain period of time. For the ASX 300, rebalancing occurs semi-annually and involves updating included shares and investable weight factors (IWFs). Eligible shares receive a review to determine their inclusion based on their relative ranking with other included shares. This is based on market capitalisation ranking and is subject to liquidity standards that must be met. Stocks that fail to meet the minimum liquidity threshold are removed from the ranking. Buffers. The S&amp;P employs exclusion and inclusion buffers to minimise turnover, a term used to identify the action of replacing one stock with another. A stock will be considered for inclusion once a current constituent stock reaches a rank below the deletion threshold. The potential company must also satisfy the addition ranking. Rankings are based on float-adjusted market capitalisation. Investing. Investing in the ASX300 is possible via an index fund, in the form of mutual funds or exchange traded funds. These investment vehicles, depending on the operating philosophy, aim to replicate or exceed the performance characteristics of the ASX300. Mutual Funds. This investment method involves pooling money from different investors to purchase securities. Passive funds will aim to manage a portfolio of securities that replicate the weightings of each constituent of the ASX300. Funds will, ideally, match the returns of the ASX300 before management fees and other expenses. Active managers will seek to outperform the index benchmark by exploiting market inefficiencies. Outperforming the index can be defined as either providing superior risk-adjusted returns, or simply generating excess returns. 
Exchange Traded Funds. These are similar to mutual funds, except that ETFs are traded on an exchange and require a lower minimum investment. Since ETFs trade as securities, their price can deviate from the underlying index value. An example of an ETF tracking the ASX300 is the Vanguard Australian Shares ETF, which has underperformed by 0.17 pps as of 31/03/2020 since its inception on 4 May 2009, generating a return of 6.83%. Australian active equity managers tracking the ASX 300, on average, have underperformed. A recent study has also found that successful Australian active managers add value for investors if the distribution of returns is large, thereby enabling a greater number of profitable opportunities. It has been shown that the investment style of "switching", where an investor moves into a different investment strategy by selling their previous investment, occurs in the Australian equity market between the mining and financial industries. Performance. 2000–2010. On 31 March 2000, the ASX300 closed its first day of trading at 3133.26 points. Mining Boom. From 2003 to 2008, the index achieved a gross return of 186%. This period in Australia was characterised by the mining boom that occurred when bulk commodity prices rose in response to the industrialisation and urbanisation of the Chinese economy. During the period from 2000 to 2010, 30% of the total market capitalisation of the ASX300 was concentrated within the materials sector, doubling within a period of five years. This was largely driven by increased company earnings. Revenue from the mining industry increased by $60 billion during this period; likewise, earnings increased by $37 billion, leading to higher resource company valuations relative to other sectors. Global Financial Crisis of 2007–2008. The index ended 2007 at 6356.72 points, after previously reaching its all-time high of 6845.38 on 1 November 2007. Throughout 2008, the ASX300 experienced extensive losses as a consequence of the growing Global Financial Crisis, particularly with the failing credit markets and the collapse of Lehman Brothers in mid-September 2008. Despite the relatively subdued impact upon the Australian economy compared to other markets such as the US, Europe and Asia, the ASX300 fell by 41.94% during 2008, exceeding the S&amp;P 500's loss, which stood at –35.61%. Major losses were incurred by retirees and retail investors. From its inception to 4 January 2010, the ASX 300 gained 1,741.03 points, closing at 4874.29 points. This represented a gross return of 55.56%. 2010–2020. On 4 January 2010, the ASX300 closed at 4874.29, 55% above its lowest point following the GFC. On 10 January 2020, the ASX 300 broke its previous record high of 6845.38 points set on 10 November 2007. From 2010 to January 2020, the ASX 300 increased by 1828.9 points to post a 10-year gain of 37.6%. Coronavirus Pandemic. The coronavirus pandemic first emerged in December 2019. Despite this, the ASX300 maintained its bull run from the previous decade into 2020. The index reached its all-time high of 7115.69 on 20 January. The index began to slowly retreat as COVID-19 extended its spread beyond China. By the end of February, the index had fallen by 720.1 points (10.1%), and on 8 March losses were extended with the onset of the 2020 Russia-Saudi Arabia oil price war. The index closed at 4500 on 24 March, marking a 34.12% drop since February's high. This put the index well into bear territory, which is typically defined as a 20% fall within 52 weeks of a high.
On 18 March, the Reserve Bank of Australia (RBA) agreed to engage in a variety of measures designed to support the economy. The ASX300 gained 1250 points (27%) during the months of April–May 2020, aided by the RBA's loose monetary policy, including quantitative easing and a Term Funding Facility for the banking system. Components and characteristics. Annual Returns. The following table lists the annual return of the ASX 300 over the last ten years as of 30 April 2020. Constituents. Top ten companies included in the ASX300 ranked by index weight as of 30 April 2020. Risk Characteristics. Standard deviation is used as a proxy for risk. Using this measure, annualized returns are adjusted to arrive at a figure representing the return an investor receives by taking on an additional unit of risk. The ASX 200 has shown a difference of 0.01 points in the 10 year annualized risk-adjusted return. Issues with market representation. Mathematically, weighing securities by their market value causes overpriced stocks to be over-weighted, and under-priced stocks to be under-weighted. This creates deviation in true company value relative to its present value of future cash flows, thereby hindering accurate price discovery. The ASX 300 is heavily skewed towards large-cap stocks, making up 40-50% of the index. As of 30 April 2020, the financial sector constituted for 26% of index value and materials accounted for 19.2%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\text{Relative Liquidity} = {Stock Median Liquidity \\over Market Liquidity}" }, { "math_id": 1, "text": "\\text{Index Level} = {\\sum \\left({P_i} \\cdot {Q_i}\\right) \\over Divisor}" }, { "math_id": 2, "text": "P_i=Price" }, { "math_id": 3, "text": "Q_i= Share Count" }, { "math_id": 4, "text": "Divisor={MV\\over Index Value}" }, { "math_id": 5, "text": "MV=Market Value" }, { "math_id": 6, "text": "Divisor_{New}={MV+CMV\\over Index Value}" }, { "math_id": 7, "text": "CMV=Change In Market Value" }, { "math_id": 8, "text": "Q_i=IWF_i*TotalShares_i" } ]
https://en.wikipedia.org/wiki?curid=14715209
1471533
Back-stripping
Geophysical analysis technique Back-stripping (also back stripping or backstripping) is a geophysical analysis technique used on sedimentary rock sequences. It is used to quantitatively estimate the depth at which the basement would lie in the absence of sediment and water loading. This depth provides a measure of the unknown tectonic driving forces that are responsible for basin formation (otherwise known as tectonic subsidence or uplift). By comparing backstripped curves to theoretical curves for basin subsidence and uplift it is possible to deduce information on the basin-forming mechanisms. The technique developed by Watts &amp; Ryan in 1976 allows for the recovery of the basement subsidence and uplift history in the absence of sediment and water loading and therefore isolates the contribution from the tectonic forces responsible for the formation of a rift basin. It is a method by which successive layers of basin-fill sediment are "stripped off" the total stratigraphy during analysis of that basin's history. In a typical scenario, a sedimentary basin deepens away from a marginal flexure, and the accompanying isochronous strata typically thicken basinward. By isolating the isochronous packages one-by-one, these can be "peeled off" or backstripped, and the lower bounding surface rotated upward to a datum. By successively backstripping isochrons, the basin's deepening history can be plotted in reverse, leading to clues as to its tectonic or isostatic origin. A more complete analysis uses decompaction of the remaining sequence following each stage of the back-stripping. This takes into account the amount of compaction caused by the loading of the later layers and allows a better estimation of the depositional thickness of the remaining layers and the variation of water depth with time. General Theory. As a result of their porosity, sedimentary strata are compacted by overlying sedimentary layers after deposition. Consequently, the thickness of each layer in a sedimentary sequence was larger at the time of its deposition than it is when measured in the field. In order to consider the influence of sediment compaction on the thickness and density of the stratigraphic column, the porosity must be known. Empirical studies show that the porosity of rocks decreases exponentially with depth. In general we can describe this with the relationship: where formula_0 is the porosity of the rock at depth formula_1, formula_2 is the porosity at the surface and formula_3 is a rock-specific compaction constant. Back-stripping Equation. The fundamental equation in back-stripping corrects the observed stratigraphic record for the effects of sediment and water loading and changes in water depth, and is given by: where formula_4 is the tectonically driven subsidence, formula_5 is the decompacted sediment thickness, formula_6 is the mean sediment density, formula_7 is the average depth at which the sedimentary units were deposited, formula_8 and formula_9 are the densities of the water and mantle respectively, and formula_10 the difference in sea-level height between the present and the time at which the sediments were deposited. The three independent terms account for the contributions of sediment loading, water depth and sea-level oscillations to the subsidence of the basin. Derivation.
To derive equation (2) one should first consider a 'loaded' column that represents a sedimentary unit accumulated over a certain geological time period, and a corresponding 'unloaded' column that represents the position of the underlying basement without the effects of the sediments. In the scenario, the pressure at the base of the loaded column, is given by: where formula_11 is the water depth of deposition, formula_3 is the mean thickness of the crust, formula_12 is the sediment thickness corrected for compaction, formula_13 is the average gravity and formula_14,formula_15 and formula_16 are the densities of water, the sediment and the crust respectively. The pressure at the base of the unloaded column is given by: where formula_4 is the tectonic or corrected subsidence, formula_17 is the density of the mantle, and formula_18 is the distance from the base of the unloaded crust to the depth of compensation (which is assumed to be at the base of the loaded crust) and is given by: Substitution of (3),(4) and (5) after simplifying, we obtain (2). Multi-layer Case. For a multi-layered sedimentary basin, it is necessary to successively back-strip each individually identifiable layer separately to obtain a complete evolution of the tectonic subsidence. Using equation (2),a complete subsidence analysis is performed by stepwise removal of the top layer at any one stage during the analysis and performing back-stripping as if for a single layer case. For the remaining column, mean densities and thickness must be used at each time, or calculation, step. Equation (2) then becomes the tectonic amount of subsidence during sedimentation of the top most layer only. In this case formula_19 and formula_20 can be defined as the thickness and density of the entire remaining sedimentary column after removal of the top layer formula_21 (i.e. the decompacted thickness). The thickness of a sediment pile with formula_21 layers is then: The density of the sedimentary column underneath layer formula_21 is given by the mean density of all the remaining layers. This is the sum of all the densities of the remaining layers multiplied by the respective thickness and divided by formula_22: Effectively you iteratively apply (1) and (2) using formula_22 and formula_23 instead of formula_24 and formula_20. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "\\phi_0" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": " Y " }, { "math_id": 5, "text": " S " }, { "math_id": 6, "text": "\\rho_s" }, { "math_id": 7, "text": " W_d " }, { "math_id": 8, "text": " \\rho_w " }, { "math_id": 9, "text": " \\rho_m " }, { "math_id": 10, "text": " \\Delta_{SL} " }, { "math_id": 11, "text": "W_d" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "g" }, { "math_id": 14, "text": "\\rho_w" }, { "math_id": 15, "text": " \\rho_s" }, { "math_id": 16, "text": "\\rho_c" }, { "math_id": 17, "text": "\\rho_m" }, { "math_id": 18, "text": "b" }, { "math_id": 19, "text": " L^*" }, { "math_id": 20, "text": "\\rho_L" }, { "math_id": 21, "text": "l" }, { "math_id": 22, "text": "L^*" }, { "math_id": 23, "text": "\\rho_{L^*}" }, { "math_id": 24, "text": "L" } ]
https://en.wikipedia.org/wiki?curid=1471533
147164
Goldbach's weak conjecture
Solved conjecture about prime numbers In number theory, Goldbach's weak conjecture, also known as the odd Goldbach conjecture, the ternary Goldbach problem, or the 3-primes problem, states that Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.) This conjecture is called "weak" because if Goldbach's "strong" conjecture (concerning sums of two primes) is proven, then this would also be true. For if every even number greater than 4 is the sum of two odd primes, adding 3 to each even number greater than 4 will produce the odd numbers greater than 7 (and 7 itself is equal to 2+2+3). In 2013, Harald Helfgott released a proof of Goldbach's weak conjecture. The proof was accepted for publication in the "Annals of Mathematics Studies" series in 2015, and has been undergoing further review and revision since; fully-refereed chapters in close to final form are being made public in the process. Some state the conjecture as Every odd number greater than 7 can be expressed as the sum of three odd primes. This version excludes 7 = 2+2+3 because this requires the even prime 2. On odd numbers larger than 7 it is slightly stronger as it also excludes sums like 17 = 2+2+13, which are allowed in the other formulation. Helfgott's proof covers both versions of the conjecture. Like the other formulation, this one also immediately follows from Goldbach's strong conjecture. Origins. The conjecture originated in correspondence between Christian Goldbach and Leonhard Euler. One formulation of the strong Goldbach conjecture, equivalent to the more common one in terms of sums of two primes, is Every integer greater than 5 can be written as the sum of three primes. The weak conjecture is simply this statement restricted to the case where the integer is odd (and possibly with the added requirement that the three primes in the sum be odd). Timeline of results. In 1923, Hardy and Littlewood showed that, assuming the generalized Riemann hypothesis, the weak Goldbach conjecture is true for all sufficiently large odd numbers. In 1937, Ivan Matveevich Vinogradov eliminated the dependency on the generalised Riemann hypothesis and proved directly (see Vinogradov's theorem) that all sufficiently large odd numbers can be expressed as the sum of three primes. Vinogradov's original proof, as it used the ineffective Siegel–Walfisz theorem, did not give a bound for "sufficiently large"; his student K. Borozdkin (1956) derived that formula_0 is large enough. The integer part of this number has 4,008,660 decimal digits, so checking every number under this figure would be completely infeasible. In 1997, Deshouillers, Effinger, te Riele and Zinoviev published a result showing that the generalized Riemann hypothesis implies Goldbach's weak conjecture for all numbers. This result combines a general statement valid for numbers greater than 1020 with an extensive computer search of the small cases. Saouter also conducted a computer search covering the same cases at approximately the same time. Olivier Ramaré in 1995 showed that every even number "n" ≥ 4 is in fact the sum of at most six primes, from which it follows that every odd number "n" ≥ 5 is the sum of at most seven primes. Leszek Kaniecki showed every odd integer is a sum of at most five primes, under the Riemann Hypothesis. In 2012, Terence Tao proved this without the Riemann Hypothesis; this improves both results. 
In 2002, Liu Ming-Chit (University of Hong Kong) and Wang Tian-Ze lowered Borozdkin's threshold to approximately formula_1. The exponent is still much too large to admit checking all smaller numbers by computer. (Computer searches have only reached as far as 1018 for the strong Goldbach conjecture, and not much further than that for the weak Goldbach conjecture.) In 2012 and 2013, Peruvian mathematician Harald Helfgott released a pair of papers improving major and minor arc estimates sufficiently to unconditionally prove the weak Goldbach conjecture. Here, the major arcs formula_2 is the union of intervals formula_3 around the rationals formula_4 where formula_5 is a constant. Minor arcs formula_6 are defined to be formula_7. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e^{e^{16.038}}\\approx3^{3^{15}}" }, { "math_id": 1, "text": "n>e^{3100}\\approx 2 \\times 10^{1346}" }, { "math_id": 2, "text": "\\mathfrak M" }, { "math_id": 3, "text": "\\left (a/q-cr_0/qx,a/q+cr_0/qx\\right )" }, { "math_id": 4, "text": "a/q,q<r_0" }, { "math_id": 5, "text": "c" }, { "math_id": 6, "text": "\\mathfrak{m}" }, { "math_id": 7, "text": "\\mathfrak{m}=(\\mathbb R/\\mathbb Z)\\setminus\\mathfrak{M}" } ]
https://en.wikipedia.org/wiki?curid=147164
1471697
Non-linear sigma model
Class of quantum field theory models In quantum field theory, a nonlinear "σ" model describes a scalar field Σ which takes on values in a nonlinear manifold called the target manifold  "T". The non-linear "σ"-model was introduced by , who named it after a field corresponding to a spinless meson called "σ" in their model. This article deals primarily with the quantization of the non-linear sigma model; please refer to the base article on the sigma model for general definitions and classical (non-quantum) formulations and results. Description. The target manifold "T" is equipped with a Riemannian metric "g". Σ is a differentiable map from Minkowski space "M" (or some other space) to "T". The Lagrangian density in contemporary chiral form is given by formula_0 where we have used a + − − − metric signature and the partial derivative "∂Σ" is given by a section of the jet bundle of "T"×"M" and V is the potential. In the coordinate notation, with the coordinates "Σa", "a" = 1, ..., "n" where "n" is the dimension of "T", formula_1 In more than two dimensions, nonlinear "σ" models contain a dimensionful coupling constant and are thus not perturbatively renormalizable. Nevertheless, they exhibit a non-trivial ultraviolet fixed point of the renormalization group both in the lattice formulation and in the double expansion originally proposed by Kenneth G. Wilson. In both approaches, the non-trivial renormalization-group fixed point found for the "O(n)"-symmetric model is seen to simply describe, in dimensions greater than two, the critical point separating the ordered from the disordered phase. In addition, the improved lattice or quantum field theory predictions can then be compared to laboratory experiments on critical phenomena, since the "O(n)" model describes physical Heisenberg ferromagnets and related systems. The above results point therefore to a failure of naive perturbation theory in describing correctly the physical behavior of the "O(n)"-symmetric model above two dimensions, and to the need for more sophisticated non-perturbative methods such as the lattice formulation. This means they can only arise as effective field theories. New physics is needed at around the distance scale where the two point connected correlation function is of the same order as the curvature of the target manifold. This is called the UV completion of the theory. There is a special class of nonlinear σ models with the internal symmetry group "G" *. If "G" is a Lie group and "H" is a Lie subgroup, then the quotient space "G"/"H" is a manifold (subject to certain technical restrictions like H being a closed subset) and is also a homogeneous space of "G" or in other words, a nonlinear realization of "G". In many cases, "G"/"H" can be equipped with a Riemannian metric which is "G"-invariant. This is always the case, for example, if "G" is compact. A nonlinear σ model with G/H as the target manifold with a "G"-invariant Riemannian metric and a zero potential is called a quotient space (or coset space) nonlinear σ model. When computing path integrals, the functional measure needs to be "weighted" by the square root of the determinant of "g", formula_2 Renormalization. This model proved to be relevant in string theory where the two-dimensional manifold is named worldsheet. Appreciation of its generalized renormalizability was provided by Daniel Friedan. 
He showed that the theory admits a renormalization group equation, at the leading order of perturbation theory, in the form formula_3 "Rab" being the Ricci tensor of the target manifold. This represents a Ricci flow, obeying Einstein field equations for the target manifold as a fixed point. The existence of such a fixed point is relevant, as it grants, at this order of perturbation theory, that conformal invariance is not lost due to quantum corrections, so that the quantum field theory of this model is sensible (renormalizable). Further adding nonlinear interactions representing flavor-chiral anomalies results in the Wess–Zumino–Witten model, which augments the geometry of the flow to include torsion, preserving renormalizability and leading to an infrared fixed point as well, on account of teleparallelism ("geometrostasis"). O(3) non-linear sigma model. A celebrated example, of particular interest due to its topological properties, is the "O(3)" nonlinear σ-model in 1 + 1 dimensions, with the Lagrangian density formula_4 where "n̂"=("n1, n2, n3") with the constraint "n̂"⋅"n̂"=1 and μ=1,2. This model allows for topological finite action solutions, as at infinite space-time the Lagrangian density must vanish, meaning "n̂" = constant at infinity. Therefore, in the class of finite-action solutions, one may identify the points at infinity as a single point, i.e. that space-time can be identified with a Riemann sphere. Since the "n̂"-field lives on a sphere as well, the mapping "S2→ S2" is in evidence, the solutions of which are classified by the second homotopy group of a 2-sphere: These solutions are called the O(3) Instantons. This model can also be considered in 1+2 dimensions, where the topology now comes only from the spatial slices. These are modelled as R^2 with a point at infinity, and hence have the same topology as the O(3) instantons in 1+1 dimensions. They are called sigma model lumps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{L}={1\\over 2}g(\\partial^\\mu\\Sigma,\\partial_\\mu\\Sigma)-V(\\Sigma)" }, { "math_id": 1, "text": "\\mathcal{L}={1\\over 2}g_{ab}(\\Sigma) (\\partial^\\mu \\Sigma^a) (\\partial_\\mu \\Sigma^b) - V(\\Sigma)." }, { "math_id": 2, "text": "\\sqrt{\\det g}\\mathcal{D}\\Sigma." }, { "math_id": 3, "text": "\\lambda\\frac{\\partial g_{ab}}{\\partial\\lambda}=\\beta_{ab}(T^{-1}g)=R_{ab}+O(T^2)~," }, { "math_id": 4, "text": "\\mathcal L= \\tfrac{1}{2}\\ \\partial^\\mu \\hat n \\cdot\\partial_\\mu \\hat n " } ]
https://en.wikipedia.org/wiki?curid=1471697
14717142
HP-34C
Continuous memory calculator The HP-34C continuous memory calculator is an advanced scientific programmable calculator of the HP 30 series. It was produced between 1979 (cost US$150) and 1983 (cost US$100). Features. Root-finding and integration. Significant to the HP-34C calculator is the capability for integration and root-finding (a first for any pocket calculator). Integration and root-finding works by having the user input a formula as a program. Multiple roots are found using the technique of first finding a root formula_0, then dividing the equation by formula_1, thus driving the solution of the equation away from the root at that point. This technique for multiple root-finding is referred to as "deflation". The user would usually programmatically recall the root value from a storage register to improve its precision. Programming. The common method of converting registers to program memory allowed the calculator a maximum of 210 program steps. Programming features such as indirect jumps provides substantial capability to the calculator's programmer. The HP-34C shipped with an "applications" manual that included two games (Moon Rocket Lander and Nimb). This made the calculator probably one of the first pocket game computers ever invented. The winner was announced via calculator spelling by turning the display upside down and the words BLISS or I'LOSE (55178 or 3507,1) were displayed. A game of blackjack was easily programmable by converting some of the registers to lines of program. Pedigree. The calculator was superseded, in 1982, by the HP-15C. Although it is argued the HP-41C (introduced late 1979 and only a matter of months after the HP-34C) was a replacement for the HP-34C, they were in fact differentiated as much by price (the HP-34C being 50% that of the HP-41C) as by functionality and performance (the HP-41C being the first HP LCD-based and module-expandable calculator, with its standard functionality lacking the root-finding and integration capabilities as well as the gamma-function implementation of the HP-34C though). This price difference allowed those with economic constraints to still buy a high-end HP (HP-34C) scientific programmable within a reasonable cost. As such they were sold side-by-side for a number of years. Design. The HP-34C came in a number of variants, such as plastic- and metal-keyboard versions and those with soldered (later 1983 variants) vs pressure-mounted circuitry (earlier variants 1979–1983). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x=x_0" }, { "math_id": 1, "text": "(x-x_0)" } ]
https://en.wikipedia.org/wiki?curid=14717142
14717542
Graduated Random Presidential Primary System
Proposed election reform The Graduated Random Presidential Primary System, also known as the California Plan or the American Plan, is a proposed system to reform the conduct of United States Presidential primary campaigns. Under this system the campaign period would be broken into ten two-week periods in which an escalating number of electoral votes would be contested. It was developed by aerospace engineer and political scientist Thomas Gangale in 2003 in response to the trend toward front-loading in recent primary campaigns and the influence wielded by Iowa and New Hampshire, which traditionally hold their nominating events before any other state. The Plan. Under the American Plan, the primary season would be divided into ten two-week periods. In the first period, any combination of randomly selected states (or territories) could vote, as long as their combined number of electoral votes added up to eight. The territories of American Samoa, Guam, Puerto Rico, and the Virgin Islands, which do not hold electoral votes but do send delegates to nominating conventions, are counted as holding one electoral vote each, as would the District of Columbia. (The 23rd Amendment states that the District may send electors to the Electoral College, as long as it does not have more votes than the least populous state.) In each subsequent period, the number of votes contested would increase by eight. As a result, the early campaign would feature contests in several small states or a few larger ones, becoming more and more demanding as time went by. The mathematical expression is: formula_0 Because of the large gap between populations of the most populous states, California - the state with the highest population - could vote no earlier than the seventh period, while the second most populous state, Texas, as well as New York and Florida, the third and fourth largest, could vote in the fourth. California, unlike all other states, would always have to hold its primary toward the end of the campaign. To remedy this, the later stages of the California Plan primary are staggered. The seventh period (8x7) is moved before the fourth (8x4), the eighth (8x8) before the fifth (8x5), and the ninth (8x9) before the sixth (8x6). Criticism and support. In response to criticism that the random selection system of the American Plan could lead to high travel costs for candidates, John Nichols claims these costs are minimal compared to the costs of running full TV, radio, print and online media campaigns in several states simultaneously, as would happen under the large regional plans. However, such advertising buys would also be necessary under the American Plan in later rounds. The plan is supported by: The American Plan was the only systematic reform cited in the December 2005 Report of the Commission on Presidential Nomination Timing and Scheduling to Democratic National Committee chairman Howard Dean: "In considering the options for 2012 the Commission encourages the Party to think boldly, including for example, [Rules and Bylaws Committee] consideration of the proposal known as the American Plan which would spread the calendar of contests across ten intervals of time and randomly select the order of the states from one presidential election cycle to the next." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\sum_{n=1}^{10} 8n\n" } ]
https://en.wikipedia.org/wiki?curid=14717542
1471798
Kraft–McMillan inequality
In coding theory, the Kraft–McMillan inequality gives a necessary and sufficient condition for the existence of a prefix code (in Leon G. Kraft's version) or a uniquely decodable code (in Brockway McMillan's version) for a given set of codeword lengths. Its applications to prefix codes and trees often find use in computer science and information theory. The prefix code can contain either finitely many or infinitely many codewords. Kraft's inequality was published in . However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality to Raymond Redheffer. The result was independently discovered in . McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 by Joseph Leo Doob. Applications and intuitions. Kraft's inequality limits the lengths of codewords in a prefix code: if one takes an exponential of the length of each valid codeword, the resulting set of values must look like a probability mass function, that is, it must have total measure less than or equal to one. Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive. Among the useful properties following from the inequality are the following statements: Formal statement. Let each source symbol from the alphabet formula_0 be encoded into a uniquely decodable code over an alphabet of size formula_1 with codeword lengths formula_2 Then formula_3 Conversely, for a given set of natural numbers formula_4 satisfying the above inequality, there exists a uniquely decodable code over an alphabet of size formula_1 with those codeword lengths. Example: binary trees. Any binary tree can be viewed as defining a prefix code for the leaves of the tree. Kraft's inequality states that formula_5 Here the sum is taken over the leaves of the tree, i.e. the nodes without any children. The depth is the distance to the root node. In the tree to the right, this sum is formula_6 Proof. Proof for prefix codes. First, let us show that the Kraft inequality holds whenever the code for formula_7 is a prefix code. Suppose that formula_8. Let formula_9 be the full formula_1-ary tree of depth formula_10 (thus, every node of formula_9 at level formula_11 has formula_1 children, while the nodes at level formula_10 are leaves). Every word of length formula_12 over an formula_1-ary alphabet corresponds to a node in this tree at depth formula_13. The formula_14th word in the prefix code corresponds to a node formula_15; let formula_16 be the set of all leaf nodes (i.e. of nodes at depth formula_10) in the subtree of formula_9 rooted at formula_15. That subtree being of height formula_17, we have formula_18 Since the code is a prefix code, those subtrees cannot share any leaves, which means that formula_19 Thus, given that the total number of nodes at depth formula_10 is formula_20, we have formula_21 from which the result follows. Conversely, given any ordered sequence of formula_22 natural numbers, formula_23 satisfying the Kraft inequality, one can construct a prefix code with codeword lengths equal to each formula_24 by choosing a word of length formula_24 arbitrarily, then ruling out all words of greater length that have it as a prefix. There again, we shall interpret this in terms of leaf nodes of an formula_1-ary tree of depth formula_10. First choose any node from the full tree at depth formula_25; it corresponds to the first word of our new code. 
Since we are building a prefix code, all the descendants of this node (i.e., all words that have this first word as a prefix) become unsuitable for inclusion in the code. We consider the descendants at depth formula_10 (i.e., the leaf nodes among the descendants); there are formula_26 such descendant nodes that are removed from consideration. The next iteration picks a (surviving) node at depth formula_27 and removes formula_28 further leaf nodes, and so on. After formula_22 iterations, we have removed a total of formula_29 nodes. The question is whether we need to remove more leaf nodes than we actually have available — formula_30 in all — in the process of building the code. Since the Kraft inequality holds, we have indeed formula_31 and thus a prefix code can be built. Note that as the choice of nodes at each step is largely arbitrary, many different suitable prefix codes can be built, in general. Proof of the general case. Now we will prove that the Kraft inequality holds whenever formula_7 is a uniquely decodable code. (The converse needs not be proven, since we have already proven it for prefix codes, which is a stronger claim.) The proof is by Jack I. Karush. We need only prove it when there are finitely many codewords. If there are infinitely many codewords, then any finite subset of it is also uniquely decodable, so it satisfies the Kraft–McMillan inequality. Taking the limit, we have the inequality for the full code. Denote formula_32. The idea of the proof is to get an upper bound on formula_33 for formula_34 and show that it can only hold for all formula_35 if formula_36. Rewrite formula_33 as formula_37 Consider all "m"-powers formula_38, in the form of words formula_39, where formula_40 are indices between 1 and formula_22. Note that, since "S" was assumed to uniquely decodable, formula_41 implies formula_42. This means that each summand corresponds to exactly one word in formula_38. This allows us to rewrite the equation to formula_43 where formula_44 is the number of codewords in formula_38 of length formula_13 and formula_45 is the length of the longest codeword in formula_7. For an formula_1-letter alphabet there are only formula_46 possible words of length formula_13, so formula_47. Using this, we upper bound formula_33: formula_48 Taking the formula_35-th root, we get formula_49 This bound holds for any formula_34. The right side is 1 asymptotically, so formula_50 must hold (otherwise the inequality would be broken for a large enough formula_35). Alternative construction for the converse. Given a sequence of formula_22 natural numbers, formula_23 satisfying the Kraft inequality, we can construct a prefix code as follows. Define the "i"th codeword, "C"i, to be the first formula_24 digits after the radix point (e.g. decimal point) in the base "r" representation of formula_51 Note that by Kraft's inequality, this sum is never more than 1. Hence the codewords capture the entire value of the sum. Therefore, for "j" &gt; "i", the first formula_24 digits of "C""j" form a larger number than "C""i", so the code is prefix free. Generalizations. The following generalization is found in. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — If formula_52 are uniquely decodable, and every codeword in formula_53 is a concatenation of codewords in formula_54, then formula_55 The previous theorem is the special case when formula_56.&lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Let formula_57 be the generating function for the code. 
That is, formula_58 By a counting argument, the formula_59-th coefficient of formula_60 is the number of strings of length formula_61 with code length formula_59. That is, formula_62 Similarly, formula_63 Since the code is uniquely decodable, any power of formula_64 is absolutely bounded by formula_65, so each of formula_66 and formula_67 is analytic in the disk formula_68. We claim that for all formula_69, formula_70 The left side is formula_71 and the right side is formula_72 Now, since every codeword in formula_53 is a concatenation of codewords in formula_54, and formula_54 is uniquely decodable, each string of length formula_61 with formula_53-code formula_73 of length formula_59 corresponds to a unique string formula_74 whose formula_54-code is formula_73. The string has length at least formula_61. Therefore, the coefficients on the left are less or equal to the coefficients on the right. Thus, for all formula_75, and all formula_76, we have formula_77 Taking formula_78 limit, we have formula_79 for all formula_75. Since formula_80 and formula_81 both converge, we have formula_82 by taking the limit and applying Abel's theorem. There is a generalization to quantum code. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S=\\{\\,s_1,s_2,\\ldots,s_n\\,\\}" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "\\ell_1,\\ell_2,\\ldots,\\ell_n." }, { "math_id": 3, "text": "\\sum_{i=1}^{n} r^{-\\ell_i} \\leqslant 1." }, { "math_id": 4, "text": "\\ell_1,\\ell_2,\\ldots,\\ell_n" }, { "math_id": 5, "text": " \\sum_{\\ell \\in \\text{leaves}} 2^{-\\text{depth}(\\ell)} \\leqslant 1. " }, { "math_id": 6, "text": " \\frac{1}{4} + 4 \\left( \\frac{1}{8} \\right) = \\frac{3}{4} \\leqslant 1." }, { "math_id": 7, "text": "S" }, { "math_id": 8, "text": "\\ell_1 \\leqslant \\ell_2 \\leqslant \\cdots \\leqslant \\ell_n " }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "\\ell_n" }, { "math_id": 11, "text": "< \\ell_n" }, { "math_id": 12, "text": "\\ell \\leqslant \\ell_n" }, { "math_id": 13, "text": "\\ell" }, { "math_id": 14, "text": "i" }, { "math_id": 15, "text": "v_i" }, { "math_id": 16, "text": "A_i" }, { "math_id": 17, "text": "\\ell_n-\\ell_i" }, { "math_id": 18, "text": "|A_i| = r^{\\ell_n-\\ell_i}." }, { "math_id": 19, "text": "A_i \\cap A_j = \\varnothing,\\quad i\\neq j." }, { "math_id": 20, "text": "r^{\\ell_n}" }, { "math_id": 21, "text": " \\left|\\bigcup_{i=1}^n A_i\\right|= \\sum_{i=1}^n |A_i| = \\sum_{i=1}^n r^{\\ell_n-\\ell_i} \\leqslant r^{\\ell_n}" }, { "math_id": 22, "text": "n" }, { "math_id": 23, "text": "\\ell_1 \\leqslant \\ell_2 \\leqslant \\cdots \\leqslant \\ell_n" }, { "math_id": 24, "text": "\\ell_i" }, { "math_id": 25, "text": "\\ell_1" }, { "math_id": 26, "text": "r^{\\ell_n-\\ell_1}" }, { "math_id": 27, "text": "\\ell_2" }, { "math_id": 28, "text": "r^{\\ell_n-\\ell_2}" }, { "math_id": 29, "text": "\\sum_{i=1}^n r^{\\ell_n-\\ell_i}" }, { "math_id": 30, "text": " r^{\\ell_n}" }, { "math_id": 31, "text": " \\sum_{i=1}^n r^{\\ell_n-\\ell_i} \\leqslant r^{\\ell_n}" }, { "math_id": 32, "text": "C = \\sum_{i=1}^n r^{-l_i}" }, { "math_id": 33, "text": "C^m" }, { "math_id": 34, "text": "m \\in \\mathbb{N}" }, { "math_id": 35, "text": "m" }, { "math_id": 36, "text": "C \\leq 1" }, { "math_id": 37, "text": "\n\\begin{align}\nC^m & = \\left( \\sum_{i=1}^n r^{-l_i} \\right)^m \\\\\n& = \\sum_{i_1=1}^n \\sum_{i_2=1}^n \\cdots \\sum_{i_m=1}^n r^{-\\left(l_{i_1} + l_{i_2} + \\cdots + l_{i_m} \\right)} \\\\\n\\end{align}\n" }, { "math_id": 38, "text": "S^m" }, { "math_id": 39, "text": "s_{i_1}s_{i_2}\\dots s_{i_m}" }, { "math_id": 40, "text": "i_1, i_2, \\dots, i_m" }, { "math_id": 41, "text": "s_{i_1}s_{i_2}\\dots s_{i_m}=s_{j_1}s_{j_2}\\dots s_{j_m}" }, { "math_id": 42, "text": "i_1=j_1, i_2=j_2, \\dots, i_m=j_m" }, { "math_id": 43, "text": "\nC^m = \\sum_{\\ell=1}^{m \\cdot \\ell_{max}} q_\\ell \\, r^{-\\ell}\n" }, { "math_id": 44, "text": "q_\\ell" }, { "math_id": 45, "text": "\\ell_{max}" }, { "math_id": 46, "text": "r^\\ell" }, { "math_id": 47, "text": "q_\\ell \\leq r^\\ell" }, { "math_id": 48, "text": "\n\\begin{align}\nC^m & = \\sum_{\\ell=1}^{m \\cdot \\ell_{max}} q_\\ell \\, r^{-\\ell} \\\\\n& \\leq \\sum_{\\ell=1}^{m \\cdot \\ell_{max}} r^\\ell \\, r^{-\\ell} = m \\cdot \\ell_{max}\n\\end{align}\n" }, { "math_id": 49, "text": "\nC = \\sum_{i=1}^n r^{-l_i} \\leq \\left( m \\cdot \\ell_{max} \\right)^{\\frac{1}{m}}\n" }, { "math_id": 50, "text": "\\sum_{i=1}^n r^{-l_i} \\leq 1" }, { "math_id": 51, "text": "\\sum_{j = 1}^{i - 1} r^{-\\ell_j}." 
}, { "math_id": 52, "text": "C, D" }, { "math_id": 53, "text": "C" }, { "math_id": 54, "text": "D" }, { "math_id": 55, "text": "\\sum_{c\\in C} r^{-|c|} \\leq \\sum_{c\\in D} r^{-|c|} " }, { "math_id": 56, "text": "D= \\{a_1, \\dots, a_r\\}" }, { "math_id": 57, "text": "Q_{C}(x)" }, { "math_id": 58, "text": "Q_C(x) := \\sum_{c\\in C} x^{|c|}" }, { "math_id": 59, "text": "k" }, { "math_id": 60, "text": "Q_C^n" }, { "math_id": 61, "text": "n" }, { "math_id": 62, "text": "Q_C^n(x) = \\sum_{k\\geq 0}x^k \\#(\\text{strings of length }n\\text{ with }C\\text{-codes of length }k)" }, { "math_id": 63, "text": "\\frac{1}{1-Q_C(x)} = 1 + Q_C(x) + Q_C(x)^2 + \\cdots = \\sum_{k\\geq 0}x^k \\#(\\text{strings with }C\\text{-codes of length }k)" }, { "math_id": 64, "text": "Q_C" }, { "math_id": 65, "text": "r|x| + r^2|x|^2 + \\cdots = \\frac{r|x|}{1-r|x|}" }, { "math_id": 66, "text": "Q_C, Q_C^2, \\dots" }, { "math_id": 67, "text": "\\frac{1}{1-Q_C(x)}" }, { "math_id": 68, "text": "|x| < 1/r" }, { "math_id": 69, "text": "x \\in (0, 1/r)" }, { "math_id": 70, "text": "Q_C^n \\leq Q_D^n + Q_D^{n+1} + \\cdots" }, { "math_id": 71, "text": "\\sum_{k\\geq 0}x^k \\#(\\text{strings of length }n\\text{ with }C\\text{-codes of length }k)" }, { "math_id": 72, "text": "\\sum_{k\\geq 0}x^k \\#(\\text{strings of length}\\geq n\\text{ with }D\\text{-codes of length }k)" }, { "math_id": 73, "text": "c_1\\dots c_n" }, { "math_id": 74, "text": "s_{c_1}\\dots s_{c_n}" }, { "math_id": 75, "text": "x\\in (0, 1/r)" }, { "math_id": 76, "text": "n = 1, 2, \\dots" }, { "math_id": 77, "text": "Q_C \\leq \\frac{Q_D}{(1-Q_D)^{1/n}}" }, { "math_id": 78, "text": "n\\to \\infty" }, { "math_id": 79, "text": "Q_C(x) \\leq Q_D(x)" }, { "math_id": 80, "text": "Q_C(1/r)" }, { "math_id": 81, "text": "Q_D(1/r)" }, { "math_id": 82, "text": "Q_C(1/r) \\leq Q_D(1/r)" } ]
https://en.wikipedia.org/wiki?curid=1471798
14717987
Global element
In category theory, a global element of an object "A" from a category is a morphism formula_0 where 1 is a terminal object of the category. Roughly speaking, global elements are a generalization of the notion of "elements" from the category of sets, and they can be used to import set-theoretic concepts into category theory. However, unlike a set, an object of a general category need not be determined by its global elements (not even up to isomorphism). For example, the terminal object of the category Grph of graph homomorphisms has one vertex and one edge, a self-loop, whence the global elements of a graph are its self-loops, conveying no information either about other kinds of edges, or about vertices having no self-loop, or about whether two self-loops share a vertex. In an elementary topos the global elements of the subobject classifier Ω form a Heyting algebra when ordered by inclusion of the corresponding subobjects of the terminal object. For example, Grph happens to be a topos, whose subobject classifier Ω is a two-vertex directed clique with an additional self-loop (so five edges, three of which are self-loops and hence the global elements of Ω). The internal logic of Grph is therefore based on the three-element Heyting algebra as its truth values. A well-pointed category is a category that has enough global elements to distinguish every two morphisms. That is, for each pair of distinct arrows "A" → "B" in the category, there should exist a global element whose compositions with them are different from each other. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "h\\colon 1 \\to A," } ]
https://en.wikipedia.org/wiki?curid=14717987
1471816
Search problem
In the mathematics of computational complexity theory, computability theory, and decision theory, a search problem is a type of computational problem represented by a binary relation. Intuitively, the problem consists in finding structure "y" in object "x". An algorithm is said to solve the problem if at least one corresponding structure exists, and then one occurrence of this structure is made output; otherwise, the algorithm stops with an appropriate output ("not found" or any message of the like). Every search problem also has a corresponding decision problem, namely formula_0 This definition may be generalized to "n"-ary relations using any suitable encoding which allows multiple strings to be compressed into one string (for instance by listing them consecutively with a delimiter). More formally, a relation "R" can be viewed as a search problem, and a Turing machine which calculates "R" is also said to solve it. More formally, if "R" is a binary relation such that field("R") ⊆ Γ+ and "T" is a Turing machine, then "T" calculates "R" if: Such problems occur very frequently in graph theory and combinatorial optimization, for example, where searching for structures such as particular matchings, optional cliques, particular stable sets, etc. are subjects of interest. Definition. A search problem is often characterized by: Objective. Find a solution when not given an algorithm to solve a problem, but only a specification of what a solution looks like. Search method. Input: a graph, a set of start nodes, Boolean procedure goal(n) that tests if n is a goal node. frontier := {s : s is a start node}; while frontier is not empty: select and remove path &lt;n0, ..., nk&gt; from frontier; if goal(nk) return &lt;n0, ..., nk&gt;; for every neighbor n of nk add &lt;n0, ..., nk, n&gt; to frontier; end while References. &lt;templatestyles src="Reflist/styles.css" /&gt; "This article incorporates material from search problem on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "L(R)=\\{x\\mid \\exists y R(x,y)\\}. \\, " } ]
https://en.wikipedia.org/wiki?curid=1471816
14721784
Coupon collector's problem
Problem in probability theory In probability theory, the coupon collector's problem refers to mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there are "n" different types of coupons, what is the probability that more than "t" boxes need to be bought to collect all "n" coupons? An alternative statement is: given "n" coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as formula_0. For example, when "n" = 50 it takes about 225 trials on average to collect all 50 coupons. Solution. Via generating functions. By definition of Stirling numbers of the second kind, the probability that exactly "T" draws are needed isformula_1By manipulating the generating function of the Stirling numbers, we can explicitly calculate all moments of "T":formula_2In general, the "k"-th moment is formula_3, where formula_4 is the derivative operator formula_5. For example, the 0th moment isformula_6and the 1st moment is formula_7, which can be explicitly evaluated to formula_8, etc. Calculating the expectation. Let time "T" be the number of draws needed to collect all "n" coupons, and let "ti" be the time to collect the "i"-th coupon after "i" − 1 coupons have been collected. Then formula_9. Think of "T" and "ti" as random variables. Observe that the probability of collecting a new coupon is formula_10. Therefore, formula_11 has geometric distribution with expectation formula_12. By the linearity of expectations we have: formula_13 Here "Hn" is the "n"-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain: formula_14 where formula_15 is the Euler–Mascheroni constant. Using the Markov inequality to bound the desired probability: formula_16 The above can be modified slightly to handle the case when we've already collected some of the coupons. Let "k" be the number of coupons already collected, then: formula_17 And when formula_18 then we get the original result. Calculating the variance. Using the independence of random variables "ti", we obtain: formula_19 since formula_20 (see Basel problem). Bound the desired probability using the Chebyshev inequality: formula_21 Tail estimates. A stronger tail estimate for the upper tail be obtained as follows. Let formula_22 denote the event that the formula_23-th coupon was not picked in the first formula_24 trials. Then formula_25 Thus, for formula_26, we have formula_27. Via a union bound over the formula_28 coupons, we obtain formula_29 formula_30 which is a Gumbel distribution. A simple proof by martingales is in the next section. formula_31 Here "m" is fixed. When "m" = 1 we get the earlier formula for the expectation. formula_32 formula_33 This is equal to formula_34 where "m" denotes the number of coupons to be collected and "PJ" denotes the probability of getting any coupon in the set of coupons "J". Martingales. This section is based on. Define a discrete random process formula_35 by letting formula_36 be the number of coupons not yet seen after formula_37 draws. The random process is just a sequence generated by a Markov chain with states formula_38, and transition probabilitiesformula_39Now define formula_40then it is a martingale, sinceformula_41Consequently, we have formula_42. In particular, we have a limit law formula_43 for any formula_44. 
This suggests to us a limit law for formula_45. More generally, each formula_46 is a martingale process, which allows us to calculate all moments of formula_36. For example, formula_47giving another limit law formula_48. More generally, formula_49meaning that formula_50 has all moments converging to constants, so it converges to some probability distribution on formula_51. Let formula_52 be the random variable with the limit distribution. We haveformula_53By introducing a new variable formula_37, we can sum up both sides explicitly:formula_54giving formula_55. At the formula_56 limit, we have formula_57, which is precisely what the limit law states. By taking the derivative formula_58 multiple times, we find that formula_59, which is a Poisson distribution. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Theta(n\\log(n))" }, { "math_id": 1, "text": "\\frac{S(T-1, n-1)n!}{n^T}" }, { "math_id": 2, "text": "f_k(x) := \\sum_T S(T, k) x^T = \\prod_{r=1}^k \\frac{x}{1-rx}" }, { "math_id": 3, "text": "(n-1)! ((D_x x)^kf_{n-1}(x)) \\Big|_{x=1/n}" }, { "math_id": 4, "text": "D_x" }, { "math_id": 5, "text": "d/dx" }, { "math_id": 6, "text": "\\sum_T \\frac{S(T-1, n-1)n!}{n^T} = (n-1)! f_{n-1}(1/n) = (n-1)! \\times \\prod_{r=1}^{n-1} \\frac{1/n}{1-r/n} = 1 " }, { "math_id": 7, "text": "(n-1)! (D_x xf_{n-1}(x)) \\Big|_{x=1/n}" }, { "math_id": 8, "text": "nH_n" }, { "math_id": 9, "text": "T=t_1 + \\cdots + t_n" }, { "math_id": 10, "text": "p_i = \\frac{n - (i - 1)}{n} = \\frac{n - i + 1}{n}" }, { "math_id": 11, "text": "t_i" }, { "math_id": 12, "text": "\\frac{1}{p_i} = \\frac{n}{n - i + 1}" }, { "math_id": 13, "text": "\n\\begin{align}\n\\operatorname{E}(T) & {}= \\operatorname{E}(t_1 + t_2 + \\cdots + t_n) \\\\\n& {}= \\operatorname{E}(t_1) + \\operatorname{E}(t_2) + \\cdots + \\operatorname{E}(t_n) \\\\\n& {}= \\frac{1}{p_1} + \\frac{1}{p_2} + \\cdots + \\frac{1}{p_n} \\\\\n& {}= \\frac{n}{n} + \\frac{n}{n-1} + \\cdots + \\frac{n}{1} \\\\\n& {}= n \\cdot \\left(\\frac{1}{1} + \\frac{1}{2} + \\cdots + \\frac{1}{n}\\right) \\\\\n& {}= n \\cdot H_n.\n\\end{align}\n" }, { "math_id": 14, "text": "\n\\operatorname{E}(T) = n \\cdot H_n = n \\log n + \\gamma n + \\frac{1}{2} + O(1/n),\n" }, { "math_id": 15, "text": "\\gamma \\approx 0.5772156649" }, { "math_id": 16, "text": "\\operatorname{P}(T \\geq cn H_n) \\le \\frac{1}{c}." }, { "math_id": 17, "text": "\n\\begin{align}\n\\operatorname{E}(T_k) & {}= \\operatorname{E}(t_{k+1} + t_{k+2} + \\cdots + t_n) \\\\\n& {}= n \\cdot \\left(\\frac{1}{1} + \\frac{1}{2} + \\cdots + \\frac{1}{n-k}\\right) \\\\\n& {}= n \\cdot H_{n-k}\n\\end{align}\n" }, { "math_id": 18, "text": "k=0" }, { "math_id": 19, "text": "\n\\begin{align}\n\\operatorname{Var}(T)& {}= \\operatorname{Var}(t_1 + \\cdots + t_n) \\\\\n& {} = \\operatorname{Var}(t_1) + \\operatorname{Var}(t_2) + \\cdots + \\operatorname{Var}(t_n) \\\\\n& {} = \\frac{1-p_1}{p_1^2} + \\frac{1-p_2}{p_2^2} + \\cdots + \\frac{1-p_n}{p_n^2} \\\\\n& {} < \\left(\\frac{n^2}{n^2} + \\frac{n^2}{(n-1)^2} + \\cdots + \\frac{n^2}{1^2}\\right) \\\\\n& {} = n^2 \\cdot \\left(\\frac{1}{1^2} + \\frac{1}{2^2} + \\cdots + \\frac{1}{n^2} \\right) \\\\\n& {} < \\frac{\\pi^2}{6} n^2\n\\end{align}\n" }, { "math_id": 20, "text": "\\frac{\\pi^2}6=\\frac{1}{1^2}+\\frac{1}{2^2}+\\cdots+\\frac{1}{n^2}+\\cdots" }, { "math_id": 21, "text": "\\operatorname{P}\\left(|T- n H_n| \\geq cn\\right) \\le \\frac{\\pi^2}{6c^2}." }, { "math_id": 22, "text": "{Z}_i^r" }, { "math_id": 23, "text": "i" }, { "math_id": 24, "text": "r" }, { "math_id": 25, "text": "\n\\begin{align}\nP\\left [ {Z}_i^r \\right ] = \\left(1-\\frac{1}{n}\\right)^r \\le e^{-r / n}.\n\\end{align}\n" }, { "math_id": 26, "text": "r = \\beta n \\log n" }, { "math_id": 27, "text": "P\\left [ {Z}_i^r \\right ] \\le e^{(-\\beta n \\log n ) / n} = n^{-\\beta}" }, { "math_id": 28, "text": "n" }, { "math_id": 29, "text": "\n\\begin{align}\nP\\left [ T > \\beta n \\log n \\right ] = P \\left [ \t\\bigcup_i {Z}_i^{\\beta n \\log n} \\right ] \\le n \\cdot P [ {Z}_1^{\\beta n \\log n} ] \\le n^{-\\beta + 1}.\n\\end{align}\n" }, { "math_id": 30, "text": "\\operatorname{P}(T < n\\log n + cn) \\to e^{-e^{-c}}, \\text{ as } n \\to \\infty." }, { "math_id": 31, "text": "\\operatorname{E}(T_m) = n \\log n + (m-1) n \\log\\log n + O(n), \\text{ as } n \\to \\infty." 
}, { "math_id": 32, "text": "\\operatorname{P}\\left(T_m < n\\log n + (m-1) n \\log\\log n + cn\\right) \\to e^{-e^{-c}/(m-1)!}, \\text{ as } n \\to \\infty." }, { "math_id": 33, "text": "\\operatorname{E}(T)=\\int_0^\\infty \\left(1 - \\prod_{i=1}^m \\left(1-e^{-p_it}\\right)\\right)dt. " }, { "math_id": 34, "text": "\\operatorname{E}(T)=\\sum_{q=0}^{m-1} (-1)^{m-1-q} \\sum_{|J|=q} \\frac{1}{1-P_J}," }, { "math_id": 35, "text": "N(0), N(1), \\dots" }, { "math_id": 36, "text": "N(t)" }, { "math_id": 37, "text": "t" }, { "math_id": 38, "text": "n, n-1, \\dots, 1, 0" }, { "math_id": 39, "text": "p_{i \\to i-1} = i/n, \\quad p_{i \\to i} = 1-i/n" }, { "math_id": 40, "text": "M(t) := N(t) \\left(\\frac{n}{n-1}\\right)^t" }, { "math_id": 41, "text": "E[M(t+1)|M(t)] = (n/(n-1))^{t+1} E[N(t+1)|N(t)] = (n/(n-1))^{t+1} (N(t) - N(t)/n)= M(t) " }, { "math_id": 42, "text": "E[N(t)] = n(1-1/n)^t" }, { "math_id": 43, "text": "\\lim_{n \\to \\infty} E[N(n\\ln n + cn)] = e^{-c} " }, { "math_id": 44, "text": "c > 0" }, { "math_id": 45, "text": "T" }, { "math_id": 46, "text": "\\left(\\frac{n}{n-k}\\right)^t N(t) \\cdots (N(t) - k+1)" }, { "math_id": 47, "text": "E [N(t)^2] = n(n-1)\\left(\\frac{n-2}{n}\\right)^t + n\\left(\\frac{n-1}{n}\\right)^t, \\quad n \\geq 2" }, { "math_id": 48, "text": "\\lim_{n \\to \\infty} Var[N(n\\ln n + cn)] = e^{-c} " }, { "math_id": 49, "text": "\\lim_{n \\to \\infty} E[N(n\\ln n + cn) \\cdots (N(n\\ln n + cn) - k+1 )] = e^{-kc} " }, { "math_id": 50, "text": "N(n \\ln n + cn)" }, { "math_id": 51, "text": "0, 1, 2, \\dots" }, { "math_id": 52, "text": "N" }, { "math_id": 53, "text": "\\begin{aligned}\nE[1] &= 1 \\\\ \nE[N] &= e^{-c} \\\\\nE[N(N-1)] &= e^{-2c} \\\\\nE[N(N-1)(N-2)] &= e^{-3c} \\\\\n & \\vdots\n\\end{aligned}\n" }, { "math_id": 54, "text": "E[1 + Nt/1! + N(N-1)t^2/2! + \\cdots ] = 1 + e^{-c}t/1! + e^{-2c}t^2/2! + \\cdots " }, { "math_id": 55, "text": "E[(1+t)^N] = e^{e^{-c}t} " }, { "math_id": 56, "text": "t \\to -1" }, { "math_id": 57, "text": "Pr(N = 0) = e^{-e^{-c}} " }, { "math_id": 58, "text": "d/dt" }, { "math_id": 59, "text": "Pr(N=k) = \\frac{e^{-kc}}{k!} e^{-e^{-c}}" } ]
https://en.wikipedia.org/wiki?curid=14721784
14721989
Field arithmetic
In mathematics, field arithmetic is a subject that studies the interrelations between arithmetic properties of a [This page is a .? field] and its absolute Galois group. It is an interdisciplinary subject as it uses tools from algebraic number theory, arithmetic geometry, algebraic geometry, model theory, the theory of finite groups and of profinite groups. Fields with finite absolute Galois groups. Let "K" be a field and let "G" = Gal("K") be its absolute Galois group. If "K" is algebraically closed, then "G" = 1. If "K" = R is the real numbers, then formula_0 Here C is the field of complex numbers and Z is the ring of integer numbers. A theorem of Artin and Schreier asserts that (essentially) these are all the possibilities for finite absolute Galois groups. Artin–Schreier theorem. Let "K" be a field whose absolute Galois group "G" is finite. Then either "K" is separably closed and "G" is trivial or "K" is real closed and "G" = Z/2Z. Fields that are defined by their absolute Galois groups. Some profinite groups occur as the absolute Galois group of non-isomorphic fields. A first example for this is formula_1 This group is isomorphic to the absolute Galois group of an arbitrary finite field. Also the absolute Galois group of the field of formal Laurent series C(("t")) over the complex numbers is isomorphic to that group. To get another example, we bring below two non-isomorphic fields whose absolute Galois groups are free (that is free profinite group). In contrast to the above examples, if the fields in question are finitely generated over Q, Florian Pop proves that an isomorphism of the absolute Galois groups yields an isomorphism of the fields: Theorem. Let "K", "L" be finitely generated fields over Q and let "a": Gal("K") → Gal("L") be an isomorphism. Then there exists a unique isomorphism of the algebraic closures, "b": "K"alg → "L"alg, that induces "a". This generalizes an earlier work of Jürgen Neukirch and Koji Uchida on number fields. Pseudo algebraically closed fields. A pseudo algebraically closed field (in short PAC) "K" is a field satisfying the following geometric property. Each absolutely irreducible algebraic variety "V" defined over "K" has a "K"-rational point. Over PAC fields there is a firm link between arithmetic properties of the field and group theoretic properties of its absolute Galois group. A nice theorem in this spirit connects Hilbertian fields with ω-free fields ("K" is ω-free if any embedding problem for "K" is properly solvable). Theorem. Let "K" be a PAC field. Then "K" is Hilbertian if and only if "K" is ω-free. Peter Roquette proved the right-to-left direction of this theorem and conjectured the opposite direction. Michael Fried and Helmut Völklein applied algebraic topology and complex analysis to establish Roquette's conjecture in characteristic zero. Later Pop proved the Theorem for arbitrary characteristic by developing "rigid patching". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G=\\operatorname{Gal}(\\mathbf{C}/\\mathbf{R})=\\mathbf{Z}/2 \\mathbf{Z}." }, { "math_id": 1, "text": "\\hat{\\mathbf{Z}}=\\lim_{\\longleftarrow}\\mathbf{Z}/n \\mathbf{Z}." } ]
https://en.wikipedia.org/wiki?curid=14721989
147230
Golomb coding
Lossless data compression method Golomb coding is a lossless data compression method using a family of data compression codes invented by Solomon W. Golomb in the 1960s. Alphabets following a geometric distribution will have a Golomb code as an optimal prefix code, making Golomb coding highly suitable for situations in which the occurrence of small values in the input stream is significantly more likely than large values. Rice coding. Rice coding (invented by Robert F. Rice) denotes using a subset of the family of Golomb codes to produce a simpler (but possibly suboptimal) prefix code. Rice used this set of codes in an adaptive coding scheme; "Rice coding" can refer either to that adaptive scheme or to using that subset of Golomb codes. Whereas a Golomb code has a tunable parameter that can be any positive integer value, Rice codes are those in which the tunable parameter is a power of two. This makes Rice codes convenient for use on a computer, since multiplication and division by 2 can be implemented more efficiently in binary arithmetic. Rice was motivated to propose this simpler subset due to the fact that geometric distributions are often varying with time, not precisely known, or both, so selecting the seemingly optimal code might not be very advantageous. Rice coding is used as the entropy encoding stage in a number of lossless image compression and audio data compression methods. Overview. Construction of codes. Golomb coding uses a tunable parameter M to divide an input value x into two parts: q, the result of a division by M, and r, the remainder. The quotient is sent in unary coding, followed by the remainder in truncated binary encoding. When formula_0, Golomb coding is equivalent to unary coding. Golomb–Rice codes can be thought of as codes that indicate a number by the position of the "bin" (q), and the "offset" within the "bin" (r). The example figure shows the position q and offset r for the encoding of integer x using Golomb–Rice parameter "M" 3, with source probabilities following a geometric distribution with "p"(0) 0.2. Formally, the two parts are given by the following expression, where x is the nonnegative integer being encoded: formula_1 and formula_2. Both q and r will be encoded using variable numbers of bits: q by a unary code, and r by b bits for Rice code, or a choice between b and bits for Golomb code (i.e. M is not a power of 2), with formula_3. If formula_4, then use b bits to encode r; otherwise, use b+1 bits to encode r. Clearly, formula_5 if M is a power of 2 and we can encode all values of r with b bits. The integer x treated by Golomb was the run length of a Bernoulli process, which has a geometric distribution starting at 0. The best choice of parameter M is a function of the corresponding Bernoulli process, which is parameterized by formula_6 the probability of success in a given Bernoulli trial. M is either the median of the distribution or the median ±1. It can be determined by these inequalities: formula_7 which are solved by formula_8. For the example with "p"(0) 0.2: formula_9. The Golomb code for this distribution is equivalent to the Huffman code for the same probabilities, if it were possible to compute the Huffman code for the infinite set of source values. Use with signed integers. Golomb's scheme was designed to encode sequences of non-negative numbers. 
However, it is easily extended to accept sequences containing negative numbers using an "overlap and interleave" scheme, in which all values are reassigned to some positive number in a unique and reversible way. The sequence begins: 0, −1, 1, −2, 2, −3, 3, −4, 4, ... The "n"-th negative value (i.e., &amp;NoBreak;&amp;NoBreak;) is mapped to the "n"th odd number (&amp;NoBreak;&amp;NoBreak;), and the "m"th positive value is mapped to the "m"-th even number (&amp;NoBreak;&amp;NoBreak;). This may be expressed mathematically as follows: a positive value x is mapped to (formula_10), and a negative value y is mapped to (formula_11). Such a code may be used for simplicity, even if suboptimal. Truly optimal codes for two-sided geometric distributions include multiple variants of the Golomb code, depending on the distribution parameters, including this one. Simple algorithm. Below is the Rice–Golomb encoding, where the remainder code uses simple truncated binary encoding, also named "Rice coding" (other varying-length binary encodings, like arithmetic or Huffman encodings, are possible for the remainder codes, if the statistic distribution of remainder codes is not flat, and notably when not all possible remainders after the division are used). In this algorithm, if the "M" parameter is a power of 2, it becomes equivalent to the simpler Rice encoding: Decoding: Example. Set "M" 10. Thus formula_19. The cutoff is formula_20. For example, with a Rice–Golomb encoding using parameter "M" 10, the decimal number 42 would first be split into q = 4 and r = 2, and would be encoded as qcode(q),rcode(r) = qcode(4),rcode(2) = 11110,010 (you don't need to encode the separating comma in the output stream, because the 0 at the end of the q code is enough to say when q ends and r begins; both the qcode and rcode are self-delimited). "Note that p and 1 – p are reversed in this section compared to the use in earlier sections." Use for run-length encoding. Given an alphabet of two symbols, or a set of two events, "P" and "Q", with probabilities "p" and (1 − "p") respectively, where "p" ≥ 1/2, Golomb coding can be used to encode runs of zero or more "P"′s separated by single "Q"′s. In this application, the best setting of the parameter "M" is the nearest integer to formula_21. When "p" = 1/2, "M" = 1, and the Golomb code corresponds to unary ("n" ≥ 0 "P"′s followed by a "Q" is encoded as "n" ones followed by a zero). If a simpler code is desired, one can assign Golomb–Rice parameter b (i.e., Golomb parameter formula_22) to the integer nearest to formula_23; although not always the best parameter, it is usually the best Rice parameter and its compression performance is quite close to the optimal Golomb code. (Rice himself proposed using various codes for the same data to figure out which was best. A later JPL researcher proposed various methods of optimizing or estimating the code parameter.) Consider using a Rice code with a binary portion having b bits to run-length encode sequences where "P" has a probability p. If formula_24 is the probability that a bit will be part of an k-bit run (formula_25 "P"s and one "Q") and formula_26 is the compression ratio of that run, then the expected compression ratio is formula_27 Compression is often expressed in terms of formula_28, the proportion compressed. For formula_29, the run-length coding approach results in compression ratios close to entropy. For example, using Rice code formula_30 for formula_31 yields compression, while the entropy limit is . 
Adaptive run-length Golomb–Rice encoding. When a probability distribution for integers is not known, the optimal parameter for a Golomb–Rice encoder cannot be determined. Thus, in many applications, a two-pass approach is used: first, the block of data is scanned to estimate a probability density function (PDF) for the data. The Golomb–Rice parameter is then determined from that estimated PDF. A simpler variation of that approach is to assume that the PDF belongs to a parametrized family, estimate the PDF parameters from the data, and then compute the optimal Golomb–Rice parameter. That is the approach used in most of the applications discussed below. An alternative approach to efficiently encode integer data whose PDF is not known, or is varying, is to use a backwards-adaptive encoder. The RLGR encoder achieves that using a very simple algorithm that adjusts the Golomb–Rice parameter up or down, depending on the last encoded symbol. A decoder can follow the same rule to track the variation of the encoding parameters, so no side information needs to be transmitted, just the encoded data. Assuming a generalized Gaussian PDF, which covers a wide range of statistics seen in data such as prediction errors or transform coefficients in multimedia codecs, the RLGR encoding algorithm can perform very well in such applications. Applications. Numerous signal codecs use a Rice code for prediction residues. In predictive algorithms, such residues tend to fall into a two-sided geometric distribution, with small residues being more frequent than large residues, and the Rice code closely approximates the Huffman code for such a distribution without the overhead of having to transmit the Huffman table. One signal that does not match a geometric distribution is a sine wave, because the differential residues create a sinusoidal signal whose values are not creating a geometric distribution (the highest and lowest residue values have similar high frequency of occurrences, only the median positive and negative residues occur less often). Several lossless audio codecs, such as Shorten, FLAC, Apple Lossless, and MPEG-4 ALS, use a Rice code after the linear prediction step (called "adaptive FIR filter" in Apple Lossless). Rice coding is also used in the FELICS lossless image codec. The Golomb–Rice coder is used in the entropy coding stage of Rice algorithm based "lossless image codecs". One such experiment yields the compression ratio graph shown. The JPEG-LS scheme uses Rice–Golomb to encode the prediction residuals. The adaptive version of Golomb–Rice coding mentioned above, the RLGR encoder ,is used for encoding screen content in virtual machines in the RemoteFX component of the Microsoft Remote Desktop Protocol. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M=1" }, { "math_id": 1, "text": "q = \\left \\lfloor \\frac{x}{M} \\right \\rfloor" }, { "math_id": 2, "text": "r = x - qM" }, { "math_id": 3, "text": "b = \\lfloor\\log_2(M)\\rfloor" }, { "math_id": 4, "text": "r < 2^{b+1} - M" }, { "math_id": 5, "text": "b=\\log_2(M)" }, { "math_id": 6, "text": "p = P(x=0)" }, { "math_id": 7, "text": "(1-p)^M + (1-p)^{M+1} \\leq 1 < (1-p)^{M-1} + (1-p)^M," }, { "math_id": 8, "text": "M = \\left\\lceil -\\frac{\\log(2 -p)}{\\log(1-p)}\\right\\rceil" }, { "math_id": 9, "text": "M = \\left\\lceil -\\frac{\\log(1.8)}{\\log(0.8)}\\right\\rceil = \\left\\lceil 2.634 \\right\\rceil = 3" }, { "math_id": 10, "text": "x' = 2|x| = 2x,\\ x \\ge 0" }, { "math_id": 11, "text": "y' = 2|y| - 1 = -2y - 1,\\ y < 0" }, { "math_id": 12, "text": "r < 2^{b+1}-M" }, { "math_id": 13, "text": "r \\ge 2^{b+1}-M" }, { "math_id": 14, "text": "r+2^{b+1}-M" }, { "math_id": 15, "text": "r' < 2^{b+1}-M" }, { "math_id": 16, "text": " r = r' " }, { "math_id": 17, "text": "r = r' - 2^{b+1} + M" }, { "math_id": 18, "text": "N = q * M + r" }, { "math_id": 19, "text": "b = \\lfloor\\log_2(10)\\rfloor = 3" }, { "math_id": 20, "text": "2^{b+1} - M = 16 - 10 = 6" }, { "math_id": 21, "text": "- \\frac{1}{\\log_{2}p}" }, { "math_id": 22, "text": "M=2^b" }, { "math_id": 23, "text": "- \\log_2(-\\log_2 p)" }, { "math_id": 24, "text": "\\mathbb{P}[\\text{bit is part of }k\\text{-run}]" }, { "math_id": 25, "text": "k-1" }, { "math_id": 26, "text": "(\\text{compression ratio of }k\\text{-run})" }, { "math_id": 27, "text": "\\begin{align}\n\\mathbb{E}[\\text{compression ratio}]\n&= \\sum_{k=1}^\\infty (\\text{compression ratio of }k\\text{-run}) \\cdot \\mathbb{P}[\\text{bit is part of }k\\text{-run}] \\\\\n&= \\sum_{k=1}^\\infty \\frac{b+1+\\lfloor 2^{-b}(k-1) \\rfloor}{k} \\cdot kp^{k-1} (1-p)^2 \\\\\n&= (1-p)^2 \\sum_{j=0}^\\infty (b+1+j) \\cdot \\sum_{i=j2^b+1}^{(j+1)2^b} p^{i-1} \\\\\n&= (1-p)^2 \\sum_{j=0}^\\infty (b+1+j) \\cdot \\left(p^{2^b j} - p^{2^{b} (j+1)}\\right) \\\\\n&= (1-p) \\cdot \\left (b + \\sum_{m=0}^\\infty p^{2^b m} \\right ) \\\\\n&= (1-p) \\cdot \\left(b + {\\left (1-p^{2^b} \\right )}^{-1}\\right ) \\\\\n\\end{align}" }, { "math_id": 28, "text": "1-\\mathbb{E}[\\text{compression ratio}]" }, { "math_id": 29, "text": "p \\approx 1" }, { "math_id": 30, "text": "b=6" }, { "math_id": 31, "text": "p=0.99" } ]
https://en.wikipedia.org/wiki?curid=147230
147252
Integration by parts
Mathematical method in calculus In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule. The integration by parts formula states: formula_0 Or, letting formula_1 and formula_2 while formula_3 and formula_4 the formula can be written more compactly: formula_5 The former expression is written as a definite integral and the latter is written as an indefinite integral. Applying the appropriate limits to the latter expression should yield the former, but the latter is not necessarily equivalent to the former. Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715. More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals. The discrete analogue for sequences is called summation by parts. Theorem. Product of two functions. The theorem can be derived as follows. For two continuously differentiable functions formula_6 and formula_7, the product rule states: formula_8 Integrating both sides with respect to formula_9, formula_10 and noting that an indefinite integral is an antiderivative gives formula_11 where we neglect writing the constant of integration. This yields the formula for integration by parts: formula_12 or in terms of the differentials formula_13, formula_14 formula_15 This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values formula_16 and formula_17 and applying the fundamental theorem of calculus gives the definite integral version: formula_18 The original integral formula_19 contains the derivative v'; to apply the theorem, one must find v, the antiderivative of v', then evaluate the resulting integral formula_20 Validity for less smooth functions. It is not necessary for formula_21 and formula_22 to be continuously differentiable. Integration by parts works if formula_21 is absolutely continuous and the function designated formula_23 is Lebesgue integrable (but not necessarily continuous). (If formula_23 has a point of discontinuity then its antiderivative formula_22 may not have a derivative at that point.) If the interval of integration is not compact, then it is not necessary for formula_21 to be absolutely continuous in the whole interval or for formula_23 to be Lebesgue integrable in the interval, as a couple of examples (in which formula_21 and formula_22 are continuous and continuously differentiable) will show. For instance, if formula_24 then formula_21 is not absolutely continuous on the interval [1, ∞), but nevertheless formula_25 so long as formula_26 is taken to mean the limit of formula_27 as formula_28 and so long as the two terms on the right-hand side are finite. This is only true if we choose formula_29 Similarly, if formula_30 then formula_23 is not Lebesgue integrable on the interval [1, ∞), but nevertheless formula_25 with the same interpretation. One can also easily come up with similar examples in which formula_21 and formula_22 are "not" continuously differentiable. 
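For a concrete numerical illustration of the definite-integral form of the rule stated above, the following short Python sketch (our own, not part of the article) approximates both sides of the identity for a sample pair u, v using a simple midpoint rule; the specific functions and the interval are arbitrary choices made for the example.

import math

def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the definite integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Sample pair: u(x) = x**2, v(x) = sin(x), so u'(x) = 2x and v'(x) = cos(x).
u  = lambda x: x * x
du = lambda x: 2 * x
v  = math.sin
dv = math.cos

a, b = 0.0, 2.0
lhs = integrate(lambda x: u(x) * dv(x), a, b)
rhs = u(b) * v(b) - u(a) * v(a) - integrate(lambda x: du(x) * v(x), a, b)
print(lhs, rhs)   # the two values agree to within the quadrature error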
Further, if formula_31 is a function of bounded variation on the segment formula_32 and formula_33 is differentiable on formula_32 then formula_34 where formula_35 denotes the signed measure corresponding to the function of bounded variation formula_36, and functions formula_37 are extensions of formula_38 to formula_39 which are respectively of bounded variation and differentiable. Product of many functions. Integrating the product rule for three multiplied functions, formula_6, formula_7, formula_40, gives a similar result: formula_41 In general, for formula_42 factors formula_43 which leads to formula_44 Visualization. Consider a parametric curve by ("x", "y") = ("f"("t"), "g"("t")). Assuming that the curve is locally one-to-one and integrable, we can define formula_45 The area of the blue region is formula_46 Similarly, the area of the red region is formula_47 The total area "A"1 + "A"2 is equal to the area of the bigger rectangle, "x"2"y"2, minus the area of the smaller one, "x"1"y"1: formula_48 Or, in terms of "t", formula_49 Or, in terms of indefinite integrals, this can be written as formula_50 Rearranging: formula_51 Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region. This visualization also explains why integration by parts may help find the integral of an inverse function "f"−1("x") when the integral of the function "f"("x") is known. Indeed, the functions "x"("y") and "y"("x") are inverses, and the integral ∫ "x" "dy" may be calculated as above from knowing the integral ∫ "y" "dx". In particular, this explains use of integration by parts to integrate logarithm and inverse trigonometric functions. In fact, if formula_52 is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of formula_53in terms of the integral of formula_52. This is demonstrated in the article, Integral of inverse functions. Applications. Finding antiderivatives. Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions "u"("x")"v"("x") such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take: formula_54 On the right-hand side, "u" is differentiated and "v" is integrated; consequently it is useful to choose "u" as a function that simplifies when differentiated, or to choose "v" as a function that simplifies when integrated. As a simple example, consider: formula_55 Since the derivative of ln("x") is , one makes (ln("x")) part "u"; since the antiderivative of is −, one makes part "v". The formula now yields: formula_56 The antiderivative of − can be found with the power rule and is . Alternatively, one may choose "u" and "v" such that the product "u"′ (∫"v" "dx") simplifies due to cancellation. For example, suppose one wishes to integrate: formula_57 If we choose "u"("x") = ln(|sin("x")|) and "v"("x") = sec2x, then "u" differentiates to 1/ tan "x" using the chain rule and "v" integrates to tan "x"; so the formula gives: formula_58 The integrand simplifies to 1, so the antiderivative is "x". Finding a simplifying combination frequently involves experimentation. 
In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below. Polynomials and trigonometric functions. In order to calculate formula_59 let: formula_60 then: formula_61 where "C" is a constant of integration. For higher powers of formula_9 in the form formula_62 repeatedly using integration by parts can evaluate integrals such as these; each application of the theorem lowers the power of formula_9 by one. Exponentials and trigonometric functions. An example commonly used to examine the workings of integration by parts is formula_63 Here, integration by parts is performed twice. First let formula_64 then: formula_65 Now, to evaluate the remaining integral, we use integration by parts again, with: formula_66 Then: formula_67 Putting these together, formula_68 The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get formula_69 which rearranges to formula_70 where again formula_71 (and formula_72) is a constant of integration. A similar method is used to find the integral of secant cubed. Functions multiplied by unity. Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times formula_9 is also known. The first example is formula_73. We write this as: formula_74 Let: formula_75 formula_76 then: formula_77 where formula_71 is the constant of integration. The second example is the inverse tangent function formula_78: formula_79 Rewrite this as formula_80 Now let: formula_81 formula_76 then formula_82 using a combination of the inverse chain rule method and the natural logarithm integral condition. LIATE rule. The LIATE rule is a rule of thumb for integration by parts. It involves choosing as "u" the function that comes first in the following list: L – logarithmic functions, such as formula_83 etc.; I – inverse trigonometric functions, such as formula_84 etc.; A – algebraic functions (polynomials), such as formula_85 etc.; T – trigonometric functions, such as formula_86 etc.; E – exponential functions, such as formula_87 etc. The function which is to be "dv" is whichever comes last in the list. The reason is that functions lower on the list generally have simpler antiderivatives than the functions above them. The rule is sometimes written as "DETAIL", where "D" stands for "dv" and the top of the list is the function chosen to be "dv". An alternative to this rule is the ILATE rule, where inverse trigonometric functions come before logarithmic functions. To demonstrate the LIATE rule, consider the integral formula_88 Following the LIATE rule, "u" = "x", and "dv" = cos("x") "dx", hence "du" = "dx", and "v" = sin("x"), which makes the integral become formula_89 which equals formula_90 In general, one tries to choose "u" and "dv" such that "du" is simpler than "u" and "dv" is easy to integrate. If instead cos("x") were chosen as "u", and "x dx" as "dv", we would have the integral formula_91 which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere. Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. 
For example, to integrate formula_92 one would set formula_93 so that formula_94 Then formula_95 Finally, this results in formula_96 Integration by parts is often used as a tool to prove theorems in mathematical analysis. Wallis product. The Wallis infinite product for formula_97 formula_98 may be derived using integration by parts. Gamma function identity. The gamma function is an example of a special function, defined as an improper integral for formula_99. Integration by parts illustrates it to be an extension of the factorial function: formula_100 Since formula_101 when formula_102 is a natural number, that is, formula_103, applying this formula repeatedly gives the factorial: formula_104 Use in harmonic analysis. Integration by parts is often used in harmonic analysis, particularly Fourier analysis, to show that quickly oscillating integrals with sufficiently smooth integrands decay quickly. The most common example of this is its use in showing that the decay of function's Fourier transform depends on the smoothness of that function, as described below. Fourier transform of derivative. If formula_52 is a formula_105-times continuously differentiable function and all derivatives up to the formula_105th one decay to zero at infinity, then its Fourier transform satisfies formula_106 where formula_107 is the formula_105th derivative of formula_52. (The exact constant on the right depends on the convention of the Fourier transform used.) This is proved by noting that formula_108 so using integration by parts on the Fourier transform of the derivative we get formula_109 Applying this inductively gives the result for general formula_105. A similar method can be used to find the Laplace transform of a derivative of a function. Decay of Fourier transform. The above result tells us about the decay of the Fourier transform, since it follows that if formula_52 and formula_107 are integrable then formula_110 In other words, if formula_52 satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/|"ξ"|"k". In particular, if formula_111 then the Fourier transform is integrable. The proof uses the fact, which is immediate from the definition of the Fourier transform, that formula_112 Using the same idea on the equality stated at the start of this subsection gives formula_113 Summing these two inequalities and then dividing by gives the stated inequality. Use in operator theory. One use of integration by parts in operator theory is that it shows that the −∆ (where ∆ is the Laplace operator) is a positive operator on formula_114 (see "L""p" space). If formula_52 is smooth and compactly supported then, using integration by parts, we have formula_115 Repeated integration by parts. Considering a second derivative of formula_22 in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS: formula_116 Extending this concept of repeated partial integration to derivatives of degree n leads to formula_117 This concept may be useful when the successive integrals of formula_118 are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms), and when the nth derivative of formula_21 vanishes (e.g., as a polynomial function with degree formula_119). The latter condition stops the repeating of partial integration, because the RHS-integral vanishes. 
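The repeated-integration-by-parts formula just stated can also be carried out mechanically. The following SymPy sketch (our own; the function name repeated_ibp and the assumption that SymPy is available are ours) repeatedly differentiates one factor and integrates the other with alternating signs, which is exactly the finite sum above when the n-th derivative of u vanishes, i.e. when u is a polynomial.

import sympy as sp

x = sp.symbols('x')

def repeated_ibp(u, w):
    """Antiderivative of u*w obtained by repeatedly differentiating u and
    integrating w with alternating signs, until a derivative of u is zero.
    Valid when u is a polynomial, so the final integral term vanishes."""
    result = sp.Integer(0)
    sign = 1
    W = sp.integrate(w, x)          # first integral of the other factor
    while u != 0:
        result += sign * u * W
        u = sp.diff(u, x)           # shift one derivative onto u
        W = sp.integrate(W, x)      # and one more integral onto w's side
        sign = -sign
    return sp.simplify(result)

F = repeated_ibp(x**3, sp.cos(x))
print(F)   # equals x**3*sin(x) + 3*x**2*cos(x) - 6*x*sin(x) - 6*cos(x), possibly reordered
print(sp.simplify(sp.diff(F, x) - x**3*sp.cos(x)))   # 0: differentiating recovers the integrand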
In the course of the above repetition of partial integrations the integrals formula_120 and formula_121 and formula_122 get related. This may be interpreted as arbitrarily "shifting" derivatives between formula_22 and formula_21 within the integrand, and proves useful, too (see Rodrigues' formula). Tabular integration by parts. The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration" and was featured in the film "Stand and Deliver" (1988). For example, consider the integral formula_123 and take formula_124 Begin to list in column A the function formula_125 and its subsequent derivatives formula_126 until zero is reached. Then list in column B the function formula_127 and its subsequent integrals formula_128 until the size of column B is the same as that of column A. The result is as follows: column A holds, with the alternating signs +, −, +, −, + attached to rows "i" = 0, 1, 2, 3, 4, the entries x^3, 3x^2, 6x, 6, 0, and column B holds the entries cos("x"), sin("x"), −cos("x"), −sin("x"), cos("x"). The products of the entries in row "i" of columns A and B together with the respective sign give the relevant integrals in step "i" in the course of repeated integration by parts. Step "i" = 0 yields the original integral. For the complete result in step "i" > 0, the "i"-th integral must be added to all the previous products (0 ≤ "j" &lt; "i") of the "j"-th entry of column A and the ("j" + 1)-st entry of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc. ...) with the given "j"-th sign. This process comes to a natural halt when the product which yields the integral is zero ("i" = 4 in the example). The complete result is the following (with the alternating signs in each term): formula_129 This yields formula_130 The repeated partial integration also turns out to be useful when, in the course of respectively differentiating and integrating the functions formula_126 and formula_128, their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index "i". This can happen, expectably, with exponentials and trigonometric functions. As an example, consider formula_131 In this case the product of the terms in columns A and B with the appropriate sign for index "i" = 2 yields the negative of the original integrand (compare formula_132). Observing that the integral on the RHS can have its own constant of integration formula_133, and bringing the abstract integral to the other side, gives formula_134 and finally: formula_135 where formula_136. Higher dimensions. Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function "u" and vector-valued function (vector field) V. The product rule for divergence states: formula_137 Suppose formula_138 is an open bounded subset of formula_139 with a piecewise smooth boundary formula_140. Integrating over formula_138 with respect to the standard volume form formula_141, and applying the divergence theorem, gives: formula_142 where formula_143 is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form formula_144. Rearranging gives: formula_145 or in other words formula_146 The regularity requirements of the theorem can be relaxed. For instance, the boundary formula_147 need only be Lipschitz continuous, and the functions "u", "v" need only lie in the Sobolev space formula_148. Green's first identity. 
Consider the continuously differentiable vector fields formula_149 and formula_150, where formula_151is the "i"-th standard basis vector for formula_152. Now apply the above integration by parts to each formula_153 times the vector field formula_154: formula_155 Summing over "i" gives a new integration by parts formula: formula_156 The case formula_157, where formula_158, is known as the first of Green's identities: formula_159
[ { "math_id": 0, "text": "\\begin{align}\n \\int_a^b u(x) v'(x) \\, dx \n & = \\Big[u(x) v(x)\\Big]_a^b - \\int_a^b u'(x) v(x) \\, dx\\\\\n & = u(b) v(b) - u(a) v(a) - \\int_a^b u'(x) v(x) \\, dx.\n \\end{align}" }, { "math_id": 1, "text": "u = u(x)" }, { "math_id": 2, "text": "du = u'(x) \\,dx" }, { "math_id": 3, "text": "v = v(x)" }, { "math_id": 4, "text": "dv = v'(x) \\, dx," }, { "math_id": 5, "text": "\\int u \\, dv \\ =\\ uv - \\int v \\, du." }, { "math_id": 6, "text": "u(x)" }, { "math_id": 7, "text": "v(x)" }, { "math_id": 8, "text": "\\Big(u(x)v(x)\\Big)' = u'(x) v(x) + u(x) v'(x)." }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "\\int \\Big(u(x)v(x)\\Big)'\\,dx = \\int u'(x)v(x)\\,dx + \\int u(x)v'(x) \\,dx, " }, { "math_id": 11, "text": "u(x)v(x) = \\int u'(x)v(x)\\,dx + \\int u(x)v'(x)\\,dx," }, { "math_id": 12, "text": "\\int u(x)v'(x)\\,dx = u(x)v(x) - \\int u'(x)v(x) \\,dx, " }, { "math_id": 13, "text": " du=u'(x)\\,dx" }, { "math_id": 14, "text": "dv=v'(x)\\,dx, \\quad" }, { "math_id": 15, "text": "\\int u(x)\\,dv = u(x)v(x) - \\int v(x)\\,du." }, { "math_id": 16, "text": "x = a" }, { "math_id": 17, "text": "x = b" }, { "math_id": 18, "text": " \\int_a^b u(x) v'(x) \\, dx \n = u(b) v(b) - u(a) v(a) - \\int_a^b u'(x) v(x) \\, dx . " }, { "math_id": 19, "text": "\\int uv' \\, dx" }, { "math_id": 20, "text": "\\int vu' \\, dx." }, { "math_id": 21, "text": "u" }, { "math_id": 22, "text": "v" }, { "math_id": 23, "text": "v'" }, { "math_id": 24, "text": "u(x)= e^x/x^2, \\, v'(x) =e^{-x}" }, { "math_id": 25, "text": "\\int_1^\\infty u(x)v'(x)\\,dx = \\Big[u(x)v(x)\\Big]_1^\\infty - \\int_1^\\infty u'(x)v(x)\\,dx" }, { "math_id": 26, "text": "\\left[u(x)v(x)\\right]_1^\\infty" }, { "math_id": 27, "text": "u(L)v(L)-u(1)v(1)" }, { "math_id": 28, "text": "L\\to\\infty" }, { "math_id": 29, "text": "v(x)=-e^{-x}." }, { "math_id": 30, "text": "u(x)= e^{-x},\\, v'(x) =x^{-1}\\sin(x)" }, { "math_id": 31, "text": "f(x)" }, { "math_id": 32, "text": "[a,b]," }, { "math_id": 33, "text": "\\varphi(x)" }, { "math_id": 34, "text": "\\int_{a}^{b}f(x)\\varphi'(x)\\,dx=-\\int_{-\\infty}^{\\infty} \\widetilde\\varphi(x)\\,d(\\widetilde\\chi_{[a,b]}(x)\\widetilde f(x))," }, { "math_id": 35, "text": "d(\\chi_{[a,b]}(x)\\widetilde f(x))" }, { "math_id": 36, "text": "\\chi_{[a,b]}(x)f(x)" }, { "math_id": 37, "text": "\\widetilde f, \\widetilde \\varphi" }, { "math_id": 38, "text": "f, \\varphi" }, { "math_id": 39, "text": "\\R," }, { "math_id": 40, "text": "w(x)" }, { "math_id": 41, "text": "\\int_a^b u v \\, dw \\ =\\ \\Big[u v w\\Big]^b_a - \\int_a^b u w \\, dv - \\int_a^b v w \\, du." }, { "math_id": 42, "text": "n" }, { "math_id": 43, "text": "\\left(\\prod_{i=1}^n u_i(x) \\right)' \\ =\\ \\sum_{j=1}^n u_j'(x)\\prod_{i\\neq j}^n u_i(x), " }, { "math_id": 44, "text": " \\left[ \\prod_{i=1}^n u_i(x) \\right]_a^b \\ =\\ \\sum_{j=1}^n \\int_a^b u_j'(x) \\prod_{i\\neq j}^n u_i(x). " }, { "math_id": 45, "text": "\\begin{align}\n x(y) &= f(g^{-1}(y)) \\\\\n y(x) &= g(f^{-1}(x))\n\\end{align}" }, { "math_id": 46, "text": "A_1=\\int_{y_1}^{y_2}x(y) \\, dy" }, { "math_id": 47, "text": "A_2=\\int_{x_1}^{x_2}y(x)\\,dx" }, { "math_id": 48, "text": "\\overbrace{\\int_{y_1}^{y_2}x(y) \\, dy}^{A_1}+\\overbrace{\\int_{x_1}^{x_2}y(x) \\, dx}^{A_2}\\ =\\ \\biggl.x \\cdot y(x)\\biggl|_{x_1}^{x_2} \\ =\\ \\biggl.y \\cdot x(y)\\biggl|_{y_1}^{y_2}" }, { "math_id": 49, "text": "\\int_{t_1}^{t_2}x(t) \\, dy(t) + \\int_{t_1}^{t_2}y(t) \\, dx(t) \\ =\\ \\biggl. 
x(t)y(t) \\biggl|_{t_1}^{t_2}" }, { "math_id": 50, "text": "\\int x\\,dy + \\int y \\,dx \\ =\\ xy" }, { "math_id": 51, "text": "\\int x\\,dy \\ =\\ xy - \\int y \\,dx" }, { "math_id": 52, "text": "f" }, { "math_id": 53, "text": "f^{-1}" }, { "math_id": 54, "text": "\\int uv\\,dx = u \\int v\\,dx - \\int\\left(u' \\int v\\,dx \\right)\\,dx." }, { "math_id": 55, "text": "\\int\\frac{\\ln(x)}{x^2}\\,dx\\,." }, { "math_id": 56, "text": "\\int\\frac{\\ln(x)}{x^2}\\,dx = -\\frac{\\ln(x)}{x} - \\int \\biggl(\\frac1{x}\\biggr) \\biggl(-\\frac1{x}\\biggr)\\,dx\\,." }, { "math_id": 57, "text": "\\int\\sec^2(x)\\cdot\\ln\\Big(\\bigl|\\sin(x)\\bigr|\\Big)\\,dx." }, { "math_id": 58, "text": "\\int\\sec^2(x)\\cdot\\ln\\Big(\\bigl|\\sin(x)\\bigr|\\Big)\\,dx = \\tan(x)\\cdot\\ln\\Big(\\bigl|\\sin(x)\\bigr|\\Big)-\\int\\tan(x)\\cdot\\frac1{\\tan(x)} \\, dx\\ ." }, { "math_id": 59, "text": "I=\\int x\\cos(x)\\,dx\\,," }, { "math_id": 60, "text": "\\begin{alignat}{3}\n u &= x\\ &\\Rightarrow\\ &&du &= dx \\\\\n dv &= \\cos(x)\\,dx\\ &\\Rightarrow\\ && v &= \\int\\cos(x)\\,dx = \\sin(x)\n\\end{alignat}" }, { "math_id": 61, "text": "\\begin{align}\n \\int x\\cos(x)\\,dx & = \\int u\\ dv \\\\\n & = u\\cdot v - \\int v \\, du \\\\\n & = x\\sin(x) - \\int \\sin(x)\\,dx \\\\\n & = x\\sin(x) + \\cos(x) + C,\n \\end{align}" }, { "math_id": 62, "text": "\\int x^n e^x\\,dx,\\ \\int x^n\\sin(x)\\,dx,\\ \\int x^n\\cos(x)\\,dx\\,," }, { "math_id": 63, "text": "I=\\int e^x\\cos(x)\\,dx." }, { "math_id": 64, "text": "\\begin{alignat}{3}\n u &= \\cos(x)\\ &\\Rightarrow\\ &&du &= -\\sin(x)\\,dx \\\\\n dv &= e^x\\,dx\\ &\\Rightarrow\\ &&v &= \\int e^x\\,dx = e^x\n\\end{alignat}" }, { "math_id": 65, "text": "\\int e^x\\cos(x)\\,dx = e^x\\cos(x) + \\int e^x\\sin(x)\\,dx." }, { "math_id": 66, "text": "\\begin{alignat}{3}\n u &= \\sin(x)\\ &\\Rightarrow\\ &&du &= \\cos(x)\\,dx \\\\\n dv &= e^x\\,dx\\,&\\Rightarrow\\ && v &= \\int e^x\\,dx = e^x.\n\\end{alignat}" }, { "math_id": 67, "text": "\\int e^x\\sin(x)\\,dx = e^x\\sin(x) - \\int e^x\\cos(x)\\,dx." }, { "math_id": 68, "text": "\\int e^x\\cos(x)\\,dx = e^x\\cos(x) + e^x\\sin(x) - \\int e^x\\cos(x)\\,dx." }, { "math_id": 69, "text": "2\\int e^x\\cos(x)\\,dx = e^x\\bigl[\\sin(x)+\\cos(x)\\bigr] + C," }, { "math_id": 70, "text": "\\int e^x\\cos(x)\\,dx = \\frac{1}{2}e^x\\bigl[\\sin(x)+\\cos(x)\\bigr] + C'" }, { "math_id": 71, "text": "C" }, { "math_id": 72, "text": "C' = C/2" }, { "math_id": 73, "text": "\\int \\ln(x) dx" }, { "math_id": 74, "text": "I=\\int\\ln(x)\\cdot 1\\,dx\\,." }, { "math_id": 75, "text": "u = \\ln(x)\\ \\Rightarrow\\ du = \\frac{dx}{x}" }, { "math_id": 76, "text": "dv = dx\\ \\Rightarrow\\ v = x" }, { "math_id": 77, "text": "\n\\begin{align}\n\\int \\ln(x)\\,dx & = x\\ln(x) - \\int\\frac{x}{x}\\,dx \\\\\n& = x\\ln(x) - \\int 1\\,dx \\\\\n& = x\\ln(x) - x + C\n\\end{align}\n" }, { "math_id": 78, "text": "\\arctan(x)" }, { "math_id": 79, "text": "I=\\int\\arctan(x)\\,dx." }, { "math_id": 80, "text": "\\int\\arctan(x)\\cdot 1\\,dx." 
}, { "math_id": 81, "text": "u = \\arctan(x)\\ \\Rightarrow\\ du = \\frac{dx}{1+x^2}" }, { "math_id": 82, "text": "\n\\begin{align}\n\\int\\arctan(x)\\,dx\n& = x\\arctan(x) - \\int\\frac{x}{1+x^2}\\,dx \\\\[8pt]\n& = x\\arctan(x) - \\frac{\\ln(1+x^2)}{2} + C\n\\end{align}\n" }, { "math_id": 83, "text": "\\ln(x),\\ \\log_b(x)," }, { "math_id": 84, "text": "\\arctan(x),\\ \\arcsec(x),\\ \\operatorname{arsinh}(x)," }, { "math_id": 85, "text": "x^2,\\ 3x^{50}," }, { "math_id": 86, "text": "\\sin(x),\\ \\tan(x),\\ \\operatorname{sech}(x)," }, { "math_id": 87, "text": "e^x,\\ 19^x," }, { "math_id": 88, "text": "\\int x \\cdot \\cos(x) \\,dx." }, { "math_id": 89, "text": "x \\cdot \\sin(x) - \\int 1 \\sin(x) \\,dx," }, { "math_id": 90, "text": "x \\cdot \\sin(x) + \\cos(x) + C." }, { "math_id": 91, "text": "\\frac{x^2}{2} \\cos(x) + \\int \\frac{x^2}{2} \\sin(x) \\,dx," }, { "math_id": 92, "text": "\\int x^3 e^{x^2} \\,dx," }, { "math_id": 93, "text": "u = x^2, \\quad dv = x \\cdot e^{x^2} \\,dx," }, { "math_id": 94, "text": "du = 2x \\,dx, \\quad v = \\frac{e^{x^2}}{2}." }, { "math_id": 95, "text": "\\int x^3 e^{x^2} \\,dx = \\int \\left(x^2\\right) \\left(xe^{x^2}\\right) \\,dx = \\int u \\,dv\n= uv - \\int v \\,du = \\frac{x^2 e^{x^2}}{2} - \\int x e^{x^2} \\,dx." }, { "math_id": 96, "text": "\\int x^3 e^{x^2} \\,dx = \\frac{e^{x^2}\\left(x^2 - 1\\right)}{2} + C." }, { "math_id": 97, "text": "\\pi" }, { "math_id": 98, "text": "\\begin{align}\n\\frac{\\pi}{2} & = \\prod_{n=1}^\\infty \\frac{ 4n^2 }{ 4n^2 - 1 } = \\prod_{n=1}^\\infty \\left(\\frac{2n}{2n-1} \\cdot \\frac{2n}{2n+1}\\right) \\\\[6pt]\n& = \\Big(\\frac{2}{1} \\cdot \\frac{2}{3}\\Big) \\cdot \\Big(\\frac{4}{3} \\cdot \\frac{4}{5}\\Big) \\cdot \\Big(\\frac{6}{5} \\cdot \\frac{6}{7}\\Big) \\cdot \\Big(\\frac{8}{7} \\cdot \\frac{8}{9}\\Big) \\cdot \\; \\cdots\n\\end{align}" }, { "math_id": 99, "text": "z > 0 " }, { "math_id": 100, "text": "\\begin{align}\n\\Gamma(z) & = \\int_0^\\infty e^{-x} x^{z-1} dx \\\\[6pt]\n & = - \\int_0^\\infty x^{z-1} \\, d\\left(e^{-x}\\right) \\\\[6pt]\n & = - \\Biggl[e^{-x} x^{z-1}\\Biggl]_0^\\infty + \\int_0^\\infty e^{-x} d\\left(x^{z-1}\\right) \\\\[6pt]\n & = 0 + \\int_0^\\infty \\left(z-1\\right) x^{z-2} e^{-x} dx\\\\[6pt]\n & = (z-1)\\Gamma(z-1).\n\\end{align} " }, { "math_id": 101, "text": "\\Gamma(1) = \\int_0^\\infty e^{-x} \\, dx = 1," }, { "math_id": 102, "text": "z" }, { "math_id": 103, "text": " z = n \\in \\mathbb{N} " }, { "math_id": 104, "text": "\\Gamma(n+1) = n!" }, { "math_id": 105, "text": "k" }, { "math_id": 106, "text": "(\\mathcal{F}f^{(k)})(\\xi) = (2\\pi i\\xi)^k \\mathcal{F}f(\\xi)," }, { "math_id": 107, "text": "f^{(k)}" }, { "math_id": 108, "text": "\\frac{d}{dy} e^{-2\\pi iy\\xi} = -2\\pi i\\xi e^{-2\\pi iy\\xi}," }, { "math_id": 109, "text": "\\begin{align}\n(\\mathcal{F}f')(\\xi) &= \\int_{-\\infty}^\\infty e^{-2\\pi iy\\xi} f'(y)\\,dy \\\\\n&=\\left[e^{-2\\pi iy\\xi} f(y)\\right]_{-\\infty}^\\infty - \\int_{-\\infty}^\\infty (-2\\pi i\\xi e^{-2\\pi iy\\xi}) f(y)\\,dy \\\\[5pt]\n&=2\\pi i\\xi \\int_{-\\infty}^\\infty e^{-2\\pi iy\\xi} f(y)\\,dy \\\\[5pt]\n&=2\\pi i\\xi \\mathcal{F}f(\\xi).\n\\end{align}" }, { "math_id": 110, "text": "\\vert\\mathcal{F}f(\\xi)\\vert \\leq \\frac{I(f)}{1+\\vert 2\\pi\\xi\\vert^k}, \\text{ where } I(f) = \\int_{-\\infty}^\\infty \\Bigl(\\vert f(y)\\vert + \\vert f^{(k)}(y)\\vert\\Bigr) \\, dy." 
}, { "math_id": 111, "text": "k \\geq 2" }, { "math_id": 112, "text": "\\vert\\mathcal{F}f(\\xi)\\vert \\leq \\int_{-\\infty}^\\infty \\vert f(y) \\vert \\,dy." }, { "math_id": 113, "text": "\\vert(2\\pi i\\xi)^k \\mathcal{F}f(\\xi)\\vert \\leq \\int_{-\\infty}^\\infty \\vert f^{(k)}(y) \\vert \\,dy." }, { "math_id": 114, "text": "L^2" }, { "math_id": 115, "text": "\\begin{align}\n\\langle -\\Delta f, f \\rangle_{L^2} &= -\\int_{-\\infty}^\\infty f''(x)\\overline{f(x)}\\,dx \\\\[5pt]\n&=-\\left[f'(x)\\overline{f(x)}\\right]_{-\\infty}^\\infty + \\int_{-\\infty}^\\infty f'(x)\\overline{f'(x)}\\,dx \\\\[5pt]\n&=\\int_{-\\infty}^\\infty \\vert f'(x)\\vert^2\\,dx \\geq 0.\n\\end{align}" }, { "math_id": 116, "text": "\\int u v''\\,dx = uv' - \\int u'v'\\,dx = uv' - \\left( u'v - \\int u''v\\,dx \\right)." }, { "math_id": 117, "text": "\\begin{align}\n\\int u^{(0)} v^{(n)}\\,dx &= u^{(0)} v^{(n-1)} - u^{(1)}v^{(n-2)} + u^{(2)}v^{(n-3)} - \\cdots + (-1)^{n-1}u^{(n-1)} v^{(0)} + (-1)^n \\int u^{(n)} v^{(0)} \\,dx.\\\\[5pt]\n&= \\sum_{k=0}^{n-1}(-1)^k u^{(k)}v^{(n-1-k)} + (-1)^n \\int u^{(n)} v^{(0)} \\,dx.\n\\end{align}" }, { "math_id": 118, "text": "v^{(n)}" }, { "math_id": 119, "text": "(n-1)" }, { "math_id": 120, "text": "\\int u^{(0)} v^{(n)}\\,dx \\quad" }, { "math_id": 121, "text": "\\quad \\int u^{(\\ell)} v^{(n-\\ell)}\\,dx \\quad" }, { "math_id": 122, "text": "\\quad \\int u^{(m)} v^{(n-m)}\\,dx \\quad\\text{ for } 1 \\le m,\\ell \\le n" }, { "math_id": 123, "text": "\\int x^3 \\cos x \\,dx \\quad" }, { "math_id": 124, "text": "\\quad u^{(0)} = x^3, \\quad v^{(n)} = \\cos x." }, { "math_id": 125, "text": "u^{(0)} = x^3" }, { "math_id": 126, "text": "u^{(i)}" }, { "math_id": 127, "text": "v^{(n)} = \\cos x" }, { "math_id": 128, "text": "v^{(n-i)}" }, { "math_id": 129, "text": "\\underbrace{(+1)(x^3)(\\sin x)}_{j=0} + \\underbrace{(-1)(3x^2)(-\\cos x)}_{j=1} + \\underbrace{(+1)(6x)(-\\sin x)}_{j=2} +\\underbrace{(-1)(6)(\\cos x)}_{j=3}+ \\underbrace{\\int(+1)(0)(\\cos x) \\,dx}_{i=4: \\;\\to \\;C}." }, { "math_id": 130, "text": "\\underbrace{\\int x^3 \\cos x \\,dx}_{\\text{step 0}} = x^3\\sin x + 3x^2\\cos x - 6x\\sin x - 6\\cos x + C. " }, { "math_id": 131, "text": "\\int e^x \\cos x \\,dx. " }, { "math_id": 132, "text": " \\underbrace{\\int e^x \\cos x \\,dx}_{\\text{step 0}} = \\underbrace{(+1)(e^x)(\\sin x)}_{j=0} + \\underbrace{(-1)(e^x)(-\\cos x)}_{j=1} + \\underbrace{\\int(+1)(e^x)(-\\cos x) \\,dx}_{i= 2}. " }, { "math_id": 133, "text": "C'" }, { "math_id": 134, "text": " 2 \\int e^x \\cos x \\,dx = e^x\\sin x + e^x\\cos x + C', " }, { "math_id": 135, "text": "\\int e^x \\cos x \\,dx = \\frac 12 \\left(e^x ( \\sin x + \\cos x ) \\right) + C," }, { "math_id": 136, "text": "C = C'/2" }, { "math_id": 137, "text": "\\nabla \\cdot ( u \\mathbf{V} ) \\ =\\ u\\, \\nabla \\cdot \\mathbf V \\ +\\ \\nabla u\\cdot \\mathbf V." 
}, { "math_id": 138, "text": "\\Omega" }, { "math_id": 139, "text": "\\R^n" }, { "math_id": 140, "text": "\\Gamma=\\partial\\Omega" }, { "math_id": 141, "text": "d\\Omega" }, { "math_id": 142, "text": "\\int_{\\Gamma} u \\mathbf{V} \\cdot \\hat{\\mathbf n} \\,d\\Gamma \\ =\\ \\int_\\Omega\\nabla\\cdot ( u \\mathbf{V} )\\,d\\Omega \\ =\\ \\int_\\Omega u\\, \\nabla \\cdot \\mathbf V\\,d\\Omega \\ +\\ \\int_\\Omega\\nabla u\\cdot \\mathbf V\\,d\\Omega," }, { "math_id": 143, "text": "\\hat{\\mathbf n}" }, { "math_id": 144, "text": "d\\Gamma" }, { "math_id": 145, "text": "\n\\int_\\Omega u \\,\\nabla \\cdot \\mathbf V\\,d\\Omega \\ =\\ \\int_\\Gamma u \\mathbf V \\cdot \\hat{\\mathbf n}\\,d\\Gamma - \\int_\\Omega \\nabla u \\cdot \\mathbf V \\, d\\Omega,\n" }, { "math_id": 146, "text": "\n\\int_\\Omega u\\,\\operatorname{div}(\\mathbf V)\\,d\\Omega \\ =\\ \\int_\\Gamma u \\mathbf V \\cdot \\hat{\\mathbf n}\\,d\\Gamma - \\int_\\Omega \\operatorname{grad}(u)\\cdot\\mathbf V\\,d\\Omega .\n" }, { "math_id": 147, "text": " \\Gamma=\\partial\\Omega" }, { "math_id": 148, "text": "H^1(\\Omega)" }, { "math_id": 149, "text": "\\mathbf U = u_1\\mathbf e_1+\\cdots+u_n\\mathbf e_n" }, { "math_id": 150, "text": "v \\mathbf e_1,\\ldots, v\\mathbf e_n" }, { "math_id": 151, "text": "\\mathbf e_i" }, { "math_id": 152, "text": "i=1,\\ldots,n" }, { "math_id": 153, "text": "u_i" }, { "math_id": 154, "text": "v\\mathbf e_i" }, { "math_id": 155, "text": "\\int_\\Omega u_i\\frac{\\partial v}{\\partial x_i}\\,d\\Omega \\ =\\ \\int_\\Gamma u_i v \\,\\mathbf e_i\\cdot\\hat\\mathbf{n}\\,d\\Gamma - \\int_\\Omega \\frac{\\partial u_i}{\\partial x_i} v\\,d\\Omega." }, { "math_id": 156, "text": " \\int_\\Omega \\mathbf U \\cdot \\nabla v\\,d\\Omega \\ =\\ \\int_\\Gamma v \\mathbf{U}\\cdot \\hat{\\mathbf n}\\,d\\Gamma - \\int_\\Omega v\\, \\nabla \\cdot \\mathbf{U}\\,d\\Omega." }, { "math_id": 157, "text": "\\mathbf{U}=\\nabla u" }, { "math_id": 158, "text": "u\\in C^2(\\bar{\\Omega})" }, { "math_id": 159, "text": " \\int_\\Omega \\nabla u \\cdot \\nabla v\\,d\\Omega\\ =\\ \\int_\\Gamma v\\, \\nabla u\\cdot\\hat{\\mathbf n}\\,d\\Gamma - \\int_\\Omega v\\, \\nabla^2 u \\, d\\Omega." } ]
https://en.wikipedia.org/wiki?curid=147252
14726322
Complex network zeta function
Different definitions have been given for the dimension of a complex network or graph. For example, metric dimension is defined in terms of the resolving set for a graph. Dimension has also been defined based on the box covering method applied to graphs. Here we describe the definition based on the complex network zeta function. This generalises the definition based on the scaling property of the volume with distance. The best definition depends on the application. Definition. One usually thinks of dimension for a set which is dense, like the points on a line, for example. Dimension makes sense in a discrete setting, like for graphs, only in the large system limit, as the size tends to infinity. For example, in Statistical Mechanics, one considers discrete points which are located on regular lattices of different dimensions. Such studies have been extended to arbitrary networks, and it is interesting to consider how the definition of dimension can be extended to cover these cases. A very simple and obvious way to extend the definition of dimension to arbitrary large networks is to consider how the volume (number of nodes within a given distance from a specified node) scales as the distance (shortest path connecting two nodes in the graph) is increased. For many systems arising in physics, this is indeed a useful approach. This definition of dimension could be put on a strong mathematical foundation, similar to the definition of Hausdorff dimension for continuous systems. The mathematically robust definition uses the concept of a zeta function for a graph. The complex network zeta function and the graph surface function were introduced to characterize large graphs. They have also been applied to study patterns in Language Analysis. In this section we will briefly review the definition of the functions and discuss further some of their properties which follow from the definition. We denote by formula_0 the distance from node formula_1 to node formula_2, i.e., the length of the shortest path connecting the first node to the second node. formula_3 is formula_4 if there is no path from node formula_1 to node formula_2. With this definition, the nodes of the complex network become points in a metric space. Simple generalisations of this definition can be studied, e.g., we could consider weighted edges. The graph surface function, formula_5, is defined as the number of nodes which are exactly at a distance formula_6 from a given node, averaged over all nodes of the network. The complex network zeta function formula_7 is defined as formula_8 where formula_9 is the graph size, measured by the number of nodes. When formula_10 is zero all nodes contribute equally to the sum in the previous equation. This means that formula_11 is formula_12, and it diverges when formula_13. When the exponent formula_10 tends to infinity, the sum gets contributions only from the nearest neighbours of a node. The other terms tend to zero. Thus, formula_7 tends to the average degree formula_14 for the graph as formula_15. formula_16 The need for taking an average over all nodes can be avoided by using the concept of supremum over nodes, which makes the concept much easier to apply for formally infinite graphs. The definition can be expressed as a weighted sum over the node distances. This gives the Dirichlet series relation formula_17 This definition has been used in the shortcut model to study several processes and their dependence on dimension. Properties. 
formula_7 is a decreasing function of formula_10, formula_18, if formula_19. If the average degree of the nodes (the mean coordination number for the graph) is finite, then there is exactly one value of formula_10, formula_20, at which the complex network zeta function transitions from being infinite to being finite. This has been defined as the dimension of the complex network. If we add more edges to an existing graph, the distances between nodes will decrease. This results in an increase in the value of the complex network zeta function, since formula_21 will get pulled inward. If the new links connect remote parts of the system, i.e., if the distances change by amounts which do not remain finite as the graph size formula_13, then the dimension tends to increase. For regular discrete "d"-dimensional lattices formula_22 with distance defined using the formula_23 norm formula_24 the transition occurs at formula_25. The definition of dimension using the complex network zeta function satisfies properties like monotonicity (a subset has a lower or the same dimension as its containing set), stability (a union of sets has the maximum dimension of the component sets forming the union) and Lipschitz invariance, provided the operations involved change the distances between nodes only by finite amounts as the graph size formula_9 goes to formula_4. Algorithms to calculate the complex network zeta function have been presented. Values for discrete regular lattices. For a one-dimensional regular lattice the graph surface function formula_26 is exactly two for all values of formula_27 (there are two nearest neighbours, two next-nearest neighbours, and so on). Thus, the complex network zeta function formula_7 is equal to formula_28, where formula_29 is the usual Riemann zeta function. By choosing a given axis of the lattice and summing over cross-sections for the allowed range of distances along the chosen axis the recursion relation below can be derived formula_30 From combinatorics the surface function for a regular lattice can be written as formula_31 The following expression for the sum of positive integers raised to a given power formula_32 will be useful to calculate the surface function for higher values of formula_33: formula_34 Another formula for the sum of positive integers raised to a given power formula_32 is formula_35 formula_36 as formula_37. The Complex network zeta function for some lattices is given below. formula_38 : formula_39 formula_40 : formula_41 formula_42 : formula_43) formula_44 : formula_45 formula_37 : formula_46 (for formula_47 near the transition point.) Random graph zeta function. Random graphs are networks having some number formula_9 of vertices, in which each pair is connected with probability formula_48, or else the pair is disconnected. Random graphs have a diameter of two with probability approaching one, in the infinite limit (formula_13). To see this, consider two nodes formula_49 and formula_50. For any node formula_51 different from formula_49 or formula_50, the probability that formula_51 is not simultaneously connected to both formula_49 and formula_50 is formula_52. Thus, the probability that none of the formula_53 nodes provides a path of length formula_54 between nodes formula_49 and formula_50 is formula_55. This goes to zero as the system size goes to infinity, and hence most random graphs have their nodes connected by paths of length at most formula_54. Also, the mean vertex degree will be formula_56. 
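The definition given above can be evaluated directly on small graphs. The following Python sketch is our own illustration (the function names and the adjacency-list representation are ours): it computes the shortest-path distances r_ij by breadth-first search, evaluates ζ_G(α), and checks the two limiting values stated earlier, namely ζ_G(0) = N − 1 and the large-α limit equal to the average degree.

from collections import deque

def bfs_distances(adj, src):
    """Hop-count shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def graph_zeta(adj, alpha):
    """zeta_G(alpha) = (1/N) * sum_i sum_{j != i} r_ij ** (-alpha).
    Pairs with no connecting path are simply skipped here."""
    n = len(adj)
    total = 0.0
    for i in adj:
        for j, r in bfs_distances(adj, i).items():
            if j != i:
                total += r ** (-alpha)
    return total / n

# A 6-node cycle as a toy example (every node has degree 2).
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(graph_zeta(adj, 0.0))    # N - 1 = 5
print(graph_zeta(adj, 50.0))   # approaches the mean degree <k> = 2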
For large random graphs almost all nodes are at a distance of one or two from any given node, formula_57 is formula_56, formula_58 is formula_59, and the graph zeta function is formula_60 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\textstyle r_{ij} " }, { "math_id": 1, "text": "\\textstyle i" }, { "math_id": 2, "text": "\\textstyle j" }, { "math_id": 3, "text": "\\textstyle r_{ij}" }, { "math_id": 4, "text": "\\textstyle \\infty" }, { "math_id": 5, "text": "\\textstyle S(r )" }, { "math_id": 6, "text": " \\textstyle r " }, { "math_id": 7, "text": "\\textstyle \\zeta_G ( \\alpha )" }, { "math_id": 8, "text": " \\zeta_G ( \\alpha ) := \\frac{1}{N}\\sum_i \\sum_{j\\neq i }r^{-\\alpha}_{ij}, " }, { "math_id": 9, "text": "\\textstyle N" }, { "math_id": 10, "text": "\\textstyle \\alpha" }, { "math_id": 11, "text": "\\textstyle \\zeta_{G}(0)" }, { "math_id": 12, "text": "\\textstyle N-1" }, { "math_id": 13, "text": "\\textstyle N \\rightarrow \\infty" }, { "math_id": 14, "text": "\\textstyle <k>" }, { "math_id": 15, "text": "\\textstyle \\alpha \\rightarrow \\infty" }, { "math_id": 16, "text": " \\langle k \\rangle = \\lim_{\\alpha \\rightarrow \\infty} \\zeta_G ( \\alpha ). " }, { "math_id": 17, "text": " \\zeta_G ( \\alpha ) = \\sum_r S(r)/r^{\\alpha}. " }, { "math_id": 18, "text": "\\textstyle \\zeta_G ( \\alpha_1 ) > \\zeta_G ( \\alpha_2 )" }, { "math_id": 19, "text": "\\textstyle \\alpha_1 < \\alpha_2" }, { "math_id": 20, "text": "\\textstyle \\alpha_{transition}" }, { "math_id": 21, "text": "\\textstyle S(r)" }, { "math_id": 22, "text": "\\textstyle \\mathbf Z^d" }, { "math_id": 23, "text": "\\textstyle L^1" }, { "math_id": 24, "text": " \\|\\vec{n}\\|_1=\\|n_1\\|+\\cdots +\\|n_d\\|, " }, { "math_id": 25, "text": "\\textstyle \\alpha = d" }, { "math_id": 26, "text": "\\textstyle S_{1}(r)" }, { "math_id": 27, "text": "\\textstyle r" }, { "math_id": 28, "text": "\\textstyle 2\\zeta(\\alpha)" }, { "math_id": 29, "text": "\\textstyle \\zeta(\\alpha)" }, { "math_id": 30, "text": " S_{d+1}(r) = 2+S_d (r)+2\\sum^{r-1}_{i=1}S_d(i). " }, { "math_id": 31, "text": " S_d(r) = \\sum^{d-1}_{i=0}(-1)^{i}2^{d-i}{d \\choose i} { d+r-i-1 \\choose d-i-1 }. " }, { "math_id": 32, "text": "\\textstyle k" }, { "math_id": 33, "text": "\\textstyle d" }, { "math_id": 34, "text": " \\sum^{r}_{i=1}i^k = \\frac{r^{k+1}}{(k+1)}+ \\frac{r^k}{2}+\\sum^{\\lfloor (k+1)/2 \\rfloor}_{j=1}\\frac{(-1)^{j+1}2\\zeta(2j)k!r^{k+1-2j}}{(2\\pi)^{2j}(k+1-2j)!}. " }, { "math_id": 35, "text": " \\sum^{n}_{k=1}\\bigl(\\begin{smallmatrix} n+1\\ k \\end{smallmatrix}\\bigr)\\sum^{r}_{i=1}i^{k} = (r+1)((r+1)^{n}-1). 
" }, { "math_id": 36, "text": "\\textstyle S_{d}(r) \\rightarrow O(2^{d}r^{d-1}/\\Gamma(d))" }, { "math_id": 37, "text": "\\textstyle r\\rightarrow \\infty" }, { "math_id": 38, "text": "\\textstyle d=1" }, { "math_id": 39, "text": "\\textstyle \\zeta_G(\\alpha)=2\\zeta(\\alpha)" }, { "math_id": 40, "text": "\\textstyle d=2" }, { "math_id": 41, "text": "\\textstyle \\zeta_G(\\alpha)=4\\zeta(\\alpha-1)" }, { "math_id": 42, "text": "\\textstyle d=3" }, { "math_id": 43, "text": "\\textstyle \\zeta_G(\\alpha)=4\\zeta(\\alpha-2)+2\\zeta(\\alpha" }, { "math_id": 44, "text": "\\textstyle d=4" }, { "math_id": 45, "text": "\\textstyle \\zeta_G(\\alpha)=\\frac{8}{3}\\zeta(\\alpha-3)+\\frac{16}{3}\\zeta(\\alpha-1)" }, { "math_id": 46, "text": "\\textstyle \\zeta_{G}(\\alpha)=2^{d}\\zeta(\\alpha-d+1)/\\Gamma(d)" }, { "math_id": 47, "text": "\\alpha" }, { "math_id": 48, "text": "\\textstyle p" }, { "math_id": 49, "text": "\\textstyle A" }, { "math_id": 50, "text": "\\textstyle B" }, { "math_id": 51, "text": "\\textstyle C" }, { "math_id": 52, "text": "\\textstyle (1-p^2)" }, { "math_id": 53, "text": "\\textstyle N-2" }, { "math_id": 54, "text": "\\textstyle 2" }, { "math_id": 55, "text": "\\textstyle (1-p^2)^{N-2}" }, { "math_id": 56, "text": "\\textstyle p(N-1)" }, { "math_id": 57, "text": "\\textstyle S(1)" }, { "math_id": 58, "text": "\\textstyle S(2)" }, { "math_id": 59, "text": "\\textstyle (N-1)(1-p)" }, { "math_id": 60, "text": " \\zeta_G ( \\alpha ) = p(N-1) + (N-1)(1-p)2^{-\\alpha}. " } ]
https://en.wikipedia.org/wiki?curid=14726322
14729575
Lattice sieving
Integer factorization algorithm Lattice sieving is a technique for finding smooth values of a bivariate polynomial formula_0 over a large region. It is almost exclusively used in conjunction with the number field sieve. The original idea of the lattice sieve came from John Pollard. The algorithm implicitly involves the ideal structure of the number field of the polynomial; it takes advantage of the theorem that any prime ideal above some rational prime "p" can be written as formula_1. One then picks many prime numbers "q" of an appropriate size, usually just above the factor base limit, and proceeds by For each "q", list the prime ideals above "q" by factorising the polynomial f(a,b) over formula_2 For each of these prime ideals, which are called 'special formula_3's, construct a reduced basis formula_4 for the lattice L generated by formula_5; set a two-dimensional array called the sieve region to zero. For each prime ideal formula_6 in the factor base, construct a reduced basis formula_7 for the sublattice of L generated byformula_8 For each element of that sublattice lying within a sufficiently large sieve region, add formula_9 to that entry. Read out all the entries in the sieve region with a large enough value For the number field sieve application, it is necessary for two polynomials both to have smooth values; this is handled by running the inner loop over both polynomials, whilst the special-q can be taken from either side. Treatments of the inmost loop. There are a number of clever approaches to implementing the inmost loop, since listing the elements of a lattice within a rectangular region efficiently is itself a non-trivial problem, and efficiently batching together updates to a sieve region in order to take advantage of cache structures is another non-trivial problem. The normal solution to the first is to have an ordering of the lattice points defined by couple of generators picked so that the decision rule which takes you from one lattice point to the next is straightforward; the normal solution to the second is to collect a series of lists of updates to sub-regions of the array smaller than the size of the level-2 cache, with the number of lists being roughly the number of lines in the L1 cache so that adding an entry to a list is generally a cache hit, and then applying the lists of updates one at a time, where each application will be a level-2 cache hit. For this to be efficient you need to be able to store a number of updates at least comparable to the size of the sieve array, so this can be quite profligate in memory usage. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
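As a rough illustration of the inner loop discussed above — listing the points of a sublattice that fall inside a rectangular sieve region and accumulating log |p| — the following Python sketch may help. It is our own toy version: the basis vectors are hand-picked stand-ins rather than reduced bases computed from actual prime ideals, the enumeration is deliberately naive, and none of the cache-aware bucketing described in the article is attempted.

import math

def sieve_region(width, height, primes_with_bases):
    """Naive inner loop of a lattice sieve: for each factor-base prime p with a
    (toy, already-reduced) basis (u, v) of the sublattice it defines, add log(p)
    to every cell hit by an integer combination i*u + j*v inside the region."""
    region = [[0.0] * width for _ in range(height)]
    for p, (u, v) in primes_with_bases:
        logp = math.log(p)
        for i in range(-width, width):
            for j in range(-height, height):
                x = i * u[0] + j * v[0]
                y = i * u[1] + j * v[1]
                if 0 <= x < width and 0 <= y < height:
                    region[y][x] += logp

    return region

# Toy data: two "primes" whose sublattices are x = 2y (mod 5) and x = 3y (mod 7).
region = sieve_region(16, 8, [(5, ((5, 0), (2, 1))), (7, ((7, 0), (3, 1)))])
threshold = math.log(5) + math.log(7) - 0.1
hits = [(x, y) for y in range(8) for x in range(16) if region[y][x] >= threshold]
print(hits)   # cells whose accumulated log value is large enough to report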
[ { "math_id": 0, "text": "f(a,b)" }, { "math_id": 1, "text": "p \\mathbb Z[\\alpha] + (u + v \\alpha) \\mathbb Z[\\alpha]" }, { "math_id": 2, "text": "GF(q)" }, { "math_id": 3, "text": "\\mathfrak q" }, { "math_id": 4, "text": "\\mathbf x, \\mathbf y" }, { "math_id": 5, "text": "\\mathfrak {q}" }, { "math_id": 6, "text": "\\mathfrak p" }, { "math_id": 7, "text": "\\mathbf x_\\mathfrak p, \\mathbf y_\\mathfrak p" }, { "math_id": 8, "text": "\\mathfrak {pq}" }, { "math_id": 9, "text": "\\log |\\mathfrak p|" } ]
https://en.wikipedia.org/wiki?curid=14729575
147337
List of relativistic equations
Following is a list of the frequently occurring equations in the theory of special relativity. Postulates of Special Relativity. To derive the equations of special relativity, one must start with two other In this context, "speed of light" really refers to the speed supremum of information transmission or of the movement of ordinary (nonnegative mass) matter, locally, as in a classical vacuum. Thus, a more accurate description would refer to formula_0 rather than the speed of light per se. However, light and other massless particles do theoretically travel at formula_0 under vacuum conditions and experiment has nonfalsified this notion with fairly high precision. Regardless of whether light itself does travel at formula_0, though formula_0 does act as such a supremum, and that is the assumption which matters for Relativity. From these two postulates, all of special relativity follows. In the following, the relative velocity "v" between two inertial frames is restricted fully to the "x"-direction, of a Cartesian coordinate system. Kinematics. Lorentz transformation. The following notations are used very often in special relativity: formula_1 where formula_2 and "v" is the relative velocity between two inertial frames. For two frames at rest, γ = 1, and increases with relative velocity between the two inertial frames. As the relative velocity approaches the speed of light, γ → ∞. formula_3 In this example the time measured in the frame on the vehicle, "t", is known as the proper time. The proper time between two events - such as the event of light being emitted on the vehicle and the event of light being received on the vehicle - is the time between the two events in a frame where the events occur at the same location. So, above, the emission and reception of the light both took place in the vehicle's frame, making the time that an observer in the vehicle's frame would measure the proper time. formula_4 This is the formula for length contraction. As there existed a proper time for time dilation, there exists a proper length for length contraction, which in this case is "ℓ". The proper length of an object is the length of the object in the frame in which the object is at rest. Also, this contraction only affects the dimensions of the object which are parallel to the relative velocity between the object and observer. Thus, lengths perpendicular to the direction of motion are unaffected by length contraction. formula_5 formula_6 formula_7 formula_8 formula_9 formula_10 formula_11 The metric and four-vectors. In what follows, bold sans serif is used for 4-vectors while normal bold roman is used for ordinary 3-vectors. formula_12 where formula_13 is known as the metric tensor. In special relativity, the metric tensor is the Minkowski metric: formula_14 formula_15 In the above, "ds"2 is known as the spacetime interval. This inner product is invariant under the Lorentz transformation, that is, formula_16 The sign of the metric and the placement of the "ct", "ct"', "cdt", and "cdt′" time-based terms can vary depending on the author's choice. For instance, many times the time-based terms are placed first in the four-vectors, with the spatial terms following. Also, sometimes "η" is replaced with −"η", making the spatial terms produce negative contributions to the dot product or spacetime interval, while the time term makes a positive contribution. These differences can be used in any combination, so long as the choice of standards is followed completely throughout the computations performed. 
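The kinematic formulas above can be checked numerically. The following Python sketch is our own illustration (the event coordinates and the chosen velocities are arbitrary): it applies the x-direction Lorentz transformation to an event, confirms that the spacetime interval built from the Minkowski metric comes out the same in both frames, and evaluates the relativistic velocity-addition formula.

import math

c = 299_792_458.0          # speed of light, m/s
v = 0.6 * c                # relative velocity of the primed frame along x
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# An arbitrary event (t, x, y, z) in the unprimed frame.
t, x, y, z = 2.0, 4.0e8, 1.0e7, -3.0e7

# Lorentz transformation restricted to the x-direction.
xp = gamma * (x - v * t)
tp = gamma * (t - v * x / c ** 2)
yp, zp = y, z

s2  = -(c * t) ** 2 + x ** 2 + y ** 2 + z ** 2
s2p = -(c * tp) ** 2 + xp ** 2 + yp ** 2 + zp ** 2
print(gamma)               # 1.25 for v = 0.6c
print(s2, s2p)             # the interval is the same in both frames, up to rounding

# Relativistic velocity addition for a velocity u_x measured in the unprimed frame.
ux = 0.7 * c
ux_prime = (ux - v) / (1.0 - ux * v / c ** 2)
print(ux_prime / c)        # about 0.172, still below the speed of light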
Lorentz transforms. It is possible to express the above coordinate transformation via a matrix. To simplify things, it can be best to replace "t", "t′", "dt", and "dt′" with "ct", "ct"', "cdt", and "cdt′", which has the dimensions of distance. So: formula_17 formula_6 formula_7 formula_18 then in matrix form: formula_19 The vectors in the above transformation equation are known as four-vectors, in this case they are specifically the position four-vectors. In general, in special relativity, four-vectors can be transformed from one reference frame to another as follows: formula_20 In the above, formula_21 and formula_22 are the four-vector and the transformed four-vector, respectively, and Λ is the transformation matrix, which, for a given transformation is the same for all four-vectors one might want to transform. So formula_21 can be a four-vector representing position, velocity, or momentum, and the same Λ can be used when transforming between the same two frames. The most general Lorentz transformation includes boosts and rotations; the components are complicated and the transformation requires spinors. 4-vectors and frame-invariant results. Invariance and unification of physical quantities both arise from four-vectors. The inner product of a 4-vector with itself is equal to a scalar (by definition of the inner product), and since the 4-vectors are physical quantities their magnitudes correspond to physical quantities also. Doppler shift. General doppler shift: formula_23 Doppler shift for emitter and observer moving right towards each other (or directly away): formula_24 Doppler shift for emitter and observer moving in a direction perpendicular to the line connecting them: formula_25 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c_0" }, { "math_id": 1, "text": " \\gamma = \\frac{1}{\\sqrt{1 - \\beta^2}}" }, { "math_id": 2, "text": "\\beta= \\frac{v}{c}" }, { "math_id": 3, "text": " t' = \\gamma t" }, { "math_id": 4, "text": "\\ell' = \\frac{\\ell}{\\gamma}" }, { "math_id": 5, "text": "x' = \\gamma \\left ( x - v t \\right )" }, { "math_id": 6, "text": "y' = y \\," }, { "math_id": 7, "text": "z' = z \\," }, { "math_id": 8, "text": "t' = \\gamma \\left ( t - \\frac{v x}{c^2} \\right )" }, { "math_id": 9, "text": "V'_x=\\frac{ V_x - v }{ 1 - \\frac{V_x v}{c^2} }" }, { "math_id": 10, "text": "V'_y=\\frac{ V_y }{ \\gamma \\left ( 1 - \\frac{V_x v}{c^2} \\right ) }" }, { "math_id": 11, "text": "V'_z=\\frac{ V_z }{ \\gamma \\left ( 1 - \\frac{V_x v}{c^2} \\right ) }" }, { "math_id": 12, "text": " \\boldsymbol{\\mathsf{a}} \\cdot \\boldsymbol{\\mathsf{b}} =\\eta (\\boldsymbol{\\mathsf{a}} , \\boldsymbol{\\mathsf{b}})" }, { "math_id": 13, "text": "\\eta" }, { "math_id": 14, "text": "\\eta = \\begin{pmatrix} -1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\end{pmatrix}" }, { "math_id": 15, "text": "ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2 = \\begin{pmatrix} cdt & dx & dy & dz \\end{pmatrix} \\begin{pmatrix} -1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\end{pmatrix} \\begin{pmatrix} cdt \\\\ dx \\\\ dy \\\\ dz \\end{pmatrix}" }, { "math_id": 16, "text": " \\eta ( \\boldsymbol{\\mathsf{a}}' , \\boldsymbol{\\mathsf{b}}' ) = \\eta \\left ( \\Lambda \\boldsymbol{\\mathsf{a}} , \\Lambda \\boldsymbol{\\mathsf{b}} \\right ) = \\eta ( \\boldsymbol{\\mathsf{a}} , \\boldsymbol{\\mathsf{b}} )" }, { "math_id": 17, "text": "x' = \\gamma x - \\gamma \\beta c t \\," }, { "math_id": 18, "text": "c t' = \\gamma c t - \\gamma \\beta x \\," }, { "math_id": 19, "text": "\\begin{pmatrix} c t' \\\\ x' \\\\ y' \\\\ z' \\end{pmatrix} = \\begin{pmatrix} \\gamma & - \\gamma \\beta & 0 & 0 \\\\ - \\gamma \\beta & \\gamma & 0 & 0\\\\ 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\end{pmatrix}\\begin{pmatrix} c t \\\\ x \\\\ y \\\\ z \\end{pmatrix}" }, { "math_id": 20, "text": "\\boldsymbol{\\mathsf{a}}' = \\Lambda \\boldsymbol{\\mathsf{a}}" }, { "math_id": 21, "text": "\\boldsymbol{\\mathsf{a}}'" }, { "math_id": 22, "text": "\\boldsymbol{\\mathsf{a}}" }, { "math_id": 23, "text": "\\nu' = \\gamma \\nu \\left ( 1 - \\beta \\cos \\theta \\right )" }, { "math_id": 24, "text": "\\nu' = \\nu \\frac{\\sqrt{1 - \\beta}}{\\sqrt{1 + \\beta}}" }, { "math_id": 25, "text": "\\nu' = \\gamma \\nu" } ]
https://en.wikipedia.org/wiki?curid=147337
14734259
CLP(R)
Constraint logic programming over rational and real numbers CLP(R) is a declarative programming language. It stands for constraint logic programming (real) where real refers to the real numbers. It can be considered and is generally implemented as a superset or add-on package for a Prolog implementation. Example rule. The simultaneous linear equations: formula_0 are expressed in CLP(R) as: 3*X + 4*Y - 2*Z = 8, X - 5*Y + Z = 10, 2*X + 3*Y - Z = 20. and a typical implementation's response would be:
Z = 35.75
Y = 8.25
X = 15.5
Yes
Example program. CLP(R) allows the definition of predicates using recursive definitions. For example, a "mortgage" relation can be defined as relating the principal P, the number of time periods of the loan T, the repayment each period R, the interest rate per period I and the final balance owing at the end of the loan B.
mg(P, T, R, I, B) :- T = 0, B = P.
mg(P, T, R, I, B) :- T >= 1, P1 = P*(1+I) - R, mg(P1, T - 1, R, I, B).
The first rule expresses that for a 0 period loan the balance owing at the end is simply the original principal. The second rule expresses that for a loan of at least one time period we can calculate the new owing amount P1 by multiplying the principal by 1 plus the interest rate and subtracting the repayment. The remainder of the loan is treated as another mortgage for the new principal and one less time period. What can you do with it? You can ask many questions. If I borrow $1000 for 10 years at 10% per year repaying 150 per year, how much will I owe at the end? ?- mg(1000, 10, 150, 10/100, B). The system responds with the answer B = 203.129. How much can I borrow with a 10 year loan at 10% repaying 150 each year to owe nothing at the end? ?- mg(P, 10, 150, 10/100, 0). The system responds with the answer P = 921.685. What is the relationship between the principal, repayment and balance on a 10 year loan at 10% interest? ?- mg(P, 10, R, 10/100, B). The system responds with the answer P = 0.3855*B + 6.1446 * R. This shows the relationship between the variables, without requiring any to take a particular value. Prolog Integration. CLP(R) was first integrated into a Prolog system in 1994, namely into SICStus Prolog. This implementation has since been ported to many popular Prolog systems, including Ciao, SWI-Prolog and XSB. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
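The two numerical answers quoted above can be reproduced outside CLP(R) by iterating the same recurrence; the following Python sketch (illustrative only, not CLP(R) code, with function and variable names invented for the example) confirms them:

def balance(principal, periods, repayment, rate):
    # balance owing after the given number of repayment periods
    b = principal
    for _ in range(periods):
        b = b * (1 + rate) - repayment
    return b

print(round(balance(1000, 10, 150, 0.10), 3))   # 203.129

# principal repaid exactly by ten payments of 150 at 10 per cent
i, t, r = 0.10, 10, 150
print(round(r * ((1 + i) ** t - 1) / (i * (1 + i) ** t), 3))   # 921.685

Unlike CLP(R), this forward computation only answers queries with known inputs; the symbolic relationship between P, R and B returned by the third query above is exactly what constraint solving adds.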
[ { "math_id": 0, "text": "\\begin{cases}\n3x + 4y - 2z = 8\\\\\nx - 5y + z = 10\\\\\n2x + 3y -z = 20\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=14734259
14736250
Delannoy number
Number of paths between grid corners, allowing diagonal steps In mathematics, a Delannoy number formula_0 counts the paths from the southwest corner (0, 0) of a rectangular grid to the northeast corner ("m", "n"), using only single steps north, northeast, or east. The Delannoy numbers are named after French army officer and amateur mathematician Henri Delannoy. The Delannoy number formula_1 also counts the global alignments of two sequences of lengths formula_2 and formula_3, the points in an "m"-dimensional integer lattice or cross polytope which are at most "n" steps from the origin, and, in cellular automata, the cells in an "m"-dimensional von Neumann neighborhood of radius "n". Example. The Delannoy number "D"(3,3) equals 63. The following figure illustrates the 63 Delannoy paths from (0, 0) to (3, 3): The subset of paths that do not rise above the SW–NE diagonal is counted by a related family of numbers, the Schröder numbers. Delannoy array. The Delannoy array is an infinite matrix of the Delannoy numbers; its upper-left corner is
1 1 1 1 1 1
1 3 5 7 9 11
1 5 13 25 41 61
1 7 25 63 129 231
1 9 41 129 321 681
1 11 61 231 681 1683
In this array, the numbers in the first row are all one, the numbers in the second row are the odd numbers, the numbers in the third row are the centered square numbers, and the numbers in the fourth row are the centered octahedral numbers. Alternatively, the same numbers can be arranged in a triangular array resembling Pascal's triangle, also called the tribonacci triangle, in which each number is the sum of the three numbers above it:
1
1 1
1 3 1
1 5 5 1
1 7 13 7 1
1 9 25 25 9 1
1 11 41 63 41 11 1
Central Delannoy numbers. The central Delannoy numbers "D"("n") = "D"("n","n") are the numbers for a square "n" × "n" grid. The first few central Delannoy numbers (starting with "n"=0) are: 1, 3, 13, 63, 321, 1683, 8989, 48639, 265729, ... (sequence in the OEIS). Computation. Delannoy numbers. For formula_4 diagonal (i.e. northeast) steps, there must be formula_5 steps in the formula_6 direction and formula_7 steps in the formula_8 direction in order to reach the point formula_9; as these steps can be performed in any order, the number of such paths is given by the multinomial coefficient formula_10. Hence, one gets the closed-form expression formula_11 An alternative expression is given by formula_12 or by the infinite series formula_13 And also formula_14 where formula_15 is given by (sequence in the OEIS). The basic recurrence relation for the Delannoy numbers is easily seen to be formula_16 This recurrence relation also leads directly to the generating function formula_17 Central Delannoy numbers. Substituting formula_18 in the first closed form expression above, replacing formula_19, and a little algebra, gives formula_20 while the second expression above yields formula_21 The central Delannoy numbers satisfy also a three-term recurrence relationship among themselves, formula_22 and have a generating function formula_23 The leading asymptotic behavior of the central Delannoy numbers is given by formula_24 where formula_25 and formula_26. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
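The recurrence relation above translates directly into a short program. The following Python sketch (illustrative only) reproduces D(3,3) = 63 and the first few central Delannoy numbers:

from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(m, n):
    # paths from (0, 0) to (m, n) using single north, east and northeast steps
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m - 1, n - 1) + delannoy(m, n - 1)

print(delannoy(3, 3))                        # 63
print([delannoy(k, k) for k in range(6)])    # [1, 3, 13, 63, 321, 1683]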
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": " D(m,n) " }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": " k " }, { "math_id": 5, "text": " m-k " }, { "math_id": 6, "text": " x " }, { "math_id": 7, "text": " n-k " }, { "math_id": 8, "text": " y " }, { "math_id": 9, "text": " (m, n) " }, { "math_id": 10, "text": " \\binom{m+n-k}{k , m-k , n-k} = \\binom{m+n-k}{m} \\binom{m}{k} " }, { "math_id": 11, "text": " D(m,n) = \\sum_{k=0}^{\\min(m,n)} \\binom{m+n-k}{m} \\binom{m}{k} . " }, { "math_id": 12, "text": " D(m,n) = \\sum_{k=0}^{\\min(m,n)} \\binom{m}{k} \\binom{n}{k} 2^k " }, { "math_id": 13, "text": " D(m,n) = \\sum_{k=0}^\\infty \\frac{1}{2^{k+1}} \\binom{k}{n} \\binom{k}{m}. " }, { "math_id": 14, "text": " D(m,n) = \\sum_{k=0}^{n} A(m,k), " }, { "math_id": 15, "text": " A(m,k) " }, { "math_id": 16, "text": "D(m,n)=\\begin{cases}1 &\\text{if }m=0\\text{ or }n=0\\\\D(m-1,n) + D(m-1,n-1) + D(m,n-1)&\\text{otherwise}\\end{cases}" }, { "math_id": 17, "text": " \\sum_{m,n = 0}^\\infty D(m, n) x^m y^n = (1 - x - y - xy)^{-1} . " }, { "math_id": 18, "text": " m = n " }, { "math_id": 19, "text": " k \\leftrightarrow n-k " }, { "math_id": 20, "text": " D(n) = \\sum_{k=0}^n \\binom{n}{k} \\binom{n+k}{k} , " }, { "math_id": 21, "text": " D(n) = \\sum_{k=0}^n \\binom{n}{k}^2 2^k . " }, { "math_id": 22, "text": " n D(n) = 3(2n-1)D(n-1) - (n-1)D(n-2) , " }, { "math_id": 23, "text": " \\sum_{n = 0}^\\infty D(n) x^n = (1-6x+x^2)^{-1/2} . " }, { "math_id": 24, "text": " D(n) = \\frac{c \\, \\alpha^n}{\\sqrt{n}} \\, (1 + O(n^{-1})) " }, { "math_id": 25, "text": " \\alpha = 3 + 2 \\sqrt{2} \\approx 5.828 " }, { "math_id": 26, "text": " c = (4 \\pi (3 \\sqrt{2} - 4))^{-1/2} \\approx 0.5727 " } ]
https://en.wikipedia.org/wiki?curid=14736250
14738000
Extraneous and missing solutions
In mathematics, an extraneous solution (or spurious solution) is one which emerges from the process of solving a problem but is not a valid solution to it. A missing solution is a valid one which is lost during the solution process. Both situations frequently result from performing operations that are not invertible for some or all values of the variables involved, which prevents the chain of logical implications from being bidirectional. Extraneous solutions: multiplication. One of the basic principles of algebra is that one can multiply both sides of an equation by the same expression without changing the equation's solutions. However, strictly speaking, this is not true, in that multiplication by certain expressions may introduce new solutions that were not present before. For example, consider the following equation: formula_0 If we multiply both sides by zero, we get, formula_1 This is true for all values of formula_2, so the solution set is all real numbers. But clearly not all real numbers are solutions to the original equation. The problem is that multiplication by zero is not "invertible": if we multiply by any nonzero value, we can reverse the step by dividing by the same value, but division by zero is not defined, so multiplication by zero cannot be reversed. More subtly, suppose we take the same equation and multiply both sides by formula_2. We get formula_3 formula_4 This quadratic equation has two solutions: formula_5 and formula_6 But if formula_7 is substituted for formula_2 in the original equation, the result is the invalid equation formula_8. This counterintuitive result occurs because in the case where formula_9, multiplying both sides by formula_2 multiplies both sides by zero, and so necessarily produces a true equation just as in the first example. In general, whenever we multiply both sides of an equation by an expression involving variables, we introduce extraneous solutions wherever that expression is equal to zero. But it is not sufficient to exclude these values, because they may have been legitimate solutions to the original equation. For example, suppose we multiply both sides of our original equation formula_10 by formula_11 We get formula_12 formula_13 which has only one real solution: formula_5. This is a solution to the original equation so cannot be excluded, even though formula_10 for this value of formula_2. Extraneous solutions: rational. Extraneous solutions can arise naturally in problems involving fractions with variables in the denominator. For example, consider this equation: formula_14 To begin solving, we multiply each side of the equation by the least common denominator of all the fractions contained in the equation. In this case, the least common denominator is formula_15. After performing these operations, the fractions are eliminated, and the equation becomes: formula_16 Solving this yields the single solution formula_17 However, when we substitute the solution back into the original equation, we obtain: formula_18 The equation then becomes: formula_19 This equation is not valid, since one cannot divide by zero. Therefore, the solution formula_5 is extraneous and not valid, and the original equation has no solution. For this specific example, it could be recognized that (for the value formula_5), the operation of multiplying by formula_15 would be a multiplication by zero. However, it is not always simple to evaluate whether each operation already performed was allowed by the final answer. 
Because of this, often, the only simple effective way to deal with multiplication by expressions involving variables is to substitute each of the solutions obtained into the original equation and confirm that this yields a valid equation. After discarding solutions that yield an invalid equation, we will have the correct set of solutions. In some cases, as in the above example, all solutions may be discarded, in which case the original equation has no solution. Missing solutions: division. Extraneous solutions are not too difficult to deal with because they just require checking all solutions for validity. However, more insidious are missing solutions, which can occur when performing operations on expressions that are invalid for certain values of those expressions. For example, if we were solving the following equation, the correct solution is obtained by subtracting formula_20 from both sides, then dividing both sides by formula_21: formula_22 formula_23 formula_17 By analogy, we might suppose we can solve the following equation by subtracting formula_24 from both sides, then dividing by formula_2: formula_25 formula_26 formula_17 The solution formula_5 is in fact a valid solution to the original equation; but the other solution, formula_9, has disappeared. The problem is that we divided both sides by formula_2, which involves the indeterminate operation of dividing by zero when formula_6 It is generally possible (and advisable) to avoid dividing by any expression that can be zero; however, where this is necessary, it is sufficient to ensure that any values of the variables that make it zero also fail to satisfy the original equation. For example, suppose we have this equation: formula_0 It is valid to divide both sides by formula_27, obtaining the following equation: formula_28 This is valid because the only value of formula_2 that makes formula_27 equal to zero is formula_29 which is not a solution to the original equation. In some cases we are not interested in certain solutions; for example, we may only want solutions where formula_2 is positive. In this case it is okay to divide by an expression that is only zero when formula_2 is zero or negative, because this can only remove solutions we do not care about. Other operations. Multiplication and division are not the only operations that can modify the solution set. For example, take the problem: formula_30 If we take the positive square root of both sides, we get: formula_31 We are not taking the square root of any negative values here, since both formula_32 and formula_20 are necessarily positive. But we have lost the solution formula_17 The reason is that formula_2 is actually not in general the "positive" square root of formula_33 If formula_2 is negative, the positive square root of formula_32 is formula_34 If the step is taken correctly, it leads instead to the equation: formula_35 formula_36 formula_37 This equation has the same two solutions as the original one: formula_38 and formula_17 We can also modify the solution set by squaring both sides, because this will make any negative values in the ranges of the equation positive, causing extraneous solutions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
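In practice the check described above can be automated: substitute each candidate back into the original equation and discard any candidate for which the substitution is undefined or false. The following Python sketch is illustrative only (the helper name is arbitrary) and applies the idea to the two examples above:

def valid_solutions(candidates, lhs, rhs, tol=1e-9):
    # keep only candidates that actually satisfy lhs(x) = rhs(x)
    valid = []
    for x in candidates:
        try:
            if abs(lhs(x) - rhs(x)) < tol:
                valid.append(x)
        except ZeroDivisionError:
            pass   # substitution is undefined, so the candidate is extraneous
    return valid

# 1/(x - 2) = 3/(x + 2) - 6x/((x - 2)(x + 2)); clearing denominators suggests x = -2
print(valid_solutions([-2],
                      lambda x: 1 / (x - 2),
                      lambda x: 3 / (x + 2) - 6 * x / ((x - 2) * (x + 2))))   # [] : extraneous

# x^2 + 2x = 0 has candidates 0 and -2, and both survive the check
print(valid_solutions([0, -2], lambda x: x ** 2 + 2 * x, lambda x: 0))        # [0, -2]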
[ { "math_id": 0, "text": "x+2=0." }, { "math_id": 1, "text": "0=0." }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "x(x+2)=(0)x," }, { "math_id": 4, "text": "x^2+2x=0." }, { "math_id": 5, "text": "x=-2" }, { "math_id": 6, "text": "x=0." }, { "math_id": 7, "text": "0" }, { "math_id": 8, "text": "2=0" }, { "math_id": 9, "text": "x=0" }, { "math_id": 10, "text": "x+2=0" }, { "math_id": 11, "text": "x+2." }, { "math_id": 12, "text": "(x+2)(x+2)=0(x+2)," }, { "math_id": 13, "text": "x^2+4x+4=0," }, { "math_id": 14, "text": "\\frac{1}{x - 2} = \\frac{3}{x + 2} - \\frac{6x}{(x - 2)(x + 2)}\\,." }, { "math_id": 15, "text": "(x - 2)(x + 2)" }, { "math_id": 16, "text": "x + 2 = 3(x - 2) - 6x\\,." }, { "math_id": 17, "text": "x=-2." }, { "math_id": 18, "text": "\\frac{1}{-2 - 2} = \\frac{3}{-2 + 2} - \\frac{6(-2)}{(-2 - 2)(-2 + 2)}\\,." }, { "math_id": 19, "text": "\\frac{1}{-4} = \\frac{3}{0} + \\frac{12}{0}\\,." }, { "math_id": 20, "text": "4" }, { "math_id": 21, "text": "2" }, { "math_id": 22, "text": "2x+4=0," }, { "math_id": 23, "text": "2x=-4," }, { "math_id": 24, "text": "2x" }, { "math_id": 25, "text": "x^2+2x=0," }, { "math_id": 26, "text": "x^2=-2x," }, { "math_id": 27, "text": "x-2" }, { "math_id": 28, "text": "\\frac{x+2}{x-2}=0." }, { "math_id": 29, "text": "x=2," }, { "math_id": 30, "text": "x^2 = 4." }, { "math_id": 31, "text": "x = 2." }, { "math_id": 32, "text": "x^2" }, { "math_id": 33, "text": "x^2." }, { "math_id": 34, "text": "-x. " }, { "math_id": 35, "text": "\\sqrt{x^2} = \\sqrt{4}." }, { "math_id": 36, "text": "|x| = 2." }, { "math_id": 37, "text": "x = \\pm 2." }, { "math_id": 38, "text": "x=2" } ]
https://en.wikipedia.org/wiki?curid=14738000
14743376
Hardy–Ramanujan theorem
Analytic number theory In mathematics, the Hardy–Ramanujan theorem, proved by Ramanujan and checked by Hardy, states that the normal order of the number formula_0 of distinct prime factors of a number formula_1 is formula_2. Roughly speaking, this means that most numbers have about this number of distinct prime factors. Precise statement. A more precise version states that for every real-valued function formula_3 that tends to infinity as formula_1 tends to infinity, formula_4 or, more traditionally, formula_5 for "almost all" (all but an infinitesimal proportion of) integers. That is, let formula_6 be the number of positive integers formula_1 less than formula_7 for which the above inequality fails: then formula_8 converges to zero as formula_7 goes to infinity. History. A simple proof of the result was given by Pál Turán, who used the Turán sieve to prove that formula_9 Generalizations. The same results are true of formula_10, the number of prime factors of formula_1 counted with multiplicity. This theorem is generalized by the Erdős–Kac theorem, which shows that formula_0 is essentially normally distributed. There are many proofs of this, including the method of moments (Granville & Soundararajan) and Stein's method (Harper). It was shown by Durkan that a modified version of Turán's result allows one to prove the Hardy–Ramanujan theorem with any even moment. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
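The statement can be checked empirically. The following Python sketch (illustrative only) sieves the number of distinct prime factors for all n below one million and compares the average with log log n:

import math

def omega_up_to(limit):
    # omega[n] = number of distinct prime factors of n, for n < limit
    omega = [0] * limit
    for p in range(2, limit):
        if omega[p] == 0:             # p is prime
            for multiple in range(p, limit, p):
                omega[multiple] += 1
    return omega

limit = 10 ** 6
w = omega_up_to(limit)
print(round(sum(w[2:]) / (limit - 2), 3), round(math.log(math.log(limit)), 3))
# roughly 2.9 versus 2.6: the average exceeds log log n only by a bounded constant,
# and the values concentrate around log log n, as the theorem asserts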
[ { "math_id": 0, "text": "\\omega(n)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\log\\log n" }, { "math_id": 3, "text": "\\psi(n)" }, { "math_id": 4, "text": "|\\omega(n)-\\log\\log n|<\\psi(n)\\sqrt{\\log\\log n}" }, { "math_id": 5, "text": "|\\omega(n)-\\log\\log n|<{(\\log\\log n)}^{\\frac12 +\\varepsilon}" }, { "math_id": 6, "text": "g(x)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "g(x)/x" }, { "math_id": 9, "text": "\\sum_{n \\le x} | \\omega(n) - \\log\\log x|^2 \\ll x \\log\\log x . " }, { "math_id": 10, "text": "\\Omega(n)" } ]
https://en.wikipedia.org/wiki?curid=14743376
1474467
Compton wavelength
Quantum mechanical property of particles The Compton wavelength is a quantum mechanical property of a particle, defined as the wavelength of a photon whose energy is the same as the rest energy of that particle (see mass–energy equivalence). It was introduced by Arthur Compton in 1923 in his explanation of the scattering of photons by electrons (a process known as Compton scattering). The standard Compton wavelength λ of a particle of mass formula_0 is given by formula_1 where h is the Planck constant and c is the speed of light. The corresponding frequency f is given by formula_2 and the angular frequency ω is given by formula_3 The CODATA 2018 value for the Compton wavelength of the electron is . Other particles have different Compton wavelengths. Reduced Compton wavelength. The reduced Compton wavelength "ƛ" (barred lambda, denoted below by formula_4) is defined as the Compton wavelength divided by 2"π": formula_5 where "ħ" is the reduced Planck constant. Role in equations for massive particles. The inverse reduced Compton wavelength is a natural representation for mass on the quantum scale, and as such, it appears in many of the fundamental equations of quantum mechanics. The reduced Compton wavelength appears in the relativistic Klein–Gordon equation for a free particle: formula_6 It appears in the Dirac equation (the following is an explicitly covariant form employing the Einstein summation convention): formula_7 The reduced Compton wavelength is also present in Schrödinger's equation, although this is not readily apparent in traditional representations of the equation. The following is the traditional representation of Schrödinger's equation for an electron in a hydrogen-like atom: formula_8 Dividing through by formula_9 and rewriting in terms of the fine-structure constant, one obtains: formula_10 Distinction between reduced and non-reduced. The reduced Compton wavelength is a natural representation of mass on the quantum scale and is used in equations that pertain to inertial mass, such as the Klein–Gordon and Schrödinger's equations. Equations that pertain to the wavelengths of photons interacting with mass use the non-reduced Compton wavelength. A particle of mass "m" has a rest energy of "E" = "mc"2. The Compton wavelength for this particle is the wavelength of a photon of the same energy. For photons of frequency "f", energy is given by formula_11 which yields the Compton wavelength formula if solved for "λ". Limitation on measurement. The Compton wavelength expresses a fundamental limitation on measuring the position of a particle, taking into account quantum mechanics and special relativity. This limitation depends on the mass "m" of the particle. To see how, note that we can measure the position of a particle by bouncing light off it – but measuring the position accurately requires light of short wavelength. Light with a short wavelength consists of photons of high energy. If the energy of these photons exceeds "mc"2, when one hits the particle whose position is being measured the collision may yield enough energy to create a new particle of the same type. This renders moot the question of the original particle's location. This argument also shows that the reduced Compton wavelength is the cutoff below which quantum field theory – which can describe particle creation and annihilation – becomes important. The above argument can be made a bit more precise as follows. Suppose we wish to measure the position of a particle to within an accuracy Δ"x". 
Then the uncertainty relation for position and momentum says that formula_12 so the uncertainty in the particle's momentum satisfies formula_13 Using the relativistic relation between momentum and energy "E"2 = ("pc")2 + ("mc"2)2, when Δ"p" exceeds "mc" then the uncertainty in energy is greater than "mc"2, which is enough energy to create another particle of the same type. But we must exclude this greater energy uncertainty. Physically, this is excluded by the creation of one or more additional particles to keep the momentum uncertainty of each particle at or below "mc". In particular the minimum uncertainty is when the scattered photon has limit energy equal to the incident observing energy. It follows that there is a fundamental minimum for Δ"x": formula_14 Thus the uncertainty in position must be greater than half of the reduced Compton wavelength "ħ"/"mc". Relationship to other constants. Typical atomic lengths, wave numbers, and areas in physics can be related to the reduced Compton wavelength for the electron (formula_15) and the electromagnetic fine-structure constant (formula_16). The Bohr radius is related to the Compton wavelength by: formula_17 The classical electron radius is about 3 times larger than the proton radius, and is written: formula_18 The Rydberg constant, having dimensions of linear wavenumber, is written: formula_19 formula_20 This yields the sequence: formula_21 For fermions, the reduced Compton wavelength sets the cross-section of interactions. For example, the cross-section for Thomson scattering of a photon from an electron is equal to formula_22 which is roughly the same as the cross-sectional area of an iron-56 nucleus. For gauge bosons, the Compton wavelength sets the effective range of the Yukawa interaction: since the photon has no mass, electromagnetism has infinite range. The Planck mass is the order of mass for which the Compton wavelength and the Schwarzschild radius formula_23 are the same, when their value is close to the Planck length (formula_24). The Schwarzschild radius is proportional to the mass, whereas the Compton wavelength is proportional to the inverse of the mass. The Planck mass and length are defined by: formula_25 formula_26 Geometrical interpretation. A geometrical origin of the Compton wavelength has been demonstrated using semiclassical equations describing the motion of a wavepacket. In this case, the Compton wavelength is equal to the square root of the quantum metric, a metric describing the quantum space: formula_27. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
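The numerical relations in this section can be reproduced directly from published constants. In the following Python sketch (illustrative only), h and c are exact by definition in the SI, while the electron mass and fine-structure constant are the CODATA 2018 recommended values:

import math

h = 6.62607015e-34          # Planck constant, J s (exact)
c = 299792458.0             # speed of light, m/s (exact)
m_e = 9.1093837015e-31      # electron mass, kg
alpha = 7.2973525693e-3     # fine-structure constant
hbar = h / (2 * math.pi)

lam = h / (m_e * c)         # Compton wavelength of the electron
lam_bar = hbar / (m_e * c)  # reduced Compton wavelength

print(lam)                  # about 2.43e-12 m
print(lam_bar)              # about 3.86e-13 m
print(lam_bar / alpha)      # about 5.29e-11 m, the Bohr radius
print(alpha * lam_bar)      # about 2.82e-15 m, the classical electron radius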
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": " \\lambda = \\frac{h}{m c}, " }, { "math_id": 2, "text": "f = \\frac{m c^2}{h}," }, { "math_id": 3, "text": " \\omega = \\frac{m c^2}{\\hbar}." }, { "math_id": 4, "text": "\\bar\\lambda" }, { "math_id": 5, "text": "\\bar\\lambda = \\frac{\\lambda}{2 \\pi} = \\frac{\\hbar}{m c}," }, { "math_id": 6, "text": " \\mathbf{\\nabla}^2\\psi-\\frac{1}{c^2}\\frac{\\partial^2}{\\partial t^2}\\psi = \\left(\\frac{m c}{\\hbar} \\right)^2 \\psi." }, { "math_id": 7, "text": "-i \\gamma^\\mu \\partial_\\mu \\psi + \\left( \\frac{m c}{\\hbar} \\right) \\psi = 0." }, { "math_id": 8, "text": " i\\hbar\\frac{\\partial}{\\partial t}\\psi=-\\frac{\\hbar^2}{2m}\\nabla^2\\psi -\\frac{1}{4 \\pi \\epsilon_0} \\frac{Ze^2}{r} \\psi." }, { "math_id": 9, "text": "\\hbar c" }, { "math_id": 10, "text": "\\frac{i}{c}\\frac{\\partial}{\\partial t}\\psi=-\\frac{\\bar{\\lambda}}{2} \\nabla^2\\psi - \\frac{\\alpha Z}{r} \\psi." }, { "math_id": 11, "text": " E = h f = \\frac{h c}{\\lambda} = m c^2, " }, { "math_id": 12, "text": "\\Delta x\\,\\Delta p\\ge \\frac{\\hbar}{2}," }, { "math_id": 13, "text": "\\Delta p \\ge \\frac{\\hbar}{2\\Delta x}." }, { "math_id": 14, "text": "\\Delta x \\ge \\frac{1}{2} \\left(\\frac{\\hbar}{mc} \\right)." }, { "math_id": 15, "text": "\\bar{\\lambda}_\\text{e} \\equiv \\tfrac{\\lambda_\\text{e}}{2\\pi}\\simeq 386~\\textrm{fm}" }, { "math_id": 16, "text": "\\alpha\\simeq\\tfrac{1}{137}" }, { "math_id": 17, "text": "a_0 = \\frac{1}{\\alpha}\\left(\\frac{\\lambda_\\text{e}}{2\\pi}\\right) = \\frac{\\bar{\\lambda}_\\text{e}}{\\alpha} \\simeq 137\\times\\bar{\\lambda}_\\text{e}\\simeq 5.29\\times 10^4~\\textrm{fm} " }, { "math_id": 18, "text": "r_\\text{e} = \\alpha\\left(\\frac{\\lambda_\\text{e}}{2\\pi}\\right) = \\alpha\\bar{\\lambda}_\\text{e} \\simeq\\frac{\\bar{\\lambda}_\\text{e}}{137}\\simeq 2.82~\\textrm{fm}" }, { "math_id": 19, "text": "\\frac{1}{R_\\infty}=\\frac{2\\lambda_\\text{e}}{\\alpha^2} \\simeq 91.1~\\textrm{nm}" }, { "math_id": 20, "text": "\\frac{1}{2\\pi R_\\infty} = \\frac{2}{\\alpha^2}\\left(\\frac{\\lambda_\\text{e}}{2\\pi}\\right) = 2 \\frac{\\bar{\\lambda}_\\text{e}}{\\alpha^2} \\simeq 14.5~\\textrm{nm}" }, { "math_id": 21, "text": "r_{\\text{e}} = \\alpha \\bar{\\lambda}_{\\text{e}} = \\alpha^2 a_0 = \\alpha^3 \\frac{1}{4\\pi R_\\infty}." }, { "math_id": 22, "text": "\\sigma_\\mathrm{T} = \\frac{8\\pi}{3}\\alpha^2\\bar{\\lambda}_\\text{e}^2 \\simeq 66.5~\\textrm{fm}^2 ," }, { "math_id": 23, "text": " r_{\\rm S} = 2 G M /c^2 " }, { "math_id": 24, "text": "l_{\\rm P}" }, { "math_id": 25, "text": "m_{\\rm P} = \\sqrt{\\hbar c/G}" }, { "math_id": 26, "text": "l_{\\rm P} = \\sqrt{\\hbar G /c^3}." }, { "math_id": 27, "text": "\\sqrt{g_{kk}}=\\lambda_\\mathrm{C}" } ]
https://en.wikipedia.org/wiki?curid=1474467
1474524
Context mixing
Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions. For example, one simple method (not necessarily the best) is to average the probabilities assigned by each model. The random forest is another method: it outputs the prediction that is the mode of the predictions output by individual models. Combining models is an active area of research in machine learning. The PAQ series of data compression programs use context mixing to assign probabilities to individual bits of the input. Application to Data Compression. Suppose that we are given two conditional probabilities, formula_0 and formula_1, and we wish to estimate formula_2, the probability of event X given both conditions formula_3 and formula_4. There is insufficient information for probability theory to give a result. In fact, it is possible to construct scenarios in which the result could be anything at all. But intuitively, we would expect the result to be some kind of average of the two. The problem is important for data compression. In this application, formula_3 and formula_4 are contexts, formula_5 is the event that the next bit or symbol of the data to be compressed has a particular value, and formula_0 and formula_1 are the probability estimates by two independent models. The compression ratio depends on how closely the estimated probability approaches the true but unknown probability of event formula_5. It is often the case that contexts formula_3 and formula_4 have occurred often enough to accurately estimate formula_0 and formula_1 by counting occurrences of formula_5 in each context, but the two contexts either have not occurred together frequently, or there are insufficient computing resources (time and memory) to collect statistics for the combined case. For example, suppose that we are compressing a text file. We wish to predict whether the next character will be a linefeed, given that the previous character was a period (context formula_3) and that the last linefeed occurred 72 characters ago (context formula_4). Suppose that a linefeed previously occurred after 1 of the last 5 periods (formula_6) and in 5 out of the last 10 lines at column 72 (formula_7). How should these predictions be combined? Two general approaches have been used, linear and logistic mixing. Linear mixing uses a weighted average of the predictions weighted by evidence. In this example, formula_1 gets more weight than formula_0 because formula_1 is based on a greater number of tests. Older versions of PAQ use this approach. Newer versions use logistic (or neural network) mixing by first transforming the predictions into the logistic domain, log(p/(1-p)), before averaging. This effectively gives greater weight to predictions near 0 or 1, in this case formula_0. In both cases, additional weights may be given to each of the input models and adapted to favor the models that have given the most accurate predictions in the past. All but the oldest versions of PAQ use adaptive weighting. Most context mixing compressors predict one bit of input at a time. The output probability is simply the probability that the next bit will be a 1. Linear Mixing. We are given a set of predictions Pi(1) = n1i/ni, where ni = n0i + n1i, and n0i and n1i are the counts of 0 and 1 bits respectively for the i'th model.
The probabilities are computed by weighted addition of the 0 and 1 counts: The weights wi are initially equal and always sum to 1. Under the initial conditions, each model is weighted in proportion to evidence. The weights are then adjusted to favor the more accurate models. Suppose we are given that the actual bit being predicted is y (0 or 1). Then the weight adjustment is: Compression can be improved by bounding ni so that the model weighting is better balanced. In PAQ6, whenever one of the bit counts is incremented, the part of the other count that exceeds 2 is halved. For example, after the sequence 000000001, the counts would go from (n0, n1) = (8, 0) to (5, 1). Logistic Mixing. Let Pi(1) be the prediction by the i'th model that the next bit will be a 1. Then the final prediction P(1) is calculated: where P(1) is the probability that the next bit will be a 1, Pi(1) is the probability estimated by the "i'th" model, and After each prediction, the model is updated by adjusting the weights to minimize coding cost. where η is the learning rate (typically 0.002 to 0.01), "y" is the predicted bit, and (y - P(1)) is the prediction error. List of Context Mixing Compressors. All versions below use logistic mixing unless otherwise indicated. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
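To make the logistic-mixing step concrete, the following Python sketch illustrates the idea only; it is not the PAQ source code, which differs in details such as fixed-point arithmetic and context-selected weight sets. It combines two model predictions and adapts the weights as described above, taking stretch(p) = ln(p/(1 - p)) and squash as its inverse:

import math

def stretch(p):
    return math.log(p / (1.0 - p))

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))

def mix_and_update(predictions, weights, y, eta=0.002):
    # predictions: each model's P(next bit = 1); weights: adapted in place; y: observed bit
    x = [stretch(p) for p in predictions]
    p1 = squash(sum(w * xi for w, xi in zip(weights, x)))
    error = y - p1
    for i, xi in enumerate(x):
        weights[i] += eta * xi * error
    return p1

weights = [0.3, 0.3]
print(mix_and_update([0.2, 0.5], weights, y=1))  # mixed P(1), a little below 0.4
print(weights)  # the model that predicted 0.2 is penalised because the observed bit was 1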
[ { "math_id": 0, "text": "P(X|A)" }, { "math_id": 1, "text": "P(X|B)" }, { "math_id": 2, "text": "P(X|A,B)" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "P(X|A=0.2" }, { "math_id": 7, "text": "P(X|B)=0.5" } ]
https://en.wikipedia.org/wiki?curid=1474524
14745714
Pipe insulation
Pipe Insulation is thermal or acoustic insulation used on pipework. Applications. Condensation control. Where pipes operate at below-ambient temperatures, the potential exists for water vapour to condense on the pipe surface. Moisture is known to contribute towards many different types of corrosion, so preventing the formation of condensation on pipework is usually considered important. Pipe insulation can prevent condensation forming, as the surface temperature of the insulation will vary from the surface temperature of the pipe. Condensation will not occur, provided that (a) the insulation surface is above the dewpoint temperature of the air; and (b) the insulation incorporates some form of water-vapour barrier or retarder that prevents water vapour from passing through the insulation to form on the pipe surface. Pipe freezing. Since some water pipes are located either outside or in unheated areas where the ambient temperature may occasionally drop below the freezing point of water, any water in the pipework may potentially freeze. When water freezes it expands and this expansion can cause failure of a pipe system in any one of a number of ways. Pipe insulation cannot prevent the freezing of standing water in pipework, but it can increase the time required for freezing to occur—thereby reducing the risk of the water in the pipes freezing. For this reason, it is recommended to insulate pipework at risk of freezing, and local water-supply regulations may require pipe insulation be applied to pipework to reduce the risk of pipe freezing. For a given length, a smaller-bore pipe holds a smaller volume of water than a larger-bore pipe, and therefore water in a smaller-bore pipe will freeze more easily (and more quickly) than water in a larger-bore pipe (presuming equivalent environments). Since smaller-bore pipes present a greater risk of freezing, insulation is typically used in combination with alternative methods of freeze prevention (e.g., modulating trace heating cable, or ensuring a consistent flow of water through the pipe). Energy saving. Since pipework can operate at temperatures far removed from the ambient temperature, and the rate of heat flow from a pipe is related to the temperature differential between the pipe and the surrounding ambient air, heat flow from pipework can be considerable. In many situations, this heat flow is undesirable. The application of thermal pipe insulation introduces thermal resistance and reduces the heat flow. Thicknesses of thermal pipe insulation used for saving energy vary, but as a general rule, pipes operating at more-extreme temperatures exhibit a greater heat flow and larger thicknesses are applied due to the greater potential savings. The location of pipework also influences the selection of insulation thickness. For instance, in some circumstances, heating pipework within a well-insulated building might not require insulation, as the heat that's "lost" (i.e., the heat that flows from the pipe to the surrounding air) may be considered “useful” for heating the building, as such "lost" heat would be effectively trapped by the structural insulation anyway. Conversely, such pipework may be insulated to prevent overheating or unnecessary cooling in the rooms through which it passes. Protection against extreme temperatures. Where pipework is operating at extremely high or low temperatures, the potential exists for injury to occur should any person come into physical contact with the pipe surface. 
The threshold for human pain varies, but several international standards set recommended touch temperature limits. Since the surface temperature of insulation varies from the temperature of the pipe surface, typically such that the insulation surface has a "less extreme" temperature, pipe insulation can be used to bring surface touch temperatures into a safe range. Control of noise. Pipework can operate as a conduit for noise to travel from one part of a building to another (a typical example of this can be seen with waste-water pipework routed within a building). Acoustic insulation can prevent this noise transfer by acting to damp the pipe wall and performing an acoustic decoupling function wherever the pipe passes through a fixed wall or floor and wherever the pipe is mechanically fixed. Pipework can also radiate mechanical noise. In such circumstances, the breakout of noise from the pipe wall can be achieved by acoustic insulation incorporating a high-density sound barrier. Factors influencing performance. The relative performance of different pipe insulation on any given application can be influenced by many factors. The principal factors are: Other factors, such as the level of moisture content and the opening of joints, can influence the overall performance of pipe insulation. Many of these factors are listed in the international standard EN ISO 23993. Materials. Pipe insulation materials come in a large variety of forms, but most materials fall into one of the following categories. Mineral wool. Mineral wools, including rock and slag wools, are inorganic strands of mineral fibre bonded together using organic binders. Mineral wools are capable of operating at high temperatures and exhibit good fire performance ratings when tested. Mineral wools are used on all types of pipework, particularly industrial pipework operating at higher temperatures. Glass wool. Glass wool is a high-temperature fibrous insulation material, similar to mineral wool, where inorganic strands of glass fibre are bound together using a binder. As with other forms of mineral wool, glass-wool insulation can be used for thermal and acoustic applications. Flexible elastomeric foams. These are flexible, closed-cell, rubber foams based on NBR or EPDM rubber. Flexible elastomeric foams exhibit such a high resistance to the passage of water vapour that they do not generally require additional water-vapour barriers. Such high vapour resistance, combined with the high surface emissivity of rubber, allows flexible elastomeric foams to prevent surface condensation formation with comparatively small thicknesses. As a result, flexible elastomeric foams are widely used on refrigeration and air-conditioning pipework. Flexible elastomeric foams are also used on heating and hot-water systems. Rigid foam. Pipe insulation made from rigid Phenolic, PIR, or PUR foam insulation is common in some countries. Rigid-foam insulation has minimal acoustic performance but can exhibit low thermal-conductivity values of 0.021 W/(m·K) or lower, allowing energy-saving legislation to be met whilst using reduced insulation thicknesses. Polyethylene. Polyethylene is a flexible plastic foamed insulation that is widely used to prevent freezing of domestic water supply pipes and to reduce heat loss from domestic heating pipes. The fire performance of Polyethylene is typically 25/50 E84 compliant up to 1" thickness. Cellular Glass. 100% Glass manufactured primarily from sand, limestone &amp; soda ash. 
Cellular insulations are composed of small individual cells either interconnecting or sealed from each other to form a cellular structure. Glass, plastics, and rubber may comprise the base material and a variety of foaming agents are used. Cellular insulations are often further classified as either open cell (cells are interconnecting) or closed cell (cells sealed from each other). Generally, materials that have greater than 90% closed cell content are considered to be closed cell materials. Aerogel. Silica Aerogel insulation has the lowest thermal conductivity of any commercially produced insulation. Although no manufacturer currently manufactures Aerogel pipe sections, it is possible to wrap Aerogel blanket around pipework, allowing it to function as pipe insulation. The usage of Aerogel for pipe insulation is currently limited. Heat flow calculations and R-value. Heat flow passing through pipe insulation can be calculated by following the equations set out in either the ASTM C 680 or EN ISO 12241 standards. Heat flux is given by the following equation: formula_0 Where: In order to calculate heat flow, it is first necessary to calculate the thermal resistance ("R-value") for each layer of insulation. For pipe insulation, the R-value varies not only with the insulation thickness and thermal conductivity ("k-value") but also with the pipe outer diameter and the average material temperature. For this reason, it is more common to use the thermal conductivity value when comparing the effectiveness of pipe insulation, and R-values of pipe insulation are not covered by the US FTC R-value rule . The thermal resistance of each insulation layer is calculated using the following equation: formula_4 Where: Calculating the heat transfer resistance of the inner- and outer-insulation surfaces is more complex and requires the calculation of the internal- and external-surface coefficients of heat transfer. Equations for calculating this are based on empirical results and vary from standard to standard (both ASTM C 680 and EN ISO 12241 contain equations for estimating surface coefficients of heat transfer). A number of organisations such as the North American Insulation Manufacturers Association and Firo Insulation offer free programs that allow the calculation of heat flow through pipe insulation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
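As a rough illustration of the calculation described above, the heat flux through a single insulation layer can be estimated in Python. This is a sketch only: it uses the layer-resistance formula as stated, ignores the inner- and outer-surface heat-transfer coefficients, and the pipe dimensions, conductivity, temperatures and reference diameter are invented for the example:

import math

def layer_resistance(d_inner, d_outer, conductivity, d_ref):
    # thermal resistance of one layer, following R = D_x ln(D_e / D_i) / lambda
    return d_ref * math.log(d_outer / d_inner) / conductivity

def heat_flux(t_medium, t_ambient, resistances):
    # q = (theta_i - theta_a) / R_T, with R_T the sum of the resistances
    return (t_medium - t_ambient) / sum(resistances)

# 60 mm pipe with 30 mm of insulation (lambda = 0.035 W/(m K)), 80 degC medium, 20 degC air,
# with the flux referred to the outer insulation surface
r = layer_resistance(0.060, 0.120, 0.035, d_ref=0.120)
print(round(r, 2))                           # resistance of the layer
print(round(heat_flux(80.0, 20.0, [r]), 1))  # heat flux at the reference surface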
[ { "math_id": 0, "text": " q = \\frac{ \\Theta_i - \\Theta_a }{ R_T }" }, { "math_id": 1, "text": "\\Theta_i" }, { "math_id": 2, "text": "\\Theta_a" }, { "math_id": 3, "text": "R_T" }, { "math_id": 4, "text": " R=\\frac{D_x \\ln(D_e / D_i)}{\\lambda}" }, { "math_id": 5, "text": "D_e" }, { "math_id": 6, "text": "D_i" }, { "math_id": 7, "text": "\\lambda" }, { "math_id": 8, "text": "D_x" } ]
https://en.wikipedia.org/wiki?curid=14745714
147460
Free variables and bound variables
Concept in mathematics or computer science In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer science, a variable may be said to be either free or bound. Some older books use the terms real variable and apparent variable for free variable and bound variable, respectively. A "free variable" is a notation (symbol) that specifies places in an expression where substitution may take place and is not a parameter of this or any container expression. The idea is related to a "placeholder" (a symbol that will later be replaced by some value), or a wildcard character that stands for an unspecified symbol. In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function. The term non-local variable is often a synonym in this context. An instance of a variable symbol is "bound", in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as, "...where formula_0 is a positive integer".) A variable symbol overall is bound if at least one occurrence of it is bound.pp.142--143 Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound,p.78 hence "free" and "bound" are at first defined for occurrences and then generalized over all occurrences of said variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value be a truth value or the numerical result of a calculation, or, more generally, an element of an image set of a function. While the domain of discourse in many contexts is understood, when an explicit range of values for the bound variable has not been given, it may be necessary to specify the domain in order to properly evaluate the expression. For example, consider the following expression in which both variables are bound by logical quantifiers: formula_1 This expression evaluates to "false" if the domain of formula_2 and formula_3 is the real numbers, but "true" if the domain is the complex numbers. The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis.p.17 Examples. Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps make these two concepts clearer than the definition would: In the expression formula_4 "n" is a free variable and "k" is a bound variable; consequently the value of this expression depends on the value of "n", but there is nothing called "k" on which it could depend. In the expression formula_5 "y" is a free variable and "x" is a bound variable; consequently the value of this expression depends on the value of "y", but there is nothing called "x" on which it could depend. In the expression formula_6 "x" is a free variable and "h" is a bound variable; consequently the value of this expression depends on the value of "x", but there is nothing called "h" on which it could depend. 
In the expression formula_7 "z" is a free variable and "x" and "y" are bound variables, associated with logical quantifiers; consequently the logical value of this expression depends on the value of "z", but there is nothing called "x" or "y" on which it could depend. More widely, in most proofs, bound variables are used. For example, the following proof shows that all squares of positive even integers are divisible by formula_8 Let formula_0 be a positive even integer. Then there is an integer formula_9 such that formula_10. Since formula_11, we have formula_12 divisible by formula_8 not only "k" but also "n" have been used as bound variables as a whole in the proof. Variable-binding operators. The following formula_13 are some common variable-binding operators. Each of them binds the variable x for some set S. Many of these are operators which act on functions of the bound variable. In more complicated contexts, such notations can become awkward and confusing. It can be useful to switch to notations which make the binding explicit, such as formula_14 for sums or formula_15 for differentiation. Formal explanation. Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science. In all cases, however, they are purely syntactic properties of expressions and variables in them. For this section we can summarize syntax by identifying an expression with a tree whose leaf nodes are variables, constants, function constants or predicate constants and whose non-leaf nodes are logical operators. This expression can then be determined by doing an inorder traversal of the tree. Variable-binding operators are logical operators that occur in almost every formal language. A binding operator Q takes two arguments: a variable "v" and an expression "P", and when applied to its arguments produces a new expression Q("v", "P"). The meaning of binding operators is supplied by the semantics of the language and does not concern us here. Variable binding relates three things: a variable "v", a location "a" for that variable in an expression and a non-leaf node "n" of the form Q("v", "P"). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding occurs when that location is below the node "n". In the lambda calculus, codice_0 is a bound variable in the term codice_1 and a free variable in the term codice_2. We say codice_0 is bound in codice_4 and free in codice_2. If codice_2 contains a subterm codice_7 then codice_0 is rebound in this term. This nested, inner binding of codice_0 is said to "shadow" the outer binding. Occurrences of codice_0 in codice_11 are free occurrences of the new codice_0. Variables bound at the top level of a program are technically free variables within the terms to which they are bound but are often treated specially because they can be compiled as fixed addresses. Similarly, an identifier bound to a recursive function is also technically a free variable within its own body but is treated specially. A "closed term" is one containing no free variables. Function expressions. To give an example from mathematics, consider an expression which defines a function formula_16 where "t" is an expression. "t" may contain some, all or none of the "x"1, …, "x""n" and it may contain other variables. In this case we say that function definition binds the variables "x"1, …, "x""n". 
In this manner, function definition expressions of the kind shown above can be thought of as "the" variable binding operator, analogous to the lambda expressions of lambda calculus. Other binding operators, like the summation sign, can be thought of as higher-order functions applying to a function. So, for example, the expression formula_17 could be treated as a notation for formula_18 where formula_19 is an operator with two parameters—a one-parameter function, and a set to evaluate that function over. The other operators listed above can be expressed in similar ways; for example, the universal quantifier formula_20 can be thought of as an operator that evaluates to the logical conjunction of the Boolean-valued function "P" applied over the (possibly infinite) set "S". Natural language. When analyzed in formal semantics, natural languages can be seen to have free and bound variables. In English, personal pronouns like "he", "she", "they", etc. can act as free variables. "Lisa found her book." In the sentence above, the possessive pronoun "her" is a free variable. It may refer to the previously mentioned "Lisa" or to any other female. In other words, "her book" could be referring to Lisa's book (an instance of coreference) or to a book that belongs to a different female (e.g. Jane's book). Whoever the referent of "her" is can be established according to the situational (i.e. pragmatic) context. The identity of the referent can be shown using coindexing subscripts where "i" indicates one referent and "j" indicates a second referent (different from "i"). Thus, the sentence "Lisa found her book" has the following interpretations: "Lisai found heri book." (interpretation #1: "her" = of "Lisa") "Lisai found herj book." (interpretation #2: "her" = of a female that is not Lisa) The distinction is not purely of academic interest, as some languages do actually have different forms for "heri" and "herj": for example, Norwegian and Swedish translate coreferent "heri" as "sin" and noncoreferent "herj" as "hennes". English does allow specifying coreference, but it is optional, as both interpretations of the previous example are valid (the ungrammatical interpretation is indicated with an asterisk): "Lisai found heri own book." (interpretation #1: "her" = of "Lisa") *"Lisai found herj own book." (interpretation #2: "her" = of a female that is not Lisa) However, reflexive pronouns, such as "himself", "herself", "themselves", etc., and reciprocal pronouns, such as "each other", act as bound variables. In a sentence like the following: "Jane hurt herself." the reflexive "herself" can only refer to the previously mentioned antecedent, in this case "Jane", and can never refer to a different female person. In this example, the variable "herself" is bound to the noun "Jane" that occurs in subject position. Indicating the coindexation, the first interpretation with "Jane" and "herself" coindexed is permissible, but the other interpretation where they are not coindexed is ungrammatical: "Janei hurt herselfi." (interpretation #1: "herself" = "Jane") *"Janei hurt herselfj." (interpretation #2: "herself" = a female that is not Jane) The coreference binding can be represented using a lambda expression as mentioned in the previous Formal explanation section. 
The sentence with the reflexive could be represented as (λ"x"."x" hurt "x")Jane in which "Jane" is the subject referent argument and "λx.x hurt x" is the predicate function (a lambda abstraction) with the lambda notation and "x" indicating both the semantic subject and the semantic object of sentence as being bound. This returns the semantic interpretation "JANE hurt JANE" with "JANE" being the same person. Pronouns can also behave in a different way. In the sentence below "Ashley hit her." the pronoun "her" can only refer to a female that is not Ashley. This means that it can never have a reflexive meaning equivalent to "Ashley hit herself". The grammatical and ungrammatical interpretations are: *"Ashleyi hit heri." (interpretation #1: "her" = "Ashley") "Ashleyi hit herj." (interpretation #2: "her" = a female that is not Ashley) The first interpretation is impossible. Only the second interpretation is permitted by the grammar. Thus, it can be seen that reflexives and reciprocals are bound variables (known technically as anaphors) while true pronouns are free variables in some grammatical structures but variables that cannot be bound in other grammatical structures. The binding phenomena found in natural languages was particularly important to the syntactic government and binding theory (see also: Binding (linguistics)). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
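The notion of a free variable is easy to compute mechanically. The following Python sketch (illustrative only; the term encoding is invented for the example) returns the set of free variables of a lambda-calculus term, following the usual rules FV(x) = {x}, FV(λx.M) = FV(M) minus {x}, and FV(M N) = FV(M) ∪ FV(N):

def free_vars(term):
    # terms are ("var", name), ("lam", name, body) or ("app", fun, arg)
    kind = term[0]
    if kind == "var":
        return {term[1]}
    if kind == "lam":                      # lambda x. M binds x in M
        return free_vars(term[2]) - {term[1]}
    if kind == "app":
        return free_vars(term[1]) | free_vars(term[2])
    raise ValueError("unknown term: %r" % (term,))

# lambda x. (x y): x is bound, y is free
print(free_vars(("lam", "x", ("app", ("var", "x"), ("var", "y")))))   # {'y'}
# lambda x. x is a closed term
print(free_vars(("lam", "x", ("var", "x"))))                          # set()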
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\forall y\\,\\exists x\\,\\left(x=\\sqrt{y}\\right)." }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "\\sum_{k=1}^{10} f(k,n)," }, { "math_id": 5, "text": "\\int_0^\\infty x^{y-1} e^{-x}\\,dx," }, { "math_id": 6, "text": "\\lim_{h\\rightarrow 0}\\frac{f(x+h)-f(x)}{h}," }, { "math_id": 7, "text": "\\forall x\\ \\exists y\\ \\Big[\\varphi(x,y,z)\\Big]," }, { "math_id": 8, "text": "4" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "n=2k" }, { "math_id": 11, "text": "n^2=4k^2" }, { "math_id": 12, "text": "n^2" }, { "math_id": 13, "text": "\\sum_{x\\in S} \n\\quad\\quad \\prod_{x\\in S}\n\\quad\\quad \\int_0^\\infty \\cdots \\,dx\n\\quad\\quad \\lim_{x\\to 0}\n\\quad\\quad \\forall x\n\\quad\\quad \\exists x" }, { "math_id": 14, "text": "\\sum_{1, \\ldots, 10} \\left( k \\mapsto f(k,n) \\right)" }, { "math_id": 15, "text": "D \\left( x \\mapsto x^2 + 2x + 1 \\right) " }, { "math_id": 16, "text": "f = \\left[ (x_1, \\ldots , x_n) \\mapsto t \\right]" }, { "math_id": 17, "text": "\\sum_{x \\in S}{x^2}" }, { "math_id": 18, "text": "\\sum_{S}{(x \\mapsto x^2)}" }, { "math_id": 19, "text": "\\sum_{S}{f}" }, { "math_id": 20, "text": "\\forall x \\in S\\ P(x)" } ]
https://en.wikipedia.org/wiki?curid=147460
1475109
Minkowski's question-mark function
Function with unusual fractal properties In mathematics, Minkowski's question-mark function, denoted ?("x"), is a function with unusual fractal properties, defined by Hermann Minkowski in 1904. It maps quadratic irrational numbers to rational numbers on the unit interval, via an expression relating the continued fraction expansions of the quadratics to the binary expansions of the rationals, given by Arnaud Denjoy in 1938. It also maps rational numbers to dyadic rationals, as can be seen by a recursive definition closely related to the Stern–Brocot tree. Definition and intuition. One way to define the question-mark function involves the correspondence between two different ways of representing fractional numbers using finite or infinite binary sequences. Most familiarly, a string of 0s and 1s with a single point mark ".", like "11.001001000011111..." can be interpreted as the binary representation of a number. In this case this number is formula_0 There is a different way of interpreting the same sequence, however, using continued fractions. Interpreting the fractional part "0.001001000011111..." as a binary number in the same way, replace each consecutive block of 0's or 1's by its run length (or, for the first block of zeroes, its run length plus one), in this case generating the sequence formula_1. Then, use this sequence as the coefficients of a continued fraction: formula_2 The question-mark function reverses this process: it translates the continued-fraction of a given real number into a run-length encoded binary sequence, and then reinterprets that sequence as a binary number. For instance, for the example above, formula_3. To define this formally, if an irrational number formula_4 has the (non-terminating) continued-fraction representation formula_5 then the value of the question-mark function on formula_4 is defined as the value of the infinite series formula_6 In the same way, if a rational number formula_4 has the terminating continued-fraction representation formula_7 then the value of the question-mark function on formula_4 reduces to a finite sum, formula_8 Analogously to the way the question-mark function reinterprets continued fractions as binary numbers, the Cantor function can be understood as reinterpreting ternary numbers as binary numbers. Self-symmetry. The question mark is clearly visually self-similar. A monoid of self-similarities may be generated by two operators S and R acting on the unit square and defined as follows: formula_9 Visually, S shrinks the unit square to its bottom-left quarter, while R performs a point reflection through its center. A point on the graph of ? has coordinates ("x", ?("x")) for some x in the unit interval. Such a point is transformed by S and R into another point of the graph, because ? satisfies the following identities for all "x" ∈ [0, 1]: formula_10 These two operators may be repeatedly combined, forming a monoid. A general element of the monoid is then formula_11 for positive integers "a"1, "a"2, "a"3, …. Each such element describes a self-similarity of the question-mark function. This monoid is sometimes called the "period-doubling monoid", and all period-doubling fractal curves have a self-symmetry described by it (the de Rham curve, of which the question mark is a special case, is a category of such curves). The elements of the monoid are in correspondence with the rationals, by means of the identification of "a"1, "a"2, "a"3, … with the continued fraction [0; "a"1, "a"2, "a"3,…]. 
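Returning to the definition above, the alternating-series formula lends itself to direct computation. The following C sketch (an informal illustration, distinct from the Stern–Brocot-based algorithm given later in the article; the name question_mark is invented here) evaluates ?("x") from the continued-fraction coefficients of "x", accumulating the sum term by term; for an irrational argument the infinite series is simply truncated after the supplied coefficients.

#include <stdio.h>

/* Evaluate ?(x) from continued-fraction coefficients a[0..m] of x = [a0; a1, ..., am],
   using ?(x) = a0 + 2 * sum over n of (-1)^(n+1) / 2^(a1 + ... + an). */
static double question_mark(const int *a, int m) {
    double value = a[0];
    double sign = 1.0;   /* (-1)^(n+1), starting at +1 for n = 1 */
    double scale = 1.0;  /* 2^(a1 + ... + an), accumulated incrementally */
    for (int n = 1; n <= m; n++) {
        for (int k = 0; k < a[n]; k++)
            scale *= 2.0;
        value += 2.0 * sign / scale;
        sign = -sign;
    }
    return value;
}

int main(void) {
    /* The golden ratio is [1; 1, 1, 1, ...]; truncating after 15 terms
       gives a value close to the exact ?(golden ratio) = 5/3. */
    int golden[16] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
    printf("%f\n", question_mark(golden, 15));
    return 0;
}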
Since both formula_12 and formula_13 are linear fractional transformations with integer coefficients, the monoid may be regarded as a subset of the modular group PSL(2, Z). Quadratic irrationals. The question mark function provides a one-to-one mapping from the non-dyadic rationals to the quadratic irrationals, thus allowing an explicit proof of countability of the latter. These can, in fact, be understood to correspond to the periodic orbits for the dyadic transformation. This can be explicitly demonstrated in just a few steps. Dyadic symmetry. Define two moves: a left move and a right move, valid on the unit interval formula_14 as formula_15 and formula_16 and formula_17 and formula_18 The question mark function then obeys a left-move symmetry formula_19 and a right-move symmetry formula_20 where formula_21 denotes function composition. These can be arbitrarily concatenated. Consider, for example, the sequence of left-right moves formula_22 Adding the subscripts C and D, and, for clarity, dropping the composition operator formula_21 in all but a few places, one has: formula_23 Arbitrary finite-length strings in the letters L and R correspond to the dyadic rationals, in that every dyadic rational can be written as both formula_24 for integer "n" and "m" and as a finite-length string of bits formula_25 with formula_26 Thus, every dyadic rational is in one-to-one correspondence with some self-symmetry of the question mark function. Some notational rearrangements can make the above slightly easier to express. Let formula_27 and formula_28 stand for L and R. Function composition extends this to a monoid, in that one can write formula_29 and generally, formula_30 for some binary strings of digits "A", "B", where "AB" is just the ordinary concatenation of such strings. The dyadic monoid "M" is then the monoid of all such finite-length left-right moves. Writing formula_31 as a general element of the monoid, there is a corresponding self-symmetry of the question mark function: formula_32 Isomorphism. An explicit mapping between the rationals and the dyadic rationals can be obtained by providing a reflection operator formula_33 and noting that both formula_34 and formula_35 Since formula_36 is the identity, an arbitrary string of left-right moves can be re-written as a string of left moves only, followed by a reflection, followed by more left moves, a reflection, and so on, that is, as formula_37 which is clearly isomorphic to formula_38 from above. Evaluating some explicit sequence of formula_39 at the function argument formula_40 gives a dyadic rational; explicitly, it is equal to formula_25 where each formula_41 is a binary bit, zero corresponding to a left move and one corresponding to a right move. The equivalent sequence of formula_42 moves, evaluated at formula_40 gives a rational number formula_43 It is explicitly the one provided by the continued fraction formula_44 keeping in mind that it is a rational because the sequence formula_45 was of finite length. This establishes a one-to-one correspondence between the dyadic rationals and the rationals. Periodic orbits of the dyadic transform. Consider now the periodic orbits of the dyadic transformation. These correspond to bit-sequences consisting of a finite initial "chaotic" sequence of bits formula_46, followed by a repeating string formula_47 of length formula_48. Such repeating strings correspond to a rational number. This is easily made explicit.
Write formula_49; one then clearly has formula_50 Tacking on the initial non-repeating sequence, one clearly has a rational number. In fact, "every" rational number can be expressed in this way: an initial "random" sequence, followed by a cycling repeat. That is, the periodic orbits of the map are in one-to-one correspondence with the rationals. Periodic orbits as continued fractions. Such periodic orbits have an equivalent periodic continued fraction, per the isomorphism established above. There is an initial "chaotic" orbit, of some finite length, followed by a repeating sequence. The repeating sequence generates a periodic continued fraction satisfying formula_51 This continued fraction has the form formula_52 with the formula_53 being integers, and satisfying formula_54 Explicit values can be obtained by writing formula_55 for the shift, so that formula_56 while the reflection is given by formula_57 so that formula_58. Both of these matrices are unimodular, arbitrary products remain unimodular, and result in a matrix of the form formula_59 giving the precise value of the continued fraction. As all of the matrix entries are integers, this matrix belongs to the projective modular group formula_60 Solving explicitly, one has that formula_61 It is not hard to verify that the solutions to this meet the definition of quadratic irrationals. In fact, every quadratic irrational can be expressed in this way. Thus the quadratic irrationals are in one-to-one correspondence with the periodic orbits of the dyadic transform, which are in one-to-one correspondence with the (non-dyadic) rationals, which are in one-to-one correspondence with the dyadic rationals. The question mark function provides the correspondence in each case. Properties of ?("x"). The question-mark function is a strictly increasing and continuous, but not absolutely continuous function. The derivative is defined almost everywhere, and can take on only two values, 0 (its value almost everywhere, including at all rational numbers) and formula_62. There are several constructions for a measure that, when integrated, yields the question-mark function. One such construction is obtained by measuring the density of the Farey numbers on the real number line. The question-mark measure is the prototypical example of what are sometimes referred to as multi-fractal measures. The question-mark function maps rational numbers to dyadic rational numbers, meaning those whose base two representation terminates, as may be proven by induction from the recursive construction outlined above. It maps quadratic irrationals to non-dyadic rational numbers. In both cases it provides an order isomorphism between these sets, making concrete Cantor's isomorphism theorem according to which every two unbounded countable dense linear orders are order-isomorphic. It is an odd function, and satisfies the functional equation ?("x" + 1) = ?("x") + 1; consequently "x" ↦ ?("x") − "x" is an odd periodic function with period one. If ?("x") is irrational, then x is either algebraic of degree greater than two, or transcendental. The question-mark function has fixed points at 0, 1/2, and 1, and at least two more, symmetric about the midpoint. One is approximately 0.42037. It was conjectured by Moshchevitin that they were the only 5 fixed points. In 1943, Raphaël Salem raised the question of whether the Fourier–Stieltjes coefficients of the question-mark function vanish at infinity.
In other words, he wanted to know whether or not formula_63 This was answered affirmatively by Jordan and Sahlsten, as a special case of a result on Gibbs measures. The graph of Minkowski question mark function is a special case of fractal curves known as de Rham curves. Algorithm. The recursive definition naturally lends itself to an algorithm for computing the function to any desired degree of accuracy for any real number, as the following C function demonstrates. The algorithm descends the Stern–Brocot tree in search of the input x, and sums the terms of the binary expansion of "y" = ?("x") on the way. As long as the loop invariant "qr" − "ps" = 1 remains satisfied there is no need to reduce the fraction "m"/"n", since it is already in lowest terms. Another invariant is "p"/"q" ≤ "x" &lt; "r"/"s". The codice_0 loop in this program may be analyzed somewhat like a codice_1 loop, with the conditional break statements in the first three lines making up the condition. The only statements in the loop that can possibly affect the invariants are in the last two lines, and these can be shown to preserve the truth of both invariants as long as the first three lines have executed successfully without breaking out of the loop. A third invariant for the body of the loop (up to floating point precision) is "y" ≤ ?("x") &lt; "y" + "d", but since d is halved at the beginning of the loop before any conditions are tested, our conclusion is only that "y" ≤ ?("x") &lt; "y" + 2"d" at the termination of the loop. To prove termination, it is sufficient to note that the sum codice_2 increases by at least 1 with every iteration of the loop, and that the loop will terminate when this sum is too large to be represented in the primitive C data type codice_3. However, in practice, the conditional break when codice_4 is what ensures the termination of the loop in a reasonable amount of time.
/* Minkowski's question-mark function */
double minkowski(double x) {
    long p = x;
    long q = 1, r = p + 1, s = 1, m, n;
    double d = 1, y = p;
    if (x &lt; p || (p &lt; 0) ^ (r &lt;= 0))
        return x; /* out of range ?(x) =~ x */
    for (;;) { /* invariants: q * r - p * s == 1 &amp;&amp; p / q &lt;= x &amp;&amp; x &lt; r / s */
        d /= 2;
        if (y + d == y)
            break; /* reached max possible precision */
        m = p + r;
        if ((m &lt; 0) ^ (p &lt; 0))
            break; /* sum overflowed */
        n = q + s;
        if (n &lt; 0)
            break; /* sum overflowed */
        if (x &lt; (double)m / n) {
            r = m;
            s = n;
        } else {
            y += d;
            p = m;
            q = n;
        }
    }
    return y + d; /* final round-off */
}
Probability distribution. Restricting the Minkowski question mark function to ?:[0,1] → [0,1], it can be used as the cumulative distribution function of a singular distribution on the unit interval. This distribution is symmetric about its midpoint, with raw moments of about "m"1 = 0.5, "m"2 = 0.290926, "m"3 = 0.186389 and "m"4 = 0.126992, and so a mean and median of 0.5, a standard deviation of about 0.2023, a skewness of 0, and an excess kurtosis about −1.147. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Historical sources. &lt;templatestyles src="Refbegin/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "2+1+\\frac18+\\frac1{64}+\\cdots=\\pi." }, { "math_id": 1, "text": "[3;3,1,2,1,4,5,\\dots]" }, { "math_id": 2, "text": "3+\\frac{1}{\\displaystyle 3+\\frac{1}{\\displaystyle 1+\\frac{1}{\\displaystyle 2+\\frac{1}{\\displaystyle 1+\\frac{1}{\\displaystyle 4+\\frac{1}{\\displaystyle 5+\\dots}}}}}}\\approx 3.2676" }, { "math_id": 3, "text": "\\operatorname{?}(3.2676)\\approx\\pi" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x=a_0+\\frac{1}{\\displaystyle a_1+\\frac{1}{\\displaystyle a_2+\\cdots}}=[a_0;a_1,a_2,\\dots]" }, { "math_id": 6, "text": "\\operatorname{?}(x) = a_0 + 2 \\sum_{n=1}^\\infty \\frac{\\left(-1\\right)^{n+1}}{2^{a_1 + \\cdots + a_n}}." }, { "math_id": 7, "text": "[a_0;a_1,a_2,\\dots,a_m]" }, { "math_id": 8, "text": "\\operatorname{?}(x) = a_0 + 2 \\sum_{n=1}^m \\frac{\\left(-1\\right)^{n+1}}{2^{a_1 + \\cdots + a_n}}." }, { "math_id": 9, "text": "\\begin{align}\n S(x, y) &= \\left( \\frac x {x+1}, \\frac y 2 \\right), \\\\[5px]\n R(x, y) &= (1 - x, 1 - y).\n\\end{align}" }, { "math_id": 10, "text": "\\begin{align}\n \\operatorname{?}\\left(\\frac{x}{x+1}\\right) &= \\frac{\\operatorname{?}(x)}{2}, \\\\[5px]\n \\operatorname{?}(1 - x) &= 1 - \\operatorname{?}(x).\n \\end{align}" }, { "math_id": 11, "text": "S^{a_1} R S^{a_2} R S^{a_3} \\cdots" }, { "math_id": 12, "text": "S : x \\mapsto \\frac{x}{x+1}" }, { "math_id": 13, "text": "T : x \\mapsto 1 - x" }, { "math_id": 14, "text": "0\\le x\\le 1" }, { "math_id": 15, "text": "L_D(x) = \\frac{x}{2}" }, { "math_id": 16, "text": "L_C(x) = \\frac{x}{1+x}" }, { "math_id": 17, "text": "R_D(x) = \\frac{1+x}{2}" }, { "math_id": 18, "text": "R_C(x) = \\frac{1}{2-x}" }, { "math_id": 19, "text": "L_D \\circ \\text{?} = \\text{?} \\circ L_C" }, { "math_id": 20, "text": "R_D \\circ \\text{?} = \\text{?} \\circ R_C" }, { "math_id": 21, "text": "\\circ" }, { "math_id": 22, "text": "LRLLR." }, { "math_id": 23, "text": "L_D R_D L_D L_D R_D \\circ \\text{?} = \\text{?} \\circ L_C R_C L_C L_C R_C" }, { "math_id": 24, "text": "y=n/2^m" }, { "math_id": 25, "text": "y=0.b_1b_2b_3\\cdots b_m" }, { "math_id": 26, "text": "b_k\\in \\{0,1\\}." }, { "math_id": 27, "text": "g_0" }, { "math_id": 28, "text": "g_1" }, { "math_id": 29, "text": "g_{010}=g_0g_1g_0" }, { "math_id": 30, "text": "g_Ag_B=g_{AB}" }, { "math_id": 31, "text": "\\gamma\\in M" }, { "math_id": 32, "text": "\\gamma_D\\circ \\text{?} = \\text{?}\\circ \\gamma_C" }, { "math_id": 33, "text": "r(x)=1-x" }, { "math_id": 34, "text": "r\\circ R_D\\circ r = L_D" }, { "math_id": 35, "text": "r\\circ R_C\\circ r = L_C" }, { "math_id": 36, "text": "r^2=1" }, { "math_id": 37, "text": "L^{a_1}rL^{a_2}rL^{a_3}\\cdots" }, { "math_id": 38, "text": "S^{a_1}TS^{a_2}TS^{a_3}\\cdots" }, { "math_id": 39, "text": "L_D,R_D" }, { "math_id": 40, "text": "x=1" }, { "math_id": 41, "text": "b_k\\in\\{0,1\\}" }, { "math_id": 42, "text": "L_C,R_C" }, { "math_id": 43, "text": "p/q." }, { "math_id": 44, "text": "p/q=[a_1,a_2,a_3,\\ldots,a_j]" }, { "math_id": 45, "text": "(a_1,a_2,a_3,\\ldots,a_j)" }, { "math_id": 46, "text": "b_0,b_1,b_2,\\ldots, b_{k-1}" }, { "math_id": 47, "text": "b_k,b_{k+1},b_{k+2},\\ldots, b_{k+m-1}" }, { "math_id": 48, "text": "m" }, { "math_id": 49, "text": "y=\\sum_{j=0}^{m-1} b_{k+j}2^{-j-1}" }, { "math_id": 50, "text": "\\sum_{j=0}^\\infty b_{k+j}2^{-j-1} = y\\sum_{j=0}^\\infty 2^{-jm} = \\frac{y}{1-2^m}" }, { "math_id": 51, "text": "x=[a_n,a_{n+1},a_{n+2},\\ldots,a_{n+r},x]." 
}, { "math_id": 52, "text": "x = \\frac{\\alpha x+\\beta}{\\gamma x+\\delta}" }, { "math_id": 53, "text": "\\alpha,\\beta,\\gamma,\\delta" }, { "math_id": 54, "text": "\\alpha \\delta-\\beta \\gamma=\\pm 1." }, { "math_id": 55, "text": "S\\mapsto \\begin{pmatrix} 1 & 0\\\\ 1 & 1\\end{pmatrix}" }, { "math_id": 56, "text": "S^n\\mapsto \\begin{pmatrix} 1 & 0\\\\ n & 1\\end{pmatrix}" }, { "math_id": 57, "text": "T\\mapsto \\begin{pmatrix} -1 & 1\\\\ 0 & 1\\end{pmatrix}" }, { "math_id": 58, "text": "T^2=I" }, { "math_id": 59, "text": "S^{a_n}TS^{a_{n+1}}T\\cdots TS^{a_{n+r}} = \\begin{pmatrix} \\alpha & \\beta\\\\ \\gamma & \\delta\\end{pmatrix}" }, { "math_id": 60, "text": "PSL(2,\\mathbb{Z})." }, { "math_id": 61, "text": "\\gamma x^2 + (\\delta-\\alpha)x-\\beta=0." }, { "math_id": 62, "text": "+\\infty" }, { "math_id": 63, "text": "\\lim_{n \\to \\infty}\\int_0^1 e^{2\\pi inx} \\, \\operatorname{d?}(x)=0." } ]
https://en.wikipedia.org/wiki?curid=1475109
1475381
Glossary of game theory
List of definitions of terms and concepts used in game theory Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject. Definitions of a game. Notational conventions. formula_4 is an element of formula_3. formula_5 an element of formula_6, is a tuple of strategies for all players other than i. Normal form game. A game in normal form is a function: formula_9 Given the "tuple" of "strategies" chosen by the players, one is given an allocation of "payments" (given as real numbers). A further generalization can be achieved by splitting the game into a composition of two functions: formula_10 the outcome function of the game (some authors call this function "the game form"), and: formula_11 the allocation of payoffs (or preferences) to players, for each outcome of the game. Extensive form game. This is given by a tree, where at each vertex of the "tree" a different player has the choice of choosing an edge. The "outcome" set of an extensive form game is usually the set of tree leaves. Cooperative game. A game in which players are allowed to form coalitions (and to enforce coalitionary discipline). A cooperative game is given by stating a "value" for every coalition: formula_12 It is always assumed that the empty coalition gains nil. "Solution concepts" for cooperative games usually assume that the players are forming the "grand coalition" formula_13, whose value formula_14 is then divided among the players to give an allocation. Simple game. A Simple game is a simplified form of a cooperative game, where the possible gain is assumed to be either '0' or '1'. A simple game is couple (N, W), where W is the list of "winning" coalitions, capable of gaining the loot ('1'), and N is the set of players. Another way to put it is: A "strong dictator" is formula_22-effective for every possible outcome. A "weak dictator" is formula_23-effective for every possible outcome. A game can have no more than one "strong dictator". Some games have multiple "weak dictators" (in "rock-paper-scissors" both players are "weak dictators" but none is a "strong dictator"). Also see "Effectiveness". Antonym: "dummy". Antonyms: "say", "veto", "dictator". S is β-effective if for any strategies of the complement of S, the members of S can answer with strategies that ensure outcome a. Antonym: "Dummy". Antonym: "Dummy". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
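As a concrete illustration of the normal-form definition above, the following C sketch (an informal example; the payoff numbers are the usual prisoner's dilemma values and are assumed for the illustration rather than taken from this glossary) represents a two-player, two-strategy game by one payoff matrix per player, and checks every strategy pair for whether any player could gain by deviating unilaterally, i.e. whether the pair is a pure-strategy Nash equilibrium.

#include <stdio.h>

/* payoff[i][a][b] is player i's payoff when player 0 plays a and player 1 plays b. */
static const int payoff[2][2][2] = {
    { {3, 0}, {5, 1} },  /* player 0 (rows: own strategy, columns: opponent's) */
    { {3, 5}, {0, 1} }   /* player 1 */
};

/* A strategy pair (a, b) is a pure-strategy Nash equilibrium when neither
   player can improve its own payoff by a unilateral deviation. */
static int is_nash(int a, int b) {
    for (int a2 = 0; a2 < 2; a2++)
        if (payoff[0][a2][b] > payoff[0][a][b]) return 0;
    for (int b2 = 0; b2 < 2; b2++)
        if (payoff[1][a][b2] > payoff[1][a][b]) return 0;
    return 1;
}

int main(void) {
    for (int a = 0; a < 2; a++)
        for (int b = 0; b < 2; b++)
            if (is_nash(a, b))
                printf("pure-strategy Nash equilibrium at (%d, %d)\n", a, b);
    return 0;
}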
[ { "math_id": 0, "text": " \\mathbb{R} " }, { "math_id": 1, "text": " \\mathrm{N} " }, { "math_id": 2, "text": " \\Sigma\\ = \\prod_{i \\in \\mathrm{N}} \\Sigma\\ ^i " }, { "math_id": 3, "text": " \\Sigma\\ ^i " }, { "math_id": 4, "text": " \\sigma\\ _i " }, { "math_id": 5, "text": " \\sigma\\ _{-i} " }, { "math_id": 6, "text": " \\Sigma\\ ^{-i} = \\prod_{ j \\in \\mathrm{N}, j \\ne i} \\Sigma\\ ^j " }, { "math_id": 7, "text": " \\Gamma" }, { "math_id": 8, "text": " \\mathbb{R} ^ \\mathrm{N} " }, { "math_id": 9, "text": " \\pi\\ : \\prod_{i\\in \\mathrm{N}} \\Sigma\\ ^ i \\to \\mathbb{R}^\\mathrm{N}" }, { "math_id": 10, "text": " \\pi\\ : \\prod_{i \\in \\mathrm{N}} \\Sigma\\ ^i \\to \\Gamma" }, { "math_id": 11, "text": " \\nu\\ : \\Gamma\\ \\to \\mathbb{R}^\\mathrm{N} " }, { "math_id": 12, "text": " \\nu\\ : 2^{\\mathbb{P}(N)} \\to \\mathbb{R}" }, { "math_id": 13, "text": " N " }, { "math_id": 14, "text": " \\nu(N) " }, { "math_id": 15, "text": " \\nu\\ : \\Gamma\\ \\to \\mathbb{R} ^\\mathrm{N} " }, { "math_id": 16, "text": " \\tau\\ _i " }, { "math_id": 17, "text": "\n\\forall \\sigma\\ _i \\in\\ \\Sigma\\ ^i \\quad \\quad\n\\pi\\ (\\sigma\\ _i ,\\sigma\\ _{-i} ) \\le \\pi\\ (\\tau\\ _i ,\\sigma\\ _{-i} )\n" }, { "math_id": 18, "text": " \\mathrm{S} \\subseteq \\mathrm{N} " }, { "math_id": 19, "text": "m \\in \\mathbb{N}" }, { "math_id": 20, "text": " \\forall a \\in \\mathrm{A}, \\; \\exist \\sigma\\ _n \\in \\Sigma\\ ^n \\; s.t. \\; \\forall \\sigma\\ _{-n} \\in \\Sigma\\ ^{-n}: \\; \\Gamma\\ (\\sigma\\ _{-n},\\sigma\\ _n) = a " }, { "math_id": 21, "text": " \\forall a \\in \\mathrm{A}, \\; \\forall \\sigma\\ _{-m} \\in \\Sigma\\ ^{-m} \\; \\exist \\sigma\\ _m \\in \\Sigma\\ ^m \\; s.t. \\; \\Gamma\\ (\\sigma\\ _{-m},\\sigma\\ _m) = a " }, { "math_id": 22, "text": "\\alpha" }, { "math_id": 23, "text": "\\beta" }, { "math_id": 24, "text": "\n\\forall j \\in \\mathrm{N} \\; \\quad \\nu\\ _j (a) \\le\\ \\nu\\ _j (b) \n" }, { "math_id": 25, "text": "\n\\exists i \\in \\mathrm{N} \\; s.t. \\; \\nu\\ _i (a) < \\nu\\ _i (b) \n" }, { "math_id": 26, "text": "\n\\forall \\sigma\\ _{-i} \\in\\ \\Sigma\\ ^{-i} \\quad \\quad\n\\pi\\ (\\sigma\\ _i ,\\sigma\\ _{-i} ) \\le \\pi\\ (\\tau\\ _i ,\\sigma\\ _{-i} )\n" }, { "math_id": 27, "text": " \n\\exists \\sigma\\ _{-i} \\in\\ \\Sigma\\ ^{-i} \\quad s.t. \\quad \n\\pi\\ (\\sigma\\ _i ,\\sigma\\ _{-i} ) < \\pi\\ (\\tau\\ _i ,\\sigma\\ _{-i} )\n" }, { "math_id": 28, "text": " \\sigma\\ = (\\sigma\\ _i) _ {i \\in \\mathrm{N}} " }, { "math_id": 29, "text": " \\sigma" }, { "math_id": 30, "text": "\n\\forall i \\in \\mathrm{N} \\quad \\forall \\tau\\ _i \\in\\ \\Sigma\\ ^i \\quad \n\\pi\\ (\\tau\\ ,\\sigma\\ _{-i} ) \\le \\pi\\ (\\sigma\\ )\n" }, { "math_id": 31, "text": "\n\\forall \\gamma\\ \\in \\Gamma\\ \\sum_{i \\in \\mathrm{N}} \\nu\\ _i (\\gamma\\ ) = const. " } ]
https://en.wikipedia.org/wiki?curid=1475381
14753970
Multiplicative partition
In number theory, a multiplicative partition or unordered factorization of an integer formula_0 is a way of writing formula_0 as a product of integers greater than 1, treating two products as equivalent if they differ only in the ordering of the factors. The number formula_0 is itself considered one of these products. Multiplicative partitions closely parallel the study of multipartite partitions, which are additive partitions of finite sequences of positive integers, with the addition made pointwise. Although the study of multiplicative partitions has been ongoing since at least 1923, the name "multiplicative partition" appears to have been introduced by . The Latin name "factorisatio numerorum" had been used previously. MathWorld uses the term unordered factorization. Application. describe an application of multiplicative partitions in classifying integers with a given number of divisors. For example, the integers with exactly 12 divisors take the forms formula_3, formula_4, formula_5, and formula_6, where formula_7, formula_8, and formula_9 are distinct prime numbers; these forms correspond to the multiplicative partitions formula_10, formula_11, formula_12, and formula_13 respectively. More generally, for each multiplicative partition formula_14 of the integer formula_15, there corresponds a class of integers having exactly formula_15 divisors, of the form formula_16 where each formula_17 is a distinct prime. This correspondence follows from the multiplicative property of the divisor function. Bounds on the number of partitions. credits with the problem of counting the number of multiplicative partitions of formula_0; this problem has since been studied by others under the Latin name of "factorisatio numerorum". If the number of multiplicative partitions of formula_0 is formula_18, McMahon and Oppenheim observed that its Dirichlet series generating function formula_19 has the product representation formula_20 The sequence of numbers formula_18 begins &lt;templatestyles src="Block indent/styles.css"/&gt; Oppenheim also claimed an upper bound on formula_18, of the form formula_21 but as showed, this bound is erroneous and the true bound is formula_22 Both of these bounds are not far from linear in formula_0: they are of the form formula_23. However, the typical value of formula_18 is much smaller: the average value of formula_18, averaged over an interval formula_24, is formula_25 a bound that is of the form formula_26. Additional results. observe, and prove, that most numbers cannot arise as the number formula_18 of multiplicative partitions of some formula_0: the number of values less than formula_27 which arise in this way is formula_28. Additionally, Luca et al. show that most values of formula_0 are not multiples of formula_18: the number of values formula_29 such that formula_18 divides formula_0 is formula_30. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
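The counting of multiplicative partitions described above can be carried out directly by recursion on the smallest allowed factor. The following C sketch (an informal illustration; the function name is invented here) counts the unordered factorizations of formula_0 by choosing factors in non-decreasing order, so that reorderings of the same product are never counted twice; formula_0 itself is counted as one of the products, matching the convention used in this article.

#include <stdio.h>

/* Count the factorizations of n into factors that are all >= m,
   choosing factors in non-decreasing order so each unordered
   factorization is counted exactly once. */
static unsigned long count_factorizations(unsigned long n, unsigned long m) {
    if (n == 1)
        return 1; /* the empty product */
    unsigned long total = 0;
    for (unsigned long d = m; d <= n; d++)
        if (n % d == 0)
            total += count_factorizations(n / d, d);
    return total;
}

int main(void) {
    /* 12 = 2*6 = 3*4 = 2*2*3, plus 12 itself, gives 4, as in the example above. */
    printf("%lu\n", count_factorizations(12, 2));
    /* 16, 2*8, 4*4, 2*2*4, 2*2*2*2 gives 5. */
    printf("%lu\n", count_factorizations(16, 2));
    return 0;
}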
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "B_i" }, { "math_id": 3, "text": "p^{11}" }, { "math_id": 4, "text": "p\\cdot q^5" }, { "math_id": 5, "text": "p^2\\cdot q^3" }, { "math_id": 6, "text": "p\\cdot q\\cdot r^2" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "q" }, { "math_id": 9, "text": "r" }, { "math_id": 10, "text": "12" }, { "math_id": 11, "text": "2\\cdot 6" }, { "math_id": 12, "text": "3\\cdot 4" }, { "math_id": 13, "text": "2\\cdot 2\\cdot 3" }, { "math_id": 14, "text": "k = \\prod t_i" }, { "math_id": 15, "text": "k" }, { "math_id": 16, "text": "\\prod p_i^{t_i-1}," }, { "math_id": 17, "text": "p_i" }, { "math_id": 18, "text": "a_n" }, { "math_id": 19, "text": "f(s)" }, { "math_id": 20, "text": "f(s)=\\sum_{n=1}^{\\infty}\\frac{a_n}{n^s}=\\prod_{k=2}^{\\infty}\\frac{1}{1-k^{-s}}." }, { "math_id": 21, "text": "a_n\\le n\\left(\\exp\\frac{\\log n\\log\\log\\log n}{\\log\\log n}\\right)^{-2+o(1)}," }, { "math_id": 22, "text": "a_n\\le n\\left(\\exp\\frac{\\log n\\log\\log\\log n}{\\log\\log n}\\right)^{-1+o(1)}." }, { "math_id": 23, "text": "n^{1-o(1)}" }, { "math_id": 24, "text": "x\\le n\\le x+N" }, { "math_id": 25, "text": "\\bar a = \\exp\\left(\\frac{4\\sqrt{\\log N}}{\\sqrt{2e}\\log\\log N}\\bigl(1+o(1)\\bigr)\\right)," }, { "math_id": 26, "text": "n^{o(1)}" }, { "math_id": 27, "text": "N" }, { "math_id": 28, "text": "N^{O(\\log\\log\\log N/\\log\\log N)}" }, { "math_id": 29, "text": "n\\le N" }, { "math_id": 30, "text": "O(N/\\log^{1+o(1)} N)" } ]
https://en.wikipedia.org/wiki?curid=14753970
14754977
TANK-binding kinase 1
Protein-coding gene in the species Homo sapiens TBK1 (TANK-binding kinase 1) is an enzyme with kinase activity. Specifically, it is a serine / threonine protein kinase. It is encoded by the TBK1 gene in humans. This kinase is mainly known for its role in innate immunity antiviral response. However, TBK1 also regulates cell proliferation, apoptosis, autophagy, and anti-tumor immunity. Insufficient regulation of TBK1 activity leads to autoimmune, neurodegenerative diseases or tumorigenesis. Structure and regulation of activity. TBK1 is a non-canonical IKK kinase that phosphorylates the nuclear factor kB (NFkB). It shares sequence homology with canonical IKK. The N-terminus of the protein contains the kinase domain (region 9-309) and the ubiquitin-like domain (region 310-385). The C-terminus is formed by two coiled-coil structures (region 407-713) that provide a surface for homodimerization. The autophosphorylation of serine 172, which requires homodimerization and ubiquitinylation of lysines 30 and 401, is necessary for kinase activity. Involvement in signaling pathways. TBK1 is involved in many signaling pathways and forms a node between them. For this reason, regulation of its involvement in individual signaling pathways is necessary. This is provided by adaptor proteins that interact with the dimerization domain of TBK1 to determine its location and access to substrates. Binding to TANK leads to localization to the perinuclear region and phosphorylation of substrates which is required for subsequent production of type I interferons (IFN-I). In contrast, binding to NAP1 and SINTBAD leads to localization in the cytoplasm and involvement in autophagy. Another adaptor protein that determines the location of TBK1 is TAPE. TAPE targets TBK1 to endolysosomes. A key interest in TBK1 is due to its role in innate immunity, especially in antiviral responses. TBK1 is redundant with IKKformula_0, but TBK1 seems to play a more important role. After triggering antiviral signaling through PRRs (pattern recognition receptors), TBK1 is activated. Subsequently, it phosphorylates the transcription factor IRF3, which is translocated to the nucleus, and promotes production of IFN-I. As a non-canonical IKK, TBK1 is also involved in the non-canonical NFkB pathway. It phosphorylates p100/NF-κB2, which is subsequently processed in the proteasome and released as a p52 subunit. This subunit dimerizes with RelB and mediates gene expression. In the canonical NFkB pathway, the NF-kappa-B (NFKB) complex of proteins is inhibited by I-kappa-B (IKB) proteins, which inactivate NFKB by trapping it in the cytoplasm. Phosphorylation of serine residues on the IKB proteins by IKB kinases marks them for destruction via the ubiquitination pathway, thereby allowing activation and nuclear translocation of the NFKB complex. The protein encoded by this gene is similar to IKB kinases and can mediate NFkB activation in response to certain growth factors. TBK1 promotes autophagy involved in pathogen and mitochondrial clearance. TBK1 phosphorylates autophagy receptors and components of the autophagy apparatus. Furthermore, TBK1 is also involved in the regulation of cell proliferation, apoptosis and glucose metabolism. Interactions. TANK-binding kinase 1 has been shown to interact with: &lt;templatestyles src="Div col/styles.css"/&gt; Transcription factors activated upon TBK1 activation include IRF3, IRF7 and ZEB1. Clinical significance. Deregulation of TBK1 activity and mutations in this protein are associated with many diseases. 
Due to the role of TBK1 in cell survival, deregulation of its activity is associated with tumorigenesis. Mutations in TBK1 are also associated with many autoimmune (e.g., rheumatoid arthritis, systemic lupus erythematosus), neurodegenerative (e.g., amyotrophic lateral sclerosis), and infantile (e.g., herpesviral encephalitis) diseases. The loss of TBK1 causes embryonic lethality in mice. Inhibition of IκB kinase (IKK) and IKK-related kinases, IKBKE (IKKε) and TANK-binding kinase 1 (TBK1), has been investigated as a therapeutic option for the treatment of inflammatory diseases and cancer, and a way to overcome resistance to cancer immunotherapy. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\epsilon" } ]
https://en.wikipedia.org/wiki?curid=14754977
14756059
Environmental toxicology
Multidisciplinary field of science Environmental toxicology is a multidisciplinary field of science concerned with the study of the harmful effects of various chemical, biological and physical agents on living organisms. Ecotoxicology is a subdiscipline of environmental toxicology concerned with studying the harmful effects of toxicants at the population and ecosystem levels. Rachel Carson is considered the mother of environmental toxicology, as she made it a distinct field within toxicology in 1962 with the publication of her book "Silent Spring", which covered the effects of uncontrolled pesticide use. Carson's book was based extensively on a series of reports by Lucille Farrier Stickel on the ecological effects of the pesticide DDT. Organisms can be exposed to various kinds of toxicants at any life cycle stage, some of which are more sensitive than others. Toxicity can also vary with the organism's placement within its food web. Bioaccumulation occurs when an organism stores toxicants in fatty tissues, which may eventually establish a trophic cascade and the biomagnification of specific toxicants. Biodegradation releases carbon dioxide and water as by-products into the environment. This process is typically limited in areas affected by environmental toxicants. Harmful effects of such chemical and biological agents as toxicants from pollutants, insecticides, pesticides, and fertilizers can affect an organism and its community by reducing its species diversity and abundance. Such changes in population dynamics affect the ecosystem by reducing its productivity and stability. Although legislation implemented since the early 1970s had intended to minimize harmful effects of environmental toxicants upon all species, McCarty (2013) has warned that "longstanding limitations in the implementation of the simple conceptual model that is the basis of current aquatic toxicity testing protocols" may lead to an impending environmental toxicology "dark age". Governing policies on environmental toxicity. U.S. policies. To protect the environment, the National Environmental Policy Act (NEPA) was written. The main point that NEPA brings to light is that it "assures that all branches of government give proper consideration to the environment prior to undertaking any major federal actions that significantly affect the environment." This law was passed in 1970 and also founded the Council on Environmental Quality (CEQ). The importance of CEQ was that it helped push forward additional policy areas. CEQ created environmental programs including the Federal Water Pollution Control Act, the Toxic Substances Control Act, the Resource Conservation and Recovery Act (RCRA), and the Safe Drinking Water Act. CEQ was essential in creating the foundation for most of the "current environmental legislation except for Superfund and asbestos control legislation." Some of NEPA's initial impacts came from its interpretation in the courts. Courts interpreted NEPA to cover not only the direct environmental impacts of federal projects but also their indirect effects. Toxic Substances Control Act. TSCA, also known as the Toxic Substances Control Act, is a federal law that regulates industrial chemicals that have the potential to be harmful to humans and the environment. TSCA specifically targets "the manufacture, importation, storage, use, disposal, and degradation of chemicals in commercial use." The EPA allows the following to be done: "1. Pre-manufacture testing of chemicals to determine health or environmental risk 2.
Review of chemicals for significant risk prior to the start of commercial production 3. Restriction or prohibition on the production or disposal of certain chemicals 4. Import and export control of chemicals prior to their entering or leaving the USA." The Clean Air Act. The Clean Air Act was strengthened by the signing of the 1990 amendments. These amendments addressed reducing acid rain, protecting the ozone layer, improving air quality, and controlling toxic pollutants. The Clean Air Act was revised and, with support from President George H. W. Bush, the revisions were signed into law. The biggest threats that this act targets are urban air pollution, toxic air emissions, stratospheric ozone depletion, acid rain, etc. Apart from targeting these specific areas, it also established a national operating permits program "to make the law more workable, and strengthened enforcement to help ensure better compliance with the Act." Regulations and enforcement actions on polychlorinated biphenyls. As mentioned above, though the United States did ban the use of polychlorinated biphenyls (PCBs), there is the possibility that they are present in products made before the PCB ban in 1979. The Environmental Protection Agency (EPA) released its ban on PCBs on April 19, 1979. "Although PCBs are no longer being produced in this country, we will now bring under control the vast majority of PCBs still in use," said EPA Administrator Douglas M. Castle. "This will help prevent further contamination of our air, water and food supplies from a toxic and very persistent man-made chemical." PCBs have been tested on laboratory animals and have caused cancer and birth defects. PCBs are suspected of having certain effects on the liver and skin of humans, and they are also suspected of causing cancer. EPA "estimates that 150 million pounds of PCBs are dispersed throughout the environment, including air and water supplies; an additional 290 million pounds are located in landfills in this country." Again, even though they have been banned, there is still a large amount of PCBs circulating within the environment, possibly causing effects on the skin and liver of humans. There were some cases in which people or companies disposed of PCBs incorrectly. Up until now, there have been four cases in which EPA had to take legal action against people or companies for their methods of disposal. In the two cases involving companies, the companies were fined $28,600 for improper disposal. It is unknown what fine was charged against the three people for "illegally dumping PCBs along 210 miles of roadway in North Carolina." Though PCBs were banned, there are some exceptions where they are being used. The activities that have been completely prohibited are "the manufacture, processing, distribution in commerce, and "non-enclosed" (open to the environment) uses of PCBs unless specifically authorized or exempted by EPA. "Totally enclosed" uses (contained, and therefore exposure to PCBs is unlikely) will be allowed to continue for the life of the equipment." The use of electrical equipment containing PCBs is allowed under specific controlled conditions. Out of the 750 million pounds of PCBs, electrical equipment represents 578 million pounds. Any new manufacture of PCBs is prohibited. Sources of environmental toxicity. There are many sources of environmental toxicity that can lead to the presence of toxicants in our food, water and air.
These sources include organic and inorganic pollutants, pesticides and biological agents, all of which can have harmful effects on living organisms. There can be so-called point sources of pollution, for instance the drains from a specific factory, but also non-point sources (diffuse sources) like the rubber from car tires that contain numerous chemicals and heavy metals that are spread in the environment. PCBs. PCBs are organic pollutants that are still present in our environment today, despite being banned in many countries, including the United States and Canada. Due to the persistent nature of PCBs in aquatic ecosystems, many aquatic species contain high levels of this chemical. For example, wild salmon ("Salmo salar") in the Baltic Sea have been shown to have significantly higher PCB levels than farmed salmon as the wild fish live in a heavily contaminated environment. PCBs belong to a group of human-produced "organic chemicals known as chlorinated hydrocarbons". The chemical and physical properties of a PCB are determined by the quantity and location of its chlorine atoms, and unlike other chemicals, they have no distinct form of identification. The range of toxicity is not consistent, and because PCBs have certain properties (chemical stability, non-flammability) they have been used in a colossal number of commercial and industrial applications. Some of those include "Electrical, heat transfer and hydraulic equipment, plasticizers in paints, plastics and rubber products and pigments, dyes and carbonless copy paper", to name a few. Heavy metals. Metals like cadmium, mercury, and lead have minimal roles in living organisms, if any, so the accumulation of these metals, even in small amounts, can lead to health issues. For example, because humans consume fish, it is important to monitor fishes for such trace metals. It has been known for a long time that these trace metals get passed up the food web because of their lack of biodegradability or capability to break down. Such build-up can lead to liver damage and cardiovascular diseases in people. It is also important to monitor fishes not just for public health, but also to assess the health of coastal ecosystems. For instance, it has been shown that fish (e.g. rainbow trout) exposed to higher cadmium levels grow at a slower rate than fish exposed to lower levels or none. Moreover, cadmium can potentially alter the productivity and mating behaviours of these fish. Heavy metals can also alter the genetic makeup in aquatic organisms. In Canada, a study examined genetic diversity in wild yellow perch along various heavy metal concentration gradients in lakes polluted by mining operations. Researchers wanted to determine what effect metal contamination had on evolutionary responses among populations of yellow perch. Along the gradient, genetic diversity over all loci was negatively correlated with liver cadmium contamination. Additionally, there was a negative correlation observed between copper contamination and genetic diversity. Some aquatic species have evolved heavy metal tolerances. In response to high heavy metal concentrations, a Dipteran species, "Chironomus riparius", of the midge family, Chironomidae, has evolved to become tolerant to cadmium toxicity in aquatic environments. Altered life histories, increased cadmium excretion, and sustained growth under cadmium exposure are evidence that "C. riparius" exhibits genetically based heavy metal tolerance.
Additionally, a case study in China looked at the concentrations of Cu (copper), Cr (chromium), Cd (cadmium), and Pb (lead) in the edible parts of the fishes "Pelteobagrus fulvidraco," the banded catfish, and "Cyprinus carpio," the common carp living in Taihu Lake. These metals were actively being released from sources such as industrial waste stemming from agriculture and mining and then going into coastal ecosystems and becoming stored in the local fish, especially their organs. This was especially alarming because too much copper consumption can lead to diarrhea and nausea in humans and liver damage in fish. Additionally, too much lead can lead to defects in learning, behavior, metabolism, and growth in some vertebrates, including humans. Most of these heavy metals were found in the two fish species' liver, kidney, and gills; however, their concentrations were fortunately found to be below the threshold amounts for human consumption set by the Chinese Food Health Criterion. Overall, the study showed that the remediation efforts here did in fact reduce the amount of heavy metals built up in the fish. Generally speaking, the specific rate of build-up of metals in fish depends on the metal, the fish species, the aquatic environment, the time of year, and fishes' organs. For example, metals are most commonly found in carnivorous species, with omnivorous species following behind. In this case, perhaps due to the properties of the water differing at different times of the year, more heavy metals were detected in the two fish species in the summer compared to the winter. Overall, it is relatively understood that the amount of metals in the liver and kidney of a fish represents the amount that has been actively stored in their bodies whereas the amount of metals in the gills represents the amount that has been accumulated from the surrounding water. This is why the gills are thought to be better bioindicators of metal pollution. Radiation. Radiation is given off by matter as either rays or waves of pure energy or high-speed particles. Rays or waves of energy, also known as electromagnetic radiation, include sunlight, X-rays, radar, and radio waves. Particle radiation includes alpha and beta particles and neutrons. When humans and animals are exposed to high radiation levels, they can develop cancer, congenital disabilities, or skin burns. Plants also face problems when exposed to large levels of radiation. After the Chernobyl disaster in 1986, the nuclear radiation damaged the surrounding plants' reproductive tissues, and it took approximately three years for these plants to regain their reproductive abilities. The study of radiation and its effects on the environment is known as radioecology. Metals toxicity. The best-known or most common heavy metals include zinc, arsenic, copper, lead, nickel, chromium, aluminum, and cadmium. All of these pose certain risks to human and environmental health. Though certain amounts of these metals can actually have an important role in, for example, maintaining certain biochemical and physiological "functions in living organisms when in very low concentrations", they become noxious when they exceed certain threshold concentrations. Heavy metals are a major part of environmental pollution, and their toxicity "is a problem of increasing significance for ecological, evolutionary, nutritional and environmental reasons."
Aluminum. Aluminum is the most common natural metal in the Earth's crust and is naturally cycled throughout the environment via processes like the weathering of rocks and volcano eruptions. Those natural processes release more aluminum into freshwater environments than do humans, but anthropogenic impact has been causing values to rise above the amounts recommended by the U.S. EPA and World Health Organization. Aluminum is used commonly in industrially-made items like paints, paper, household appliances, packaging, processing of food and water, and for health care items like antiperspirants and vaccine production. Run-off from those industrial uses then brings the metal into the environment. Generally, too much exposure to aluminum affects motor and cognitive skills. In mammals, the metal has been shown to affect gene expression, DNA repair, and DNA binding. One study showed how the effects of aluminum include neurodegeneration and nerve cell death in mice. Another study has shown it to be related to human diseases associated with the nervous system such as Alzheimer's and Parkinson's disease and autism. Exposure to contaminants can change the tissues of marine life like fish too. For example, its accumulation has been shown to cause neurodegeneration in cerebral regions of the brains such as those of "O. mossambicus", otherwise known as Mozambique tilapia. Aluminum also decreases the locomotive abilities of fishes, since aluminum is thought to negatively impact their oxygen supply. Finally, the metal causes slow responses to arousal and other environmental stimuli, overall abnormal behavior, and changes in neurotransmitters in their bodies such as adrenaline and dopamine. Arsenic. Arsenic, one of the most important heavy metals, causes ecological problems and health issues in humans. It has a "semimetallic property, is prominently toxic and carcinogenic, and is extensively available in the form of oxides or sulfides or as a salt of iron, sodium, calcium, copper, "etc."" It is also one of the most abundant elements on earth and its specific inorganic forms are very dangerous to living creatures (animals, plants, and humans) and the environment. In humans, arsenic can cause cancer in the bladder, skin, lungs and liver. One of the major sources of arsenic exposure in humans is contaminated water, which is a problem in more than 30 countries in the world. Humans tend to encounter arsenic by "natural means, industrial source, or from unintended sources." Water can become contaminated by arsenical pesticides or natural arsenical chemicals. There are some cases in which arsenic has been used in suicide attempts, and it can result in acute poisoning. Arsenic "is a protoplastic poison since it affects primarily the sulphydryl group of cells causing malfunctioning of cell respiration, cell enzymes and mitosis." Lead. Another extremely toxic metal, lead has been known to cause "extensive environmental contamination and health problems in many parts of the world." In physical appearance, lead is a bright, silver-colored metal. Some sources of lead pollution in the environment include metal plating and finishing operations, soil waste, factory chimneys, smelting of ores, wastes from battery industries, fertilizers and pesticides, and many more. Unlike other metals such as copper, lead has no known biological function.
In the US, "more than 100 to 200,000 tons of lead per year is being released from vehicle exhausts" and some can be brought in by plants, flow in water or fixation into the soil. Humans come in contact with lead through mining, fossil fuel burning. In burning, lead and its compounds are exposed into air, soil, and water. Lead can have different effects on the body and effects the central nervous system. Someone who has come in contact with lead can have either acute or chronic lead poisoning. Those who experience acute poisoning have symptoms such as appetite, headache, hypertension, abdominal pain, renal dysfunction, fatigue, sleeplessness, arthritis, hallucinations and vertigo." Chronic exposure on the other hand, can cause more severe symptoms such as, "mental retardation, birth defects, psychosis, autism, allergies, dyslexia, weight loss, hyperactivity, paralysis, muscular weakness, brain damage, kidney damage and may even cause death." Mercury. Mercury, a shiny silver-white, can transform into a colorless and odorless gas when heated up. Mercury highly affects the marine environment and there have been many studies conducted on the effects on the water environment. The biggest sources of mercury pollution include "agriculture, municipal wastewater discharges, mining, incineration, and discharges of industrial wastewater" all relatively connected to water. Mercury exists in three different forms and all three possess different levels of bioavailability and toxicity. The three forms include organic compounds, metallic elements and inorganic salts. As stated above, they are present in water resources such as oceans, rivers and lakes. Studies have shown that mercury turns into methylmercury (MeHg) and seeps into the environment. Plankton then get the metal into their system, and they are then eaten by other marine organisms. This cycle continues up the food web. This process is called biomagnification and "causes significant disturbance to aquatic lives." Mercury hurts marine life but can also be very hurtful towards humans' nervous system. Higher levels of mercury exposure can change many brain functions. It can "lead to shyness, tremors, memory problems, irritability, and changes in vision or hearing." Furthermore, breathing in mercury can lead to dysfunction in sensory and mental capabilities in humans as well such as with the use of one's motor skills, cognition, and sight. Because of these worrying side effects, there was a study done in the Pacific coast of Columbia to assess the levels of mercury in the environment and in the people living there from gold-mining. The researchers found that the median total mercury concentration in hair measured from people living in two communities, Quibdo and Paimado, was 1.26formula_0g/g and 0.67 formula_0g/g respectively. Residents in other areas of Columbia have been found to have similar levels. These levels are greater than the recommended threshold values held by the U.S. Environmental Protection Agency (EPA). In addition, they measured the concentration of mercury found in fish living nearby in the Atrato River. Even though the concentration was determined to have a low risk factor for human health and consumption, the concentration (0.5 g/g) was above the World Health Organization's (WHO) recommended threshold. 
They also determined that approximately 44% of the total sites around the river had a moderate level of pollution, further emphasizing that more intervention programs should be conducted to curb the seepage of mercury into the environment. This was a major concern, especially since the Choco region is a biodiversity hotspot for all manner of organisms, not just humans. In the end, the highest levels of total airborne mercury were found to be in the gold shops downtown, further emphasizing the cost of gold-mining in such native communities and the need for better programs directed towards preventing its spread. Cadmium. According to the ATSDR ranking, cadmium is the 7th most toxic heavy metal. Cadmium is notable in that once humans (at work) or animals in their environment are exposed to it, it will accumulate inside the body throughout the life of the human or animal. Though cadmium was used as a replacement for tin in WWI and as a pigment in paint industries back in the day, currently it is seen mostly in rechargeable batteries, tobacco smoke and the production of some alloys. As stated by the Agency for Toxic Substances and Disease Registry, in "the US, more than 500,000 workers get exposed to toxic cadmium each year." It is also stated that the highest exposure to cadmium can be seen in China and Japan. The effects of cadmium on the kidneys and bones are significant. It can interfere with bone mineralization, which "is the process of laying down minerals on a matrix of the bone". This can happen through renal dysfunction or bone damage. Chromium. Chromium, the 7th most abundant element, can be released into the environment when oil and coal are burned, and also through sewage and fertilizers. Chromium usage can be seen in "industries such as metallurgy, electroplating, production of paints and pigments, tanning, wood preservation, chemical production and pulp and paper production." Chromium toxicity affects the "biological processes in various plants such as maize, wheat, barley, cauliflower, citrullus and in vegetables. Chromium toxicity causes chlorosis and necrosis in plants." Pesticides. Pesticides are a major source of environmental toxicity. These chemically synthesized agents have been known to persist in the environment long after their administration. The poor biodegradability of pesticides can result in bioaccumulation of chemicals in various organisms along with biomagnification within a food web. Pesticides can be categorized according to the pests they target. Insecticides are used to eliminate agricultural pests that attack various fruits and crops. Herbicides target herbal pests such as weeds and other unwanted plants that reduce crop production. Pesticides in general have been shown to negatively impact the reproductive and endocrine systems of various reptiles and amphibians, so much so that they are cautiously thought to be one of the main factors behind the decline in their populations all over the world. These pesticides impair their immune, nervous, and behavioral systems, causing lower fertility rates, abnormal hormone levels, and lower fitness of offspring. Amphibians are thought to be especially prone to decline because the release of agricultural pesticides coincides with the secretion of pheromones during their season of reproduction. For instance, it has been demonstrated that greater quantities of pesticides correlate with a greater number of defects in toads. For example, the chloroacetanilide class of herbicides is used worldwide in the control of weeds and grasses for agriculture.
They are mainly used for crops such as corn, rice, soybean, sunflower, cotton, among others, and are able to stay in the environment for long periods of time. Thus they can be found in soil, groundwater, and surface water due to soil erosion, leaching, and surface runoff. The amount of time they stay in the environment depends on the soil type and climate conditions like temperature and moisture. Chloroacetanilide herbicides include acetochlor and alachlor, among others. They are all listed as B2, L2, and C classes of carcinogens by the U.S. EPA. Another herbicide called atrazine is still commonly used throughout the world even though the European Union banned its usage in 2005. Shockingly, its use was still prevalent in the U.S. in 2016 and in Australia for some time. Because it can dissolve in water, many concerns have been raised about its potential to contaminate soil and water along the surface and ground. Various studies have been conducted to determine the impact of atrazine on wildlife. For example, studies have shown it to cause stunted growth and suppress or damage the immune and reproductive systems of aquatic life. It is also linked to cancer not only in fish, but also in mammals like humans. Additionally, atrazine is known to induce aromatase, which causes the bodies of fish and amphibians to produce estrogen even when they are not supposed to. The herbicide also causes changes in gene expression that can be passed down from parent to offspring and interfere with thyroid homeostasis. For example, a study done on male African clawed frogs showed that exposure to atrazine led to smaller testicular size and lower testosterone levels. Another study done with the Northern leopard frog and Blanchard's cricket frog found that atrazine lowered their success with metamorphosis, the process of turning into an adult frog from the initial stage of a tadpole. This makes sense since metamorphosis is controlled by hormones from the thyroid gland, which atrazine is known to negatively impact. Furthermore, a study from the Czech Republic examined the effects of atrazine on the freshwater crayfish "Cherax destructor", a keystone species. They found that the hepatopancreas, the body part that serves as both the liver and pancreas in these crustaceans, became damaged after exposure. A build-up of lactate and ammonia also resulted, leading to liver failure, tissue hypoxia, lactic acidosis, muscle fatigue, and pain. There was also damage and even deterioration of the gills; however, they were able to heal after two weeks. Damage to gills was also found in the bivalve "Diplodon expansus". DDT. Dichlorodiphenyltrichloroethane (DDT) is an organochlorine insecticide that has been banned due to its adverse effects on both humans and wildlife. DDT's insecticidal properties were first discovered in 1939. Following this discovery, DDT was widely used by farmers in order to kill agricultural pests such as the potato beetle, codling moth and corn earworm. In 1962, the harmful effects of the widespread and uncontrolled use of DDT were detailed by Rachel Carson in her book "Silent Spring". The large quantities of DDT and its metabolite dichlorodiphenyldichloroethylene (DDE) that were released into the environment were toxic to both animals and humans. DDT is not easily biodegradable and thus the chemical accumulates in soil and sediment runoff. Water systems become polluted and marine life such as fish and shellfish accumulate DDT in their tissues.
Furthermore, this effect is amplified when animals that consume the fish also consume the chemical, demonstrating biomagnification within the food web. The process of biomagnification has detrimental effects on various bird species because DDT and DDE accumulate in their tissues, inducing egg-shell thinning. Rapid declines in bird populations have been seen in Europe and North America as a result. Humans who consume animals or plants that are contaminated with DDT experience adverse health effects. Various studies have shown that DDT has damaging effects on the liver, nervous system and reproductive system of humans. By 1972, the United States Environmental Protection Agency (EPA) had banned the use of DDT in the United States. Despite the regulation of this pesticide in North America, it is still used in certain areas of the world. Traces of this chemical have been found in noticeable amounts in a tributary of the Yangtze River in China, suggesting the pesticide is still in use in this region. Though DDT was banned in 1972, some of the pesticide (as well as other chemicals) lingered in the environment. This lingering of toxic material led to the near extinction of the peregrine falcon. High levels of DDT were found in the eggs, fat and tissues of the birds. The government worked with conservation groups to help the birds breed away from the contaminated areas. Finally, in 1999 the birds were taken off the U.S. endangered species list. Sulfuryl fluoride. Sulfuryl fluoride is an insecticide that is broken down into fluoride and sulfate when released into the environment. Fluoride has been known to negatively affect aquatic wildlife. Elevated levels of fluoride have been shown to impair the feeding efficiency and growth of the common carp ("Cyprinus carpio"). Exposure to fluoride alters ion balance and total protein and lipid levels within these fish, which changes their body composition and disrupts various biochemical processes. PFAS chemicals. Per- and polyfluoroalkyl substances, known as PFAS, are a group of approximately 15,000 chemicals. The common structure of these chemicals involves a functional group and a long carbon tail that is fully or partially fluorinated. The first PFAS chemical, polytetrafluoroethylene (PTFE), was accidentally synthesized in 1938 by DuPont researcher Roy J. Plunkett while making refrigerants. The chemical was found to have unique and useful properties such as resistance to water, oil, and extreme temperatures. In 1945, DuPont patented this chemical, along with other PFAS chemicals like PFOA, under the now-household name Teflon. American multinational conglomerate 3M began mass producing Teflon in 1947. Then in the 1960s, the US Navy and 3M created a new type of fire-fighting foam using PFAS chemicals, "aqueous film-forming foam" or AFFF, which was then shipped around the world and used at airports, military sites, and fire-fighting training centers. The chemicals are now used in many household products including nail polish, makeup, shampoos, soaps, toothpastes, menstrual products, clothes, contact lenses and toilet paper. The chemicals are also used in fracking, artificial grass, lubricants (mechanical, industrial and bicycle), food packaging, magazines, pesticides, refrigerants, and even surgically implanted medical devices. These chemicals have been given the nickname "forever chemicals" due to their extreme stability and resistance to natural degradation in the environment.
They also bioaccumulate in humans and animals, with many PFAS chemicals having half-lives of several years. They also biomagnify, so animals higher in the food chain tend to have higher concentrations of the chemicals in their blood. PFAS has been found in almost all human blood samples tested; one study found that 97% of Americans have PFAS in their blood. PFAS chemicals have been linked to high cholesterol, altered kidney and thyroid function, ulcerative colitis, immunosuppression, decreased effectiveness of vaccines, low birth weight, reproductive issues, and cancers such as kidney, testicular and liver cancer. However, the full health effects of these chemicals are still being uncovered. PFAS chemicals are now ubiquitous in the environment; recent research found PFAS chemicals in all rainwater studied. DuPont and 3M had both done internal studies on the potential harmful effects of these chemicals, and had known for decades of their potential to cause cancers and low birth weight. Yet this research was not made public and the companies continued to make large profits from the harmful chemicals. In 2000, 3M announced that it would voluntarily halt production of PFOA and PFOS (technically known as "long-chain" chemicals) and stop putting them in products by 2002. They replaced these chemicals with new "short-chain" PFAS formulations, but scientists have found these replacements to be possibly just as hazardous. Lawsuits have now sprung up around the world against companies and governments that knew of the harm these chemicals could do and continued to use them. Regulatory discussions on these chemicals are now happening worldwide. Remediation of these "forever chemicals" has been attempted in hot spots around the world, by placing the contaminated soil in landfill or heating it at extremely high temperatures. However, both approaches are very expensive, and new, cheaper remediation tools are desperately required. Organophosphate chemicals. Organophosphate pesticides (OPs) are ester derivatives of phosphoric acid. These substances are found in pesticides, herbicides, and insecticides and were generally thought to be safe because they degrade quickly in the natural environment provided there is sunlight, air, and soil. However, studies have shown these pesticides to negatively affect photosynthesis and growth in plants. These substances also get into the soil via runoff and cause decreases in soil fertility as well. Moreover, they have also been known to cause erratic swimming, respiratory stress, changes in behavior, and delayed metamorphosis in aquatic organisms. In a specific case study, organophosphate pesticides such as chlorpyrifos, diazinon, fenitrothion, and quinalphos used in agriculture in the northwestern part of Bangladesh were found to pose high or acute ecological risks in surface water and soil for aquatic insects and crustaceans. More specifically, the study showed higher ecological risks for Daphnia than for other aquatic organisms. The discovery of such high concentrations of pesticides could be due to local farmers using more pesticide than the recommended amount. This may be because agriculture is the country's biggest economic activity. With the country's rising population, the need for more food will only increase, thereby putting more pressure on farmers. Cyanobacteria and cyanotoxins. Cyanobacteria, or blue-green algae, are photosynthetic bacteria. They grow in many types of water.
Their rapid growth ("bloom") is related to high water temperature as well as eutrophication (enrichment with minerals and nutrients, often due to runoff from the land, that induces excessive growth of these algae). Many genera of cyanobacteria produce several toxins. Cyanotoxins can be dermatotoxic, neurotoxic, and hepatotoxic, though death related to their exposure is rare. Cyanotoxins and their non-toxic components can cause allergic reactions, but this is poorly understood. Despite their known toxicities, developing a specific biomarker of exposure has been difficult because of the complex mechanism of action these toxins possess. Cyanotoxins in drinking water. The occurrence of these toxins in drinking water depends on two factors: their levels in the raw source water, and the effectiveness of the methods used to remove them when drinking water is produced. Because there are no data on the presence or absence of these toxins in drinking water, it is very hard to monitor the amounts present in finished water. This is a result of the U.S. not having state or federal programs in place that monitor these toxins at drinking water treatment plants. Effects on humans. Though data on the effects of these toxins are limited, the available evidence suggests that they attack the liver and kidneys. A hepatoenteritis-like outbreak occurred on Palm Island, Australia, in 1979, due to the consumption of water that contained "C. raciborskii", a cyanobacterium that can produce cylindrospermopsin. Most cases, typically involving children, required hospitalization. Reported effects included vomiting, kidney damage (due to loss of water, protein and electrolytes), fever, bloody diarrhea, and headaches. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=14756059
1475657
Jacobi triple product
Mathematical identity found by Jacobi in 1829 In mathematics, the Jacobi triple product is the identity: formula_0 for complex numbers "x" and "y", with |"x"| &lt; 1 and "y" ≠ 0. It was introduced by Jacobi (1829) in his work "Fundamenta Nova Theoriae Functionum Ellipticarum". The Jacobi triple product identity is the Macdonald identity for the affine root system of type "A"1, and is the Weyl denominator formula for the corresponding affine Kac–Moody algebra. Properties. Jacobi's proof relies on Euler's pentagonal number theorem, which is itself a specific case of the Jacobi triple product identity. Let formula_1 and formula_2. Then we have formula_3 The Rogers–Ramanujan identities follow with formula_4, formula_2 and formula_4, formula_5. The Jacobi triple product also allows the Jacobi theta function to be written as an infinite product as follows: Let formula_6 and formula_7 Then the Jacobi theta function formula_8 can be written in the form formula_9 Using the Jacobi triple product identity, the theta function can be written as the product formula_10 There are many different notations used to express the Jacobi triple product. It takes on a concise form when expressed in terms of "q"-Pochhammer symbols: formula_11 where formula_12 is the infinite "q"-Pochhammer symbol. It enjoys a particularly elegant form when expressed in terms of the Ramanujan theta function. For formula_13 it can be written as formula_14 Proof. Let formula_15 Substituting "xy" for "y" and multiplying the new terms out gives formula_16 Since formula_17 is meromorphic for formula_18, it has a Laurent series formula_19 which satisfies formula_20 so that formula_21 and hence formula_22 Evaluating "c"0("x"). Showing that formula_23 (that is, that the coefficient of formula_24, as a series in "x", equals 1) is technical. One way is to set formula_25 and show that both the numerator and the denominator of formula_26 are modular of weight 1/2 under formula_27; since they are also 1-periodic and bounded on the upper half-plane, the quotient has to be constant, so that formula_28. Other proofs. A different proof is given by G. E. Andrews based on two identities of Euler. For the analytic case, see Apostol. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
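The identity above lends itself to a quick numerical sanity check. The following Python sketch is an added illustration, not part of the original article; the function names and the sample values of "x" and "y" are arbitrary choices, and both the infinite product and the series are simply truncated after a finite number of terms.

```python
# Numerical sanity check of the Jacobi triple product identity:
# compare a truncated version of the infinite product with a truncated
# version of the two-sided series for sample values |x| < 1, y != 0.

def triple_product_lhs(x, y, terms=60):
    """Truncated product over m = 1 .. terms."""
    prod = 1.0
    for m in range(1, terms + 1):
        prod *= (1 - x**(2 * m)) \
              * (1 + x**(2 * m - 1) * y**2) \
              * (1 + x**(2 * m - 1) / y**2)
    return prod

def triple_product_rhs(x, y, terms=60):
    """Truncated sum over n = -terms .. terms."""
    return sum(x**(n * n) * y**(2 * n) for n in range(-terms, terms + 1))

x, y = 0.3, 0.8
lhs, rhs = triple_product_lhs(x, y), triple_product_rhs(x, y)
print(lhs, rhs)                      # both are about 1.684
assert abs(lhs - rhs) < 1e-10        # the truncations agree to high precision
```

For |"x"| well below 1 the factors and terms decay rapidly, so a few dozen terms already give agreement to machine precision.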
[ { "math_id": 0, "text": "\\prod_{m=1}^\\infty\n\\left( 1 - x^{2m}\\right)\n\\left( 1 + x^{2m-1} y^2\\right)\n\\left( 1 +\\frac{x^{2m-1}}{y^2}\\right)\n= \\sum_{n=-\\infty}^\\infty x^{n^2} y^{2n},\n" }, { "math_id": 1, "text": "x=q\\sqrt q" }, { "math_id": 2, "text": "y^2=-\\sqrt{q}" }, { "math_id": 3, "text": "\\phi(q) = \\prod_{m=1}^\\infty \\left(1-q^m \\right) =\n\\sum_{n=-\\infty}^\\infty (-1)^n q^{\\frac{3n^2-n}{2}}." }, { "math_id": 4, "text": "x=q^2\\sqrt q" }, { "math_id": 5, "text": "y^2=-q\\sqrt{q}" }, { "math_id": 6, "text": "x=e^{i\\pi \\tau}" }, { "math_id": 7, "text": "y=e^{i\\pi z}." }, { "math_id": 8, "text": "\n\\vartheta(z; \\tau) = \\sum_{n=-\\infty}^\\infty e^{\\pi {\\rm{i}} n^2 \\tau + 2 \\pi {\\rm{i}} n z}\n" }, { "math_id": 9, "text": "\\sum_{n=-\\infty}^\\infty y^{2n}x^{n^2}. " }, { "math_id": 10, "text": "\\vartheta(z; \\tau) = \\prod_{m=1}^\\infty\n\\left( 1 - e^{2m \\pi {\\rm{i}} \\tau}\\right)\n\\left[ 1 + e^{(2m-1) \\pi {\\rm{i}} \\tau + 2 \\pi {\\rm{i}} z}\\right]\n\\left[ 1 + e^{(2m-1) \\pi {\\rm{i}} \\tau -2 \\pi {\\rm{i}} z}\\right].\n" }, { "math_id": 11, "text": "\\sum_{n=-\\infty}^\\infty q^{\\frac{n(n+1)}{2}}z^n =\n(q;q)_\\infty \\; \\left(-\\tfrac{1}{z};q\\right)_\\infty \\; (-zq;q)_\\infty," }, { "math_id": 12, "text": "(a;q)_\\infty" }, { "math_id": 13, "text": "|ab|<1" }, { "math_id": 14, "text": "\\sum_{n=-\\infty}^\\infty a^{\\frac{n(n+1)}{2}} \\; b^{\\frac{n(n-1)}{2}} = (-a; ab)_\\infty \\;(-b; ab)_\\infty \\;(ab;ab)_\\infty." }, { "math_id": 15, "text": "f_x(y) = \\prod_{m=1}^\\infty \\left( 1 - x^{2m} \\right)\\left( 1 + x^{2m-1} y^2\\right)\\left( 1 +x^{2m-1}y^{-2}\\right)" }, { "math_id": 16, "text": "f_x(xy) = \\frac{1+x^{-1}y^{-2}}{1+xy^2}f_x(y) = x^{-1}y^{-2}f_x(y)" }, { "math_id": 17, "text": "f_x" }, { "math_id": 18, "text": "|y| > 0" }, { "math_id": 19, "text": "f_x(y)=\\sum_{n=-\\infty}^\\infty c_n(x)y^{2n}" }, { "math_id": 20, "text": "\\sum_{n=-\\infty}^\\infty c_n(x)x^{2n+1} y^{2n}=x f_x(x y)=y^{-2}f_x(y)=\\sum_{n=-\\infty}^\\infty c_{n+1}(x)y^{2n}" }, { "math_id": 21, "text": "c_{n+1}(x) = c_n(x)x^{2n+1} = \\dots = c_0(x) x^{(n+1)^2}" }, { "math_id": 22, "text": "f_x(y)=c_0(x) \\sum_{n=-\\infty}^\\infty x^{n^2} y^{2n}" }, { "math_id": 23, "text": "c_0(x) = 1" }, { "math_id": 24, "text": "y^0" }, { "math_id": 25, "text": "y= 1" }, { "math_id": 26, "text": "\\frac1{c_0(e^{2i\\pi z})} =\\frac{\\sum\\limits_{n=-\\infty}^\\infty e^{2i\\pi n^2 z}}{\\prod\\limits_{m=1}^\\infty (1-e^{2i\\pi mz})(1+e^{2i\\pi(2m-1)z})^2}" }, { "math_id": 27, "text": "z\\mapsto -\\frac{1}{4z}" }, { "math_id": 28, "text": "c_0(x)=c_0(0)=1" } ]
https://en.wikipedia.org/wiki?curid=1475657
147566
Otto cycle
Thermodynamic cycle for spark ignition piston engines An Otto cycle is an idealized thermodynamic cycle that describes the functioning of a typical spark ignition piston engine. It is the thermodynamic cycle most commonly found in automobile engines. The Otto cycle is a description of what happens to a gas as it is subjected to changes of pressure, temperature, volume, addition of heat, and removal of heat. The gas that is subjected to those changes is called the system. The system, in this case, is defined to be the fluid (gas) within the cylinder. By describing the changes that take place within the system, it also describes, in inverse, the system's effect on the environment. In the case of the Otto cycle, the effect will be to produce enough net work from the system so as to propel an automobile and its occupants in the environment. The Otto cycle is constructed from: Top and bottom of the loop: a pair of quasi-parallel and isentropic processes (frictionless, adiabatic, reversible). Left and right sides of the loop: a pair of parallel isochoric processes (constant volume). The isentropic process of compression or expansion implies that there will be no inefficiency (loss of mechanical energy) and no transfer of heat into or out of the system during that process. The cylinder and piston are assumed to be impermeable to heat during that time. Work is performed on the system during the lower isentropic compression process. Heat flows into the Otto cycle through the left pressurizing process and some of it flows back out through the right depressurizing process. The summation of the work added to the system plus the heat added minus the heat removed yields the net mechanical work generated by the system. Processes. The processes are described by: 0–1, an intake stroke at constant pressure; 1–2, an isentropic compression; 2–3, heat addition at constant volume; 3–4, an isentropic expansion; 4–1, heat rejection at constant volume; and 1–0, an exhaust stroke at constant pressure. The Otto cycle thus consists of isentropic compression, heat addition at constant volume, isentropic expansion, and rejection of heat at constant volume. In the case of a four-stroke Otto cycle, technically there are two additional processes: one for the exhaust of waste heat and combustion products at constant pressure (isobaric), and one for the intake of cool oxygen-rich air also at constant pressure; however, these are often omitted in a simplified analysis. Even though those two processes are critical to the functioning of a real engine, wherein the details of heat transfer and combustion chemistry are relevant, for the simplified analysis of the thermodynamic cycle, it is more convenient to assume that all of the waste-heat is removed during a single volume change. History. The four-stroke engine was first patented by Alphonse Beau de Rochas in 1861. Before that, in about 1854–57, two Italians (Eugenio Barsanti and Felice Matteucci) invented an engine that was rumored to be very similar, but the patent was lost. The first person to build a working four-stroke engine, a stationary engine using a coal gas-air mixture for fuel (a gas engine), was German engineer Nicolaus Otto. This is why the four-stroke principle today is commonly known as the Otto cycle and four-stroke engines using spark plugs often are called Otto engines. Processes. The cycle has four parts: a mass containing a mixture of fuel and oxygen is drawn into the cylinder by the descending piston, it is compressed by the piston rising, the mass is ignited by a spark releasing energy in the form of heat, the resulting gas is allowed to expand as it pushes the piston down, and finally the mass is exhausted as the piston rises a second time.
As the piston is capable of moving along the cylinder, the volume of the gas changes with its position in the cylinder. The compression and expansion processes induced on the gas by the movement of the piston are idealized as reversible, i.e., no useful work is lost through turbulence or friction and no heat is transferred to or from the gas during those two processes. After the expansion is completed in the cylinder, the remaining heat is extracted and finally the gas is exhausted to the environment. Mechanical work is produced during the expansion process and some of that is used to compress the air mass of the next cycle. The mechanical work produced minus that used for the compression process is the net work gained, and that can be used for propulsion or for driving other machines. Alternatively, the net work gained is the difference between the heat produced and the heat removed. Process 0–1 intake stroke (blue shade). A mass of air (working fluid) is drawn into the cylinder, from 0 to 1, at atmospheric pressure (constant pressure) through the open intake valve, while the exhaust valve is closed during this process. The intake valve closes at point 1. Process 1–2 compression stroke ("B" on diagrams). The piston moves from the crank end (BDC, bottom dead centre and maximum volume) to the cylinder head end ("TDC", top dead centre and minimum volume) as the working gas with initial state 1 is compressed isentropically to state point 2, through the compression ratio ("V"1/"V"2). Mechanically this is the isentropic compression of the air/fuel mixture in the cylinder, also known as the compression stroke. This isentropic process assumes that no mechanical energy is lost due to friction and no heat is transferred to or from the gas, hence the process is reversible. The compression process requires that mechanical work be added to the working gas. Generally the compression ratio is around 9–10:1 ("V"1:"V"2) for a typical engine. Process 2–3 ignition phase ("C" on diagrams). The piston is momentarily at rest at "TDC". During this instant, which is known as the ignition phase, the air/fuel mixture remains in a small volume at the top of the compression stroke. Heat is added to the working fluid by the combustion of the injected fuel, with the volume essentially being held constant. The pressure rises and the ratio formula_0 is called the "explosion ratio". Process 3–4 expansion stroke ("D" on diagrams). The increased high pressure exerts a force on the piston and pushes it towards the "BDC". Expansion of the working fluid takes place isentropically and work is done by the system on the piston. The volume ratio formula_1 is called the "isentropic expansion ratio". (For the Otto cycle it is the same as the compression ratio formula_2.) Mechanically this is the expansion of the hot gaseous mixture in the cylinder, known as the expansion (power) stroke. Process 4–1 idealized heat rejection ("A" on diagrams). The piston is momentarily at rest at "BDC". The working gas pressure drops instantaneously from point 4 to point 1 during a constant volume process as heat is removed to an idealized external sink that is brought into contact with the cylinder head. In modern internal combustion engines, the heat-sink may be surrounding air (for low powered engines), or a circulating fluid, such as coolant. The gas has returned to state 1. Process 1–0 exhaust stroke. The exhaust valve opens at point 1.
As the piston moves from "BDC" (point 1) to "TDC" (point 0) with the exhaust valve opened, the gaseous mixture is vented to the atmosphere and the process starts anew. Cycle analysis. In process 1–2 the piston does work on the gas and in process 3–4 the gas does work on the piston during those isentropic compression and expansion processes, respectively. Processes 2–3 and 4–1 are isochoric processes; heat is transferred into the system from 2–3 and out of the system from 4–1, but no work is done on the system or extracted from the system during those processes. No work is done during an isochoric (constant volume) process because addition or removal of work from a system requires the movement of the boundaries of the system; hence, as the cylinder volume does not change, no shaft work is added to or removed from the system. Four different equations are used to describe those four processes. A simplification is made by assuming that changes of the kinetic and potential energy that take place in the system (mass of gas) can be neglected and then applying the first law of thermodynamics (energy conservation) to the mass of gas as it changes state as characterized by the gas's temperature, pressure, and volume. During a complete cycle, the gas returns to its original state of temperature, pressure and volume, hence the net internal energy change of the system (gas) is zero. As a result, the energy (heat or work) added to the system must be offset by energy (heat or work) that leaves the system. In the analysis of thermodynamic systems, the convention is to count energy that enters the system as positive and energy that leaves the system as negative. Equation 1a. During a complete cycle, the net change of energy of the system is zero: formula_3 The above states that the system (the mass of gas) returns to the original thermodynamic state it was in at the start of the cycle. Here formula_4 is the energy added to the system from 1–2–3 and formula_5 is the energy removed from the system from 3–4–1. In terms of work and heat added to the system, Equation 1b is: formula_6 Each term of the equation can be expressed in terms of the internal energy of the gas at each point in the process: formula_7 formula_8 formula_9 formula_10 The energy balance Equation 1b becomes formula_11 To illustrate the example, we choose some values for the points in the illustration: formula_12 formula_13 formula_14 formula_15 These values are arbitrarily but rationally selected. The work and heat terms can then be calculated. The energy added to the system as work during the compression from 1 to 2 is formula_16 The energy added to the system as heat from point 2 to 3 is formula_17 The energy removed from the system as work during the expansion from 3 to 4 is formula_18 The energy removed from the system as heat from point 4 to 1 is formula_19 The energy balance is formula_20 Note that energy added to the system is counted as positive and energy leaving the system is counted as negative, and the summation is zero as expected for a complete cycle that returns the system to its original state. From the energy balance, the net work term for the system is: formula_21 The net energy out of the system as work is -1, meaning the system has produced one net unit of energy that leaves the system in the form of work. The net heat term for the system is: formula_22 Since energy added to the system as heat is counted as positive, the system has gained one net unit of heat.
This matches the energy produced by the system as work out of the system. Thermal efficiency is the quotient of the net work from the system to the heat added to the system. Equation 2: formula_23 formula_24 Alternatively, thermal efficiency can be derived strictly from the heat added and the heat rejected. formula_25 Supplying the fictitious values: formula_26 In the Otto cycle, there is no heat transfer during processes 1–2 and 3–4 as they are isentropic processes. Heat is supplied only during the constant volume process 2–3 and heat is rejected only during the constant volume process 4–1. The above values are absolute values that might, for instance, have units of joules (assuming the MKS system of units is to be used) and would be of use for a particular engine with particular dimensions. In the study of thermodynamic systems the extensive quantities such as energy, volume, or entropy (versus intensive quantities of temperature and pressure) are placed on a unit mass basis, and so too are the calculations, making them more general and therefore of more general use. Hence, each term involving an extensive quantity could be divided by the mass, giving the terms units of joules/kg (specific energy), meters3/kg (specific volume), or joules/(kelvin·kg) (specific entropy, heat capacity) etc. and would be represented using lower case letters, u, v, s, etc. Equation 1 can now be related to the specific heat equation for constant volume. The specific heats are particularly useful for thermodynamic calculations involving the ideal gas model. formula_27 Rearranging yields: formula_28 Inserting the specific heat equation into the thermal efficiency equation (Equation 2) yields: formula_29 Upon rearrangement: formula_30 Next, noting from the diagrams that formula_31 (see the isentropic relations for an ideal gas), both of these terms can be omitted. The equation then reduces to: Equation 2: formula_32 Since the Otto cycle uses isentropic processes during the compression (process 1 to 2) and expansion (process 3 to 4), the isentropic equations of ideal gases and the constant pressure/volume relations can be used to yield Equations 3 and 4. Equation 3: formula_33 Equation 4: formula_34 where formula_35, with formula_36 the specific heat ratio. The derivation of the previous equations is found by solving these four equations respectively (where formula_37 is the specific gas constant): formula_38 formula_39 formula_40 formula_41 Further simplifying Equation 4, where formula_42 is the compression ratio formula_43: Equation 5: formula_44 From inverting Equation 4 and inserting it into Equation 2, the final thermal efficiency can be expressed as: Equation 6: formula_45 From analyzing Equation 6 it is evident that the Otto cycle efficiency depends directly upon the compression ratio formula_42. Since the formula_46 for air is 1.4, an increase in formula_42 will produce an increase in formula_47. However, the formula_48 for combustion products of the fuel/air mixture is often taken at approximately 1.3. The foregoing discussion implies that it is more efficient to have a high compression ratio. The standard ratio is approximately 10:1 for typical automobiles. Usually this ratio is not increased much because of the possibility of autoignition, or "knock", which places an upper limit on the compression ratio. During the compression process 1–2 the temperature rises, therefore an increase in the compression ratio causes an increase in temperature.
Autoignition occurs when the temperature of the fuel/air mixture becomes too high before it is ignited by the flame front. The compression stroke is intended to compress the products before the flame ignites the mixture. If the compression ratio is increased, the mixture may auto-ignite before the compression stroke is complete, leading to "engine knocking". This can damage engine components and will decrease the brake horsepower of the engine. Power. The power produced by the Otto cycle is the energy developed per unit of time. Otto engines are called four-stroke engines. The intake stroke and compression stroke require one rotation of the engine crankshaft. The power stroke and exhaust stroke require another rotation. For two rotations there is one work-generating stroke. From the above cycle analysis, the net work produced by the system is: formula_49 If the units used were MKS, the cycle would have produced one joule of energy in the form of work. For an engine of a particular displacement, such as one liter, the mass of gas of the system can be calculated assuming the engine is operating at standard temperature (20 °C) and pressure (1 atm). Using the universal gas law, the mass of one liter of gas at room temperature and sea-level pressure is: formula_50 "V"=0.001 m3, "R"=0.286 kJ/(kg·K), "T"=293 K, "P"=101.3 kN/m2, giving "M"=0.00121 kg At an engine speed of 3000 RPM there are 1500 work-strokes/minute or 25 work-strokes/second. formula_51 Power is 25 times that, since there are 25 work-strokes/second: formula_52 If the engine uses multiple cylinders with the same displacement, the result would be multiplied by the number of cylinders. These results are the product of the values of the internal energy that were assumed for the four states of the system at the end of each of the four strokes (two rotations). They were selected only for the sake of illustration, and are obviously of low value. Substituting values from an actual engine would produce results closer to those of that engine, though still higher than the real engine's, as the many simplifying assumptions made in the analysis overlook inefficiencies; such results would overestimate the power output. Increasing power and efficiency. The difference between the exhaust and intake pressures and temperatures means that some increase in efficiency can be gained by use of a turbocharger, removing from the exhaust flow some part of the remaining energy and transferring that to the intake flow to increase the intake pressure. A gas turbine can extract useful work energy from the exhaust stream and that can then be used to pressurize the intake air. The pressure and temperature of the exhausting gases would be reduced as they expand through the gas turbine and that work is then applied to the intake gas stream, increasing its pressure and temperature. The transfer of energy amounts to an efficiency improvement and the resulting power density of the engine is also improved. The intake air is typically cooled so as to reduce its volume as the work produced per stroke is a direct function of the amount of mass taken into the cylinder; denser air will produce more work per cycle. Practically speaking, the intake air mass temperature must also be reduced to prevent premature ignition in a petrol fueled engine; hence, an intercooler is used to remove some energy as heat and so reduce the intake temperature. Such a scheme increases both the engine's efficiency and power.
The application of a supercharger driven by the crankshaft does increase the power output (power density) but does not increase efficiency, as it uses some of the net work produced by the engine to pressurize the intake air and fails to extract the otherwise wasted energy associated with the flow of exhaust, at high temperature and pressure, to the ambient. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
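As an added illustration (not part of the original article), the arithmetic of the worked example and of the efficiency and power estimates above can be reproduced in a few lines of Python. The numbers are the same illustrative values used in the text, not measurements of a real engine.

```python
# Reproduce the arithmetic of the worked Otto-cycle example above.

# Internal energies at the four state points (illustrative values from the text)
U1, U2, U3, U4 = 1.0, 5.0, 9.0, 4.0

W_12 = U2 - U1        # work added during isentropic compression (+4)
Q_23 = U3 - U2        # heat added at constant volume (+4)
W_34 = U4 - U3        # work extracted during isentropic expansion (-5)
Q_41 = U1 - U4        # heat rejected at constant volume (-3)

assert W_12 + Q_23 + W_34 + Q_41 == 0     # energy balance over the complete cycle
net_work_out = -(W_12 + W_34)             # one net unit of work leaves the system
efficiency = net_work_out / Q_23          # 0.25 for these numbers

# Ideal air-standard efficiency as a function of the compression ratio r
gamma, r = 1.4, 10.0
eta_ideal = 1.0 - 1.0 / r ** (gamma - 1.0)   # about 0.60 for r = 10

# Power estimate for a one-litre cylinder at 3000 RPM, as in the text
R, T, P, V = 286.0, 293.0, 101.3e3, 0.001    # J/(kg*K), K, Pa, m^3
mass = P * V / (R * T)                       # about 0.00121 kg
work_per_stroke = 1.0 * mass                 # 1 J per kg of gas, per the example
power = 25 * work_per_stroke                 # 25 work strokes per second -> ~0.03 W

print(efficiency, eta_ideal, mass, power)
```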
[ { "math_id": 0, "text": "(P_3/P_2)" }, { "math_id": 1, "text": "V_4/V_3" }, { "math_id": 2, "text": "V_1/V_2" }, { "math_id": 3, "text": "\\Delta E = E_\\text{in} - E_\\text{out} = 0" }, { "math_id": 4, "text": "E_\\text{in}" }, { "math_id": 5, "text": "E_\\text{out}" }, { "math_id": 6, "text": "W_{1-2} + Q_{2-3} + W_{3-4} + Q_{4-1} = 0" }, { "math_id": 7, "text": "W_{1-2} = U_2 - U_1" }, { "math_id": 8, "text": "Q_{2-3} = U_3 - U_2" }, { "math_id": 9, "text": "W_{3-4} = U_4 - U_3" }, { "math_id": 10, "text": "Q_{4-1} = U_1 - U_4" }, { "math_id": 11, "text": "W_{1-2} + Q_{2-3} + W_{3-4} + Q_{4-1} = \\left(U_2 - U_1\\right) + \\left(U_3 - U_2\\right) + \\left(U_4 - U_3\\right) + \\left(U_1 - U_4\\right) = 0" }, { "math_id": 12, "text": "U_1 = 1" }, { "math_id": 13, "text": "U_2 = 5" }, { "math_id": 14, "text": "U_3 = 9" }, { "math_id": 15, "text": "U_4 = 4" }, { "math_id": 16, "text": "\\left(U_2 - U_1\\right) = \\left(5 - 1\\right) = 4" }, { "math_id": 17, "text": "\\left({U_3 - U_2}\\right) = \\left(9 - 5\\right) = 4" }, { "math_id": 18, "text": "\\left(U_4 - U_3\\right) = \\left(4 - 9\\right) = -5" }, { "math_id": 19, "text": "\\left(U_1 - U_4\\right) = \\left(1 - 4\\right) = -3" }, { "math_id": 20, "text": "\\Delta E = + 4 + 4 - 5 - 3 = 0" }, { "math_id": 21, "text": "\\sum \\text{Work} = W_{1-2} + W_{3-4} = \\left(U_2 - U_1\\right) + \\left(U_4 - U_3\\right) = 4 - 5 = -1" }, { "math_id": 22, "text": "\\sum \\text{Heat} = Q_{2-3} + Q_{4-1} = \\left(U_3 - U_2\\right) + \\left(U_1 - U_4\\right) = 4 -3 = 1" }, { "math_id": 23, "text": "\\eta = \\frac{W_{1-2} + W_{3-4} }{Q_{2-3}} = \\frac{\\left(U_2 - U_1\\right) + \\left(U_4 - U_3\\right)}{ \\left(U_3 - U_2\\right)}" }, { "math_id": 24, "text": "\\eta =1+\\frac{U_1 - U_4 }{ \\left(U_3 - U_2\\right)} = 1+\\frac{(1-4)}{ (9-5)} = 0.25 " }, { "math_id": 25, "text": "\\eta=\\frac{Q_{2-3} + Q_{4-1}}{Q_{2-3}}\n=1+\\frac{\\left(U_1-U_4\\right) }{ \\left(U_3-U_2\\right)} " }, { "math_id": 26, "text": "\\eta=1+\\frac{1-4}{9-5}=1+\\frac{-3}{4}=0.25" }, { "math_id": 27, "text": "C_\\text{v} = \\left(\\frac{\\delta u}{\\delta T}\\right)_\\text{v}" }, { "math_id": 28, "text": "\\delta u = (C_\\text{v})(\\delta T)" }, { "math_id": 29, "text": "\\eta = 1-\\left(\\frac{C_\\text{v}(T_4 - T_1)}{ C_\\text{v}(T_3 - T_2)}\\right)" }, { "math_id": 30, "text": "\\eta = 1-\\left(\\frac{T_1}{T_2}\\right)\\left(\\frac{T_4/T_1-1}{T_3/T_2-1}\\right)" }, { "math_id": 31, "text": "T_4/T_1 = T_3/T_2" }, { "math_id": 32, "text": "\\eta = 1-\\left(\\frac{T_1}{T_2}\\right)" }, { "math_id": 33, "text": "\\left(\\frac{T_2}{T_1}\\right)=\\left(\\frac{p_2}{p_1}\\right)^{(\\gamma-1)/{\\gamma}}" }, { "math_id": 34, "text": "\\left(\\frac{T_2}{T_1}\\right)=\\left(\\frac{V_1}{V_2}\\right)^{(\\gamma-1)}" }, { "math_id": 35, "text": "{\\gamma} = \\left(\\frac{C_\\text{p}}{C_\\text{v}}\\right)" }, { "math_id": 36, "text": "{\\gamma}" }, { "math_id": 37, "text": "R" }, { "math_id": 38, "text": "C_\\text{p} \\ln\\left(\\frac{V_1}{V_2}\\right) - R \\ln \\left(\\frac{p_2}{p_1}\\right) = 0" }, { "math_id": 39, "text": "C_\\text{v} \\ln\\left(\\frac{T_2}{T_1}\\right) - R \\ln \\left(\\frac{V_2}{V_1}\\right) = 0" }, { "math_id": 40, "text": "C_\\text{p} = \\left(\\frac{\\gamma R}{\\gamma-1}\\right)" }, { "math_id": 41, "text": "C_\\text{v} = \\left(\\frac{R}{\\gamma-1}\\right)" }, { "math_id": 42, "text": "r" }, { "math_id": 43, "text": "(V_1/V_2)" }, { "math_id": 44, "text": "\\left(\\frac{T_2}{T_1}\\right) = \\left(\\frac{V_1}{V_2}\\right)^{(\\gamma-1)} = r^{(\\gamma-1)}" }, { "math_id": 
45, "text": "\\eta = 1 - \\left(\\frac{1}{r^{(\\gamma-1)}}\\right)" }, { "math_id": 46, "text": "\\gamma" }, { "math_id": 47, "text": "\\eta" }, { "math_id": 48, "text": "\\gamma " }, { "math_id": 49, "text": "\\sum \\text{ Work} = W_{1-2} + W_{3-4} = \\left(U_2 - U_1\\right) + \\left(U_4 - U_3\\right) = +4 - 5 = -1" }, { "math_id": 50, "text": "M=\\frac{PV}{RT}" }, { "math_id": 51, "text": "\\sum \\text{ Work} = 1\\,\\text{J}/(\\text{kg}\\cdot\\text{stroke})\\times 0.00121\\,\\text{kg}= 0.00121\\,\\text{J}/\\text{stroke}" }, { "math_id": 52, "text": "P = 25 \\times 0.00121=0.0303\\,\\text{J}/\\text{s}\\; \\text{or} \\;\\text{W}" } ]
https://en.wikipedia.org/wiki?curid=147566
14757039
Community indifference curve
A community indifference curve is an illustration of different combinations of commodity quantities that would bring a whole community the same level of utility. The model can be used to describe any community, such as a town or an entire nation. In a community indifference curve, the indifference curves of all the individuals in the community are aggregated and held at an equal and constant level of utility. History. The concept was introduced by Tibor Scitovsky, a Hungarian-born economist, in 1941. Solving for a CIC. A community indifference curve (CIC) provides the set of all aggregate endowments formula_0 needed to achieve a given distribution of utilities, formula_1. The community indifference curve can be found by solving the following minimization problem: formula_2 CICs assume allocative efficiency among the members of the community; allocative efficiency requires that formula_3. The CIC is obtained by solving for formula_4 in terms of formula_5, yielding formula_6. Community indifference curves are an aggregate of individual indifference curves. References. Albouy, David. "Welfare Economics with a Full Production Economy." Economics 481, Fall 2007. Deardorff's Glossary of International Economics.
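As an added illustration (not part of the original article), one point of a CIC can be computed numerically for an assumed two-person Cobb–Douglas economy; the utility function and the numbers below are hypothetical choices made only for this sketch. Because utility is increasing in the second good, both utility constraints bind at the minimum, so it suffices to split the aggregate amount of the first good between the two people and give each just enough of the second good to reach the target utility.

```python
# Computing one point of a community indifference curve for an assumed
# two-person economy with Cobb-Douglas utilities U_i(x, y) = sqrt(x * y).

import numpy as np

def min_y_bar(x_bar, u1_bar, u2_bar, grid=10_000):
    """Smallest aggregate y consistent with target utilities (u1_bar, u2_bar)."""
    x1 = np.linspace(x_bar * 1e-4, x_bar * (1 - 1e-4), grid)   # person 1's share of x
    y1 = u1_bar**2 / x1                 # y needed by person 1 to reach u1_bar
    y2 = u2_bar**2 / (x_bar - x1)       # y needed by person 2 to reach u2_bar
    return float(np.min(y1 + y2))

x_bar = 4.0
print(min_y_bar(x_bar, 1.0, 1.0))       # about 1.0
# With these symmetric preferences the whole curve has the closed form
# y_bar = (u1_bar + u2_bar)**2 / x_bar, so the CIC is a rectangular hyperbola.
```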
[ { "math_id": 0, "text": "(\\bar{x}, \\bar{y}) = (x_1 + x_2, y_1, + y_2)" }, { "math_id": 1, "text": "(\\bar{u_1}, \\bar{u_2})" }, { "math_id": 2, "text": "\\min \\bar{y} \\text{ s.t. } U_1(x_1, y_1) \\geq \\bar{u_1} \\text{ and } U_2(\\bar{x}, \\bar{y} - 1) \\geq \\bar{u_2} " }, { "math_id": 3, "text": " MRS_1 xy = MRS_2 xy" }, { "math_id": 4, "text": "\\bar{y}" }, { "math_id": 5, "text": "\\bar{x}" }, { "math_id": 6, "text": "y_{cic}(\\bar{x})" } ]
https://en.wikipedia.org/wiki?curid=14757039
14757077
CD48
Protein-coding gene in humans CD48 antigen (cluster of differentiation 48), also known as B-lymphocyte activation marker (BLAST-1) or signaling lymphocytic activation molecule 2 (SLAMF2), is a protein that in humans is encoded by the CD48 gene. CD48 is a member of the CD2 subfamily of the immunoglobulin superfamily (IgSF), which includes SLAM (signaling lymphocyte activation molecule) proteins such as CD84, CD150, CD229 and CD244. CD48 is found on the surface of lymphocytes and other immune cells, dendritic cells and endothelial cells, and participates in activation and differentiation pathways in these cells. CD48 was the first B-cell-specific cellular differentiation antigen identified in transformed B lymphoblasts. Structure. The gene for CD48 is located on chromosome 1q23 and contains 4 exons, each exon encoding one of the 4 domains of CD48: the signal peptide, the variable (V) domain, the constant 2 (C2) domain and the glycophosphatidylinositol (GPI) anchor. The cDNA sequence of 1137 nucleotides encodes a 243 amino acid polypeptide of about 45 kDa. It consists of a 26 amino acid signal peptide, 194 amino acids of mature CD48 (the V and C2 domains) and a C-terminal 23 amino acid segment comprising the GPI anchor. The GPI linkage of CD48 to the cell surface is through serine residue 220. CD48 does not have a transmembrane domain but is held at the cell surface by a GPI anchor via a C-terminal domain, which can be cleaved to yield a soluble form of the receptor. The CD48 protein is heavily glycosylated, with five possible asparagine-linked glycosylation sites at positions 40, 44, 104, 162 and 189. Approximately 35–40% of the total molecular weight is attributed to the carbohydrate side chains. Interactions. CD48 was found to have a very low affinity for CD2, with a dissociation constant (formula_0) &lt; 0.5 mM. The preferred ligand of CD48 is 2B4 (CD244), which is also a member of the CD2 (SLAM) subfamily of the IgSF and is expressed on natural killer (NK) cells and other leukocytes. The affinity of CD244 for CD48 is formula_0 = 8 μM, which is about 5–10 times stronger than that for CD2. Function. Cell distribution. CD48 is expressed on all peripheral blood lymphocytes (PBL), including T cells, B cells, NK cells and thymocytes. It is also found on the surface of activated T cells, mast cells, monocytes and granulocytes. Like other GPI-anchored proteins (GPI-APs), CD48 is deficient in erythrocytes (red blood cells). T-cell activation. Molecular coupling of CD48 and CD2, together with other interaction pairs (CD28 and CD80, TCR and peptide-MHC, and LFA-1 and ICAM-1), contributes to the formation of an immunological synapse between a T cell and an antigen-presenting cell. CD48 interaction with CD2 has been shown to promote lipid raft formation, T cell activation and the formation of caveolae in macrophages through cell signal transduction via GPI moieties. Clinical significance. CD48 is being investigated, among other markers, in research on inflammation and on therapies for HIV/AIDS. A heterozygous germline mutation in a patient was associated with a recurrent inflammatory syndrome resembling hemophagocytic lymphohistiocytosis. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt; External links. "This article incorporates text from the United States National Library of Medicine, which is in the public domain."
[ { "math_id": 0, "text": "K_{D}" } ]
https://en.wikipedia.org/wiki?curid=14757077
14758305
Pole splitting
Pole splitting is a phenomenon exploited in some forms of frequency compensation used in an electronic amplifier. When a capacitor is introduced between the input and output sides of the amplifier with the intention of moving the pole lowest in frequency (usually an input pole) to lower frequencies, pole splitting causes the pole next in frequency (usually an output pole) to move to a higher frequency. This pole movement increases the stability of the amplifier and improves its step response at the cost of decreased speed. Example of pole splitting. This example shows that introduction of the capacitor referred to as CC in the amplifier of Figure 1 has two results: first it causes the lowest frequency pole of the amplifier to move still lower in frequency and second, it causes the higher pole to move higher in frequency. The amplifier of Figure 1 has a low frequency pole due to the added input resistance "Ri" and capacitance "Ci", with the time constant "Ci" ( "RA || Ri" ). This pole is moved down in frequency by the Miller effect. The amplifier is given a high frequency output pole by addition of the load resistance "RL" and capacitance "CL", with the time constant "CL" (" Ro || RL" ). The upward movement of the high-frequency pole occurs because the Miller-amplified compensation capacitor "CC" alters the frequency dependence of the output voltage divider. The first objective, to show the lowest pole moves down in frequency, is established using the same approach as the Miller's theorem article. Following the procedure described in the article on Miller's theorem, the circuit of Figure 1 is transformed to that of Figure 2, which is electrically equivalent to Figure 1. Application of Kirchhoff's current law to the input side of Figure 2 determines the input voltage formula_0 to the ideal op amp as a function of the applied signal voltage formula_1, namely, formula_2 which exhibits a roll-off with frequency beginning at "f1" where formula_3 which introduces notation formula_4 for the time constant of the lowest pole. This frequency is lower than the initial low frequency of the amplifier, which for "CC" = 0 F is formula_5. Turning to the second objective, showing the higher pole moves still higher in frequency, it is necessary to look at the output side of the circuit, which contributes a second factor to the overall gain, and additional frequency dependence. The voltage formula_6 is determined by the gain of the ideal op amp inside the amplifier as formula_7 Using this relation and applying Kirchhoff's current law to the output side of the circuit determines the load voltage formula_8 as a function of the voltage formula_9 at the input to the ideal op amp as: formula_10formula_11 This expression is combined with the gain factor found earlier for the input side of the circuit to obtain the overall gain as formula_12 formula_13formula_14formula_15 This gain formula appears to show a simple two-pole response with two time constants. (It also exhibits a zero in the numerator but, assuming the amplifier gain "Av" is large, this zero is important only at frequencies too high to matter in this discussion, so the numerator can be approximated as unity.) However, although the amplifier does have a two-pole behavior, the two time-constants are more complicated than the above expression suggests because the Miller capacitance contains a buried frequency dependence that has no importance at low frequencies, but has considerable effect at high frequencies. 
That is, assuming the output "R-C" product, "CL" ( "Ro || RL" ), corresponds to a frequency well above the low frequency pole, the accurate form of the Miller capacitance must be used, rather than the Miller approximation. According to the article on Miller effect, the Miller capacitance is given by formula_16 (For a positive Miller capacitance, "Av" is negative.) Upon substitution of this result into the gain expression and collecting terms, the gain is rewritten as: formula_17 with "Dω" given by a quadratic in ω, namely: formula_18 formula_19 formula_20 formula_21 formula_22 formula_23 Every quadratic has two factors, and this expression looks simpler if it is rewritten as formula_24 formula_25 where formula_4 and formula_26 are combinations of the capacitances and resistances in the formula for "Dω". They correspond to the time constants of the two poles of the amplifier. One or the other time constant is the longest; suppose formula_4 is the longest time constant, corresponding to the lowest pole, and suppose formula_4 » formula_26. (Good step response requires formula_4 » formula_26. See Selection of CC below.) At low frequencies near the lowest pole of this amplifier, ordinarily the linear term in ω is more important than the quadratic term, so the low frequency behavior of "Dω" is: formula_27 where now "CM" is redefined using the Miller approximation as formula_28 which is simply the previous Miller capacitance evaluated at low frequencies. On this basis formula_4 is determined, provided formula_4 » formula_26. Because "CM" is large, the time constant formula_29 is much larger than its original value of "Ci" ( "RA || Ri" ). At high frequencies the quadratic term becomes important. Assuming the above result for formula_4 is valid, the second time constant, the position of the high frequency pole, is found from the quadratic term in "Dω" as formula_30 Substituting in this expression the quadratic coefficient corresponding to the product formula_31 along with the estimate for formula_4, an estimate for the position of the second pole is found: formula_32 and because "CM" is large, it seems formula_26 is reduced in size from its original value "CL" ( "Ro" || "RL" ); that is, the higher pole has moved still higher in frequency because of "CC". In short, introduction of capacitor "CC" moved the low pole lower and the high pole higher, so the term pole splitting seems a good description. Selection of CC. What value is a good choice for "CC"? For general purpose use, traditional design (often called "dominant-pole" or "single-pole compensation") requires the amplifier gain to drop at 20 dB/decade from the corner frequency down to 0 dB gain, or even lower. With this design the amplifier is stable and has near-optimal step response even as a unity gain voltage buffer. A more aggressive technique is two-pole compensation. The way to position "f"2 to obtain the design is shown in Figure 3. At the lowest pole "f"1, the Bode gain plot breaks slope to fall at 20 dB/decade. The aim is to maintain the 20 dB/decade slope all the way down to zero dB, and taking the ratio of the desired drop in gain (in dB) of 20 log10 "Av" to the required change in frequency (on a log frequency scale) of ( log10 "f"2  − log10 "f"1 ) = log10 ( "f"2 / "f"1 ) the slope of the segment between "f"1 and "f"2 is: Slope per decade of frequency formula_33 which is 20 dB/decade provided "f2 = Av f1" . 
If "f2" is not this large, the second break in the Bode plot that occurs at the second pole interrupts the plot before the gain drops to 0 dB with consequent lower stability and degraded step response. Figure 3 shows that to obtain the correct gain dependence on frequency, the second pole is at least a factor "Av" higher in frequency than the first pole. The gain is reduced a bit by the voltage dividers at the input and output of the amplifier, so with corrections to "Av" for the voltage dividers at input and output the pole-ratio condition for good step response becomes: formula_34 Using the approximations for the time constants developed above, formula_35 or formula_36 formula_37 which provides a quadratic equation to determine an appropriate value for "CC". Figure 4 shows an example using this equation. At low values of gain this example amplifier satisfies the pole-ratio condition without compensation (that is, in Figure 4 the compensation capacitor "CC" is small at low gain), but as gain increases, a compensation capacitance rapidly becomes necessary (that is, in Figure 4 the compensation capacitor "CC" increases rapidly with gain) because the necessary pole ratio increases with gain. For still larger gain, the necessary "CC" drops with increasing gain because the Miller amplification of "CC", which increases with gain (see the Miller equation ), allows a smaller value for "CC". To provide more safety margin for design uncertainties, often "Av" is increased to two or three times "Av" on the right side of this equation. See Sansen or Huijsing and article on step response. Slew rate. The above is a small-signal analysis. However, when large signals are used, the need to charge and discharge the compensation capacitor adversely affects the amplifier slew rate; in particular, the response to an input ramp signal is limited by the need to charge "CC". References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ v_i" }, { "math_id": 1, "text": "\\ v_a" }, { "math_id": 2, "text": "\n\n \\frac {v_i} {v_a} = \\frac {R_i} {R_i+R_A} \\frac {1} {1+j \\omega (C_M+C_i) (R_A\\|R_i)} \\ ," }, { "math_id": 3, "text": "\n\n\\begin{align} \n f_{1} & = \\frac {1} {2 \\pi (C_M+C_i)(R_A\\|R_i) } \\\\\n & = \\frac {1} {2 \\pi \\tau_1} \\ , \\\\\n\\end{align}\n\n" }, { "math_id": 4, "text": "\\tau_1" }, { "math_id": 5, "text": "\\frac {1} {2 \\pi C_i (R_A\\|R_i)}" }, { "math_id": 6, "text": "\\ v_o" }, { "math_id": 7, "text": "\\ v_o = A_v v_i \\ . " }, { "math_id": 8, "text": "v_{\\ell}" }, { "math_id": 9, "text": "\\ v_{i}" }, { "math_id": 10, "text": " \\frac {v_{\\ell}} {v_i} = A_v \\frac {R_L} {R_L+R_o}\\,\\!" }, { "math_id": 11, "text": "\\sdot \\frac {1+j \\omega C_C R_o/A_v } {1+j \\omega (C_L + C_C ) (R_o\\|R_L) } \\ . " }, { "math_id": 12, "text": "\n\n\\frac {v_{\\ell}} {v_a} = \\frac {v_{\\ell}}{v_i} \\frac {v_i} {v_a} \n" }, { "math_id": 13, "text": "= A_v \\frac {R_i} {R_i+R_A}\\sdot \\frac {R_L} {R_L+R_o}\\,\\! " }, { "math_id": 14, "text": " \\sdot \\frac {1} {1+j \\omega (C_M+C_i) (R_A\\|R_i)} \\,\\! " }, { "math_id": 15, "text": " \\sdot \\frac {1+j \\omega C_C R_o/A_v } {1+j \\omega (C_L + C_C ) (R_o\\|R_L) } \\ . " }, { "math_id": 16, "text": "\n\\begin{align}\nC_M & = C_C \\left( 1 - \\frac {v_{\\ell}} {v_i} \\right) \\\\\n & = C_C \\left( 1 - A_v \\frac {R_L} {R_L+R_o} \\frac {1+j \\omega C_C R_o/A_v } {1+j \\omega (C_L + C_C ) (R_o\\|R_L) } \\right ) \\ . \\\\\n\\end{align}\n" }, { "math_id": 17, "text": " \\frac {v_{\\ell}} {v_a} = A_v \\frac {R_i} {R_i+R_A} \\frac {R_L} {R_L+R_o} \\frac {1+j \\omega C_C R_o/A_v } {D_{ \\omega }} \\ , " }, { "math_id": 18, "text": "D_{ \\omega }\\,\\!" }, { "math_id": 19, "text": " = [1+j \\omega (C_L+C_C) (R_o\\|R_L)] \\,\\!" }, { "math_id": 20, "text": " \\sdot \\ [ 1+j \\omega C_i (R_A\\|R_i)] \\,\\!" }, { "math_id": 21, "text": " \\ +j \\omega C_C (R_A\\|R_i)\\,\\! " }, { "math_id": 22, "text": "\\sdot \\left( 1-A_v \\frac {R_L} {R_L+R_O} \\right) \\,\\!" }, { "math_id": 23, "text": "\\ +(j \\omega) ^2 C_C C_L (R_A\\|R_i) (R_O\\|R_L) \\ . " }, { "math_id": 24, "text": "\n\\ D_{ \\omega } =(1+j \\omega { \\tau}_1 )(1+j \\omega { \\tau}_2 ) " }, { "math_id": 25, "text": " = 1 + j \\omega ( {\\tau}_1+{\\tau}_2) ) +(j \\omega )^2 \\tau_1 \\tau_2 \\ , \\ " }, { "math_id": 26, "text": "\\tau_2" }, { "math_id": 27, "text": "\n\\begin{align}\n\\ D_{ \\omega } & = 1+ j \\omega [(C_M+C_i) (R_A\\|R_i) +(C_L+C_C) (R_o\\|R_L)] \\\\\n & = 1+j \\omega ( \\tau_1 + \\tau_2) \\approx 1 + j \\omega \\tau_1 \\ , \\ \\\\\n\\end{align}\n" }, { "math_id": 28, "text": " C_M= C_C \\left( 1 - A_v \\frac {R_L}{R_L+R_o} \\right) \\ ," }, { "math_id": 29, "text": "{\\tau}_1" }, { "math_id": 30, "text": " \\tau_2 = \\frac {\\tau_1 \\tau_2} {\\tau_1} \\approx \\frac {\\tau_1 \\tau_2} {\\tau_1 + \\tau_2}\\ . 
" }, { "math_id": 31, "text": "\\tau_1 \\tau_2 " }, { "math_id": 32, "text": "\n\\begin{align}\n \\tau_2 & = \\frac {(C_C C_L +C_L C_i+C_i C_C)(R_A\\|R_i) (R_O\\|R_L) } {(C_M+C_i) (R_A\\|R_i) +(C_L+C_C) (R_o\\|R_L)} \\\\\n & \\approx \\frac {C_C C_L +C_L C_i+C_i C_C} {C_M} (R_O\\|R_L)\\ , \\\\\n\\end{align}\n" }, { "math_id": 33, "text": "=20 \\frac {\\mathrm{log_{10}} ( A_v )} {\\mathrm{log_{10}} (f_2 / f_1 ) } \\ ," }, { "math_id": 34, "text": " \\frac {\\tau_1} {\\tau_2} \\approx A_v \\frac {R_i} {R_i+R_A}\\sdot \\frac {R_L} {R_L+R_o} \\ , " }, { "math_id": 35, "text": " \\frac {\\tau_1} {\\tau_2} \\approx \\frac {(\\tau_1 +\\tau_2 ) ^2} {\\tau_1 \\tau_2} \\approx A_v \\frac {R_i} {R_i+R_A}\\sdot \\frac {R_L} {R_L+R_o} \\ ," }, { "math_id": 36, "text": " \\frac {[(C_M+C_i) (R_A\\|R_i) +(C_L+C_C) (R_o\\|R_L)]^2} {(C_C C_L +C_L C_i+C_i C_C)(R_A\\|R_i) (R_O\\|R_L) } \\,\\! " }, { "math_id": 37, "text": "{\\color{White}\\sdot} = A_v \\frac {R_i} {R_i+R_A}\\sdot \\frac {R_L} {R_L+R_o} \\ ," } ]
https://en.wikipedia.org/wiki?curid=14758305
14758355
Multinomial probit
In statistics and econometrics, the multinomial probit model is a generalization of the probit model used when there are several possible categories that the dependent variable can fall into. As such, it is an alternative to the multinomial logit model as one method of multiclass classification. It is not to be confused with the "multivariate" probit model, which is used to model correlated binary outcomes for more than one dependent variable. General specification. It is assumed that we have a series of observations "Y""i", for "i" = 1..."n", of the outcomes of multi-way choices from a categorical distribution of size "m" (there are "m" possible choices). Along with each observation "Y""i" is a set of "k" observed values "x""1,i", ..., "x""k,i" of explanatory variables (also known as independent variables, predictor variables, features, etc.). Some examples of such choices are the political candidate a person votes for among several candidates, the mode of transport a commuter chooses, and the brand of a product a consumer buys. The multinomial probit model is a statistical model that can be used to predict the likely outcome of an unobserved multi-way trial given the associated explanatory variables. In the process, the model attempts to explain the relative effect of differing explanatory variables on the different outcomes. Formally, the outcomes "Y""i" are described as being categorically-distributed data, where each outcome value "h" for observation "i" occurs with an unobserved probability "p""i,h" that is specific to the observation "i" at hand because it is determined by the values of the explanatory variables associated with that observation. That is: formula_0 or equivalently formula_1 for each of "m" possible values of "h". Latent variable model. Multinomial probit is often written in terms of a latent variable model: formula_2 where formula_3 Then formula_4 That is, formula_5 Note that this model allows for arbitrary correlation between the error variables, so that it does not necessarily respect independence of irrelevant alternatives. When formula_6 is the identity matrix (such that there is no correlation or heteroscedasticity), the model is called independent probit. Estimation. For details on how the equations are estimated, see the article Probit model.
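The latent-variable formulation can be simulated directly. The following sketch is an added illustration (not part of the original article); the coefficient vectors and the error covariance matrix are hypothetical values chosen only to demonstrate drawing correlated errors and recording the alternative with the highest latent utility.

```python
# Simulate choices from a multinomial probit latent-variable model.

import numpy as np

rng = np.random.default_rng(0)

n, k, m = 1000, 2, 3                     # observations, regressors, alternatives
X = rng.normal(size=(n, k))

# One coefficient vector per alternative (hypothetical values)
beta = np.array([[ 0.5, -0.2],
                 [-0.3,  0.8],
                 [ 0.1,  0.1]])

# Correlated errors across alternatives, so no IIA restriction is imposed
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
eps = rng.multivariate_normal(np.zeros(m), Sigma, size=n)

latent = X @ beta.T + eps                # n x m matrix of latent utilities
Y = latent.argmax(axis=1)                # observed choice = alternative with highest utility

print(np.bincount(Y, minlength=m) / n)   # empirical choice shares
```

Setting the covariance matrix to the identity reproduces the independent probit special case mentioned above.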
[ { "math_id": 0, "text": "Y_i|x_{1,i},\\ldots,x_{k,i} \\ \\sim \\operatorname{Categorical}(p_{i,1},\\ldots,p_{i,m}),\\text{ for }i = 1, \\dots , n" }, { "math_id": 1, "text": "\\Pr[Y_i=h|x_{1,i},\\ldots,x_{k,i}] = p_{i,h},\\text{ for }i = 1, \\dots , n," }, { "math_id": 2, "text": "\n\\begin{align}\nY_i^{1\\ast} &= \\boldsymbol\\beta_1 \\cdot \\mathbf{X}_i + \\varepsilon_1 \\, \\\\\nY_i^{2\\ast} &= \\boldsymbol\\beta_2 \\cdot \\mathbf{X}_i + \\varepsilon_2 \\, \\\\\n\\ldots & \\ldots \\\\\nY_i^{m\\ast} &= \\boldsymbol\\beta_m \\cdot \\mathbf{X}_i + \\varepsilon_m \\, \\\\\n\\end{align}\n" }, { "math_id": 3, "text": "\\boldsymbol\\varepsilon \\sim \\mathcal{N}(0,\\boldsymbol\\Sigma)" }, { "math_id": 4, "text": " Y_i = \\begin{cases}\n1 & \\text{if }Y_i^{1\\ast} > Y_i^{2\\ast},\\ldots,Y_i^{m\\ast} \\\\\n2 & \\text{if }Y_i^{2\\ast} > Y_i^{1\\ast},Y_i^{3\\ast},\\ldots,Y_i^{m\\ast} \\\\\n\\ldots & \\ldots \\\\\nm &\\text{otherwise.} \\end{cases} " }, { "math_id": 5, "text": " Y_i = \\arg\\max_{h=1}^m Y_i^{h\\ast}" }, { "math_id": 6, "text": "\\scriptstyle\\boldsymbol\\Sigma" } ]
https://en.wikipedia.org/wiki?curid=14758355
14758602
Rabin fairness
Rabin fairness is a fairness model invented by Matthew Rabin. It goes beyond the standard assumptions in modeling behavior, rationality and self-interest, to incorporate fairness. Rabin's fairness model incorporates findings from the economics and psychology fields to provide an alternative utility model. Fairness is one type of social preference. Including fairness in the standard utility model. Past utility models incorporated altruism, or the fact that people may care not only about their own well-being, but also about the well-being of others. However, evidence indicates that pure altruism is rare; instead, most altruistic behavior exhibits three facts (as defined by Rabin) that are borne out by observed behavior. Because of these three facts, Rabin created a utility function that incorporates fairness. Rabin's fairness model. Rabin formalized fairness using a modified two-person game in which each player has two actions (a two-by-two payoff matrix), where i denotes the player whose utility is being measured; the matrix assigns a material payoff to each player for every pair of actions. The following formula was created by Rabin to model utility in a way that includes fairness: formula_0 Where: Fairness model implications. The fairness model implies that if player j is treating player i badly, that is, if formula_13, then player i wishes to treat player j badly as well, by choosing an action, ai, that is low or negative. However, if player j is treating player i kindly, formula_14, then player i will act kindly towards player j as well (for more in-depth examples, see Rabin (1993)). Welfare and fairness: an application. Rabin also used the fairness model as a utility function to assess social welfare. He illustrated it with a game-theoretic "Grabbing Game", which posits that two people are shopping and only two cans of soup are left. The payoffs are given as follows, where player i's payoff is on the left of each pair, player j's payoff is on the right, and x denotes the value of one can of soup: if both grab or both share, the payoffs are (x, x); if i grabs while j shares, they are (2x, 0); and if j grabs while i shares, they are (0, 2x). In other words, if both grab or both share, players i and j each get one can of soup, while if one grabs and the other does not, the person who grabbed gets both cans. The game has a Nash equilibrium at (grab, grab). Moreover, under Rabin's fairness model, (grab, grab) is always a fairness equilibrium, but for small values of x the cooperative choice (share, share) is also a fairness equilibrium and Pareto-dominates (grab, grab). The reasoning is that if the two people both grab for, and therefore fight over, the cans, the anger and bad feeling that arise are likely to outweigh the value of obtaining the cans. Therefore, while (grab, grab) and (share, share) are both fairness equilibria when material payoffs are small, (share, share) dominates (grab, grab), since people are affected by the kindness of others, which increases utility, and by their unkindness, which decreases it. This example can be generalized further to describe the allocation of public goods. Public goods provision and fairness. Stouten (2006) further generalized the principle of fairness to apply to the provision of public goods. He and his colleagues ran three experiments to examine how participants reacted when one member of their group violated the equality rule, which states that all group members will coordinate to contribute equally and fairly to the efficient provision of public goods. 
Their findings demonstrated that participants believed the equality rule should apply to others as well, and that when one person violated the rule, that person was punished in the form of negative reactions. In real-life situations, the equality rule should therefore lead to the efficient provision of public goods, provided that violations of the coordination and fairness rules can be detected. Often, however, such violations cannot be detected, which leads to the free-rider problem and an under-provision of public goods. References. <templatestyles src="Reflist/styles.css" />
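To make the welfare comparison concrete, the following Python sketch evaluates Rabin's utility function on the grabbing game. It is an illustration only: the payoff table is the reconstruction described above (each can worth x), the "high" and "low" payoffs in the kindness functions range over all of the other player's feasible outcomes rather than only the Pareto-efficient ones (the two coincide in this game), and only symmetric strategy profiles with correct beliefs are checked.

```python
ACTIONS = ("grab", "share")

def grabbing_game(x):
    """Payoffs (to i, to j) for each pair (a_i, b_j); x is the value of one can of soup."""
    return {
        ("grab", "grab"):   (x, x),
        ("grab", "share"):  (2 * x, 0.0),
        ("share", "grab"):  (0.0, 2 * x),
        ("share", "share"): (x, x),
    }

def kindness_of_i(payoffs, a_i, b_j):
    """f_i(a_i, b_j): how kindly i's action treats j, given j's strategy b_j."""
    pj = [payoffs[(a, b_j)][1] for a in ACTIONS]      # j's possible payoffs given b_j
    high, low = max(pj), min(pj)
    if high == low:
        return 0.0                                    # i cannot affect j's payoff
    equitable = (high + low) / 2
    return (payoffs[(a_i, b_j)][1] - equitable) / (high - low)

def believed_kindness_of_j(payoffs, b_j, c_i):
    """f~_j(b_j, c_i): how kind i believes j is being, given i's believed action c_i."""
    pi = [payoffs[(c_i, b)][0] for b in ACTIONS]      # i's possible payoffs given c_i
    high, low = max(pi), min(pi)
    if high == low:
        return 0.0
    equitable = (high + low) / 2
    return (payoffs[(c_i, b_j)][0] - equitable) / (high - low)

def utility_i(payoffs, a_i, b_j, c_i):
    """Rabin's fairness-adjusted utility U_i(a_i, b_j, c_i)."""
    material = payoffs[(a_i, b_j)][0]
    return material + believed_kindness_of_j(payoffs, b_j, c_i) * (1 + kindness_of_i(payoffs, a_i, b_j))

def symmetric_profile_is_equilibrium(payoffs, a):
    """At the symmetric profile (a, a) with correct beliefs, check that a is a best
    response for player i; by symmetry of the game this suffices."""
    u_eq = utility_i(payoffs, a, a, a)
    return all(utility_i(payoffs, dev, a, a) <= u_eq for dev in ACTIONS)

for x in (0.25, 5.0):
    g = grabbing_game(x)
    for a in ACTIONS:
        print(f"x={x}: ({a}, {a}) equilibrium={symmetric_profile_is_equilibrium(g, a)}, "
              f"U_i={utility_i(g, a, a, a):.2f}")
```

With these numbers, both (grab, grab) and (share, share) come out as fairness equilibria when x = 0.25, with (share, share) yielding the higher utility, while for x = 5 only (grab, grab) survives as an equilibrium, consistent with the discussion above.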
[ { "math_id": 0, "text": "U_i(a_i, b_j, c_i) = \\pi_i(a_i, b_j) + \\tilde{f}_j(b_j, c_i) * [1+f_i(a_i, b_j)]." }, { "math_id": 1, "text": "\\pi_i(a_i, b_j)" }, { "math_id": 2, "text": "f_i(a_i, b_j) = [\\pi_j(b_j, a_i) - \\pi_j^e(b_j)]/[\\pi_j^h(b_j) - \\pi_j^\\text{min}(b_j)]" }, { "math_id": 3, "text": "\\pi_j^e(b_j)= [\\pi_j^h(b_j)+ \\pi_j^l(b_j)]/2," }, { "math_id": 4, "text": "\\pi_j^h(b_j)" }, { "math_id": 5, "text": "\\pi_j^l(b_j)" }, { "math_id": 6, "text": "\\pi_j^\\text{min}(b_j)" }, { "math_id": 7, "text": "\\tilde{f}_j(b_j, c_i) = [\\pi_i(c_i, b_j) - \\pi_i^e(c_i)]/[\\pi_i^h(c_i) - \\pi_i^\\text{min}(c_i)]" }, { "math_id": 8, "text": "\\pi_i^\\text{min}(c_i)" }, { "math_id": 9, "text": "\\pi_i^e(c_i)= [\\pi_i^h(c_i)+ \\pi_i^l(c_i)]/2" }, { "math_id": 10, "text": "\\pi_i^h(c_i)" }, { "math_id": 11, "text": "\\pi_{i^l}(c_i)" }, { "math_id": 12, "text": "U_i(a_i, b_j, c_i)" }, { "math_id": 13, "text": "f_j(b_j, c_i) < 0" }, { "math_id": 14, "text": "f_j(b_j, c_i) > 0" } ]
https://en.wikipedia.org/wiki?curid=14758602
1475894
Malfatti circles
Three tangent circles in a triangle In geometry, the Malfatti circles are three circles inside a given triangle such that each circle is tangent to the other two and to two sides of the triangle. They are named after Gian Francesco Malfatti, who made early studies of the problem of constructing these circles in the mistaken belief that they would have the largest possible total area of any three disjoint circles within the triangle. Malfatti's problem has been used to refer both to the problem of constructing the Malfatti circles and to the problem of finding three area-maximizing circles within a triangle. A simple construction of the Malfatti circles was given by Steiner (1826), and many mathematicians have since studied the problem. Malfatti himself supplied a formula for the radii of the three circles, and they may also be used to define two triangle centers, the Ajima–Malfatti points of a triangle. The problem of maximizing the total area of three circles in a triangle is never solved by the Malfatti circles. Instead, the optimal solution can always be found by a greedy algorithm that finds the largest circle within the given triangle, the largest circle within the three connected subsets of the triangle outside of the first circle, and the largest circle within the five connected subsets of the triangle outside of the first two circles. Although this procedure was first formulated in 1930, its correctness was not proven until 1994. Malfatti's problem. <templatestyles src="Unsolved/styles.css" /> Unsolved problem in mathematics: Does the greedy algorithm always find area-maximizing packings of more than three circles in any triangle? Gian Francesco Malfatti (1803) posed the problem of cutting three cylindrical columns out of a triangular prism of marble, maximizing the total volume of the columns. He assumed that the solution to this problem was given by three tangent circles within the triangular cross-section of the wedge. That is, more abstractly, he conjectured that the three Malfatti circles have the maximum total area of any three disjoint circles within a given triangle. Malfatti's work was popularized for a wider readership in French by Joseph Diaz Gergonne in the first volume of his "Annales" (1811), with further discussion in the second and tenth. However, Gergonne only stated the circle-tangency problem, not the area-maximizing one. Malfatti's assumption that the two problems are equivalent is incorrect. Lob and Richmond (1930), who went back to the original Italian text, observed that for some triangles a larger area can be achieved by a greedy algorithm that inscribes a single circle of maximal radius within the triangle, inscribes a second circle within one of the three remaining corners of the triangle, the one with the smallest angle, and inscribes a third circle within the largest of the five remaining pieces. The difference in area for an equilateral triangle is small, just over 1%, but as Howard Eves (1946) pointed out, for an isosceles triangle with a very sharp apex, the optimal circles (stacked one atop each other above the base of the triangle) have nearly twice the area of the Malfatti circles. In fact, the Malfatti circles are never optimal. It was discovered through numerical computations in the 1960s, and later proven rigorously, that the Lob–Richmond procedure always produces the three circles with largest area, and that these are always larger than the Malfatti circles. 
It has been conjectured more generally that, for any integer n, the greedy algorithm finds the area-maximizing set of n circles within a given triangle; the conjecture is known to be true for "n" ≤ 3. History. The problem of constructing three circles tangent to each other within a triangle was posed by the 18th-century Japanese mathematician Ajima Naonobu prior to the work of Malfatti, and included in an unpublished collection of Ajima's works made a year after Ajima's death by his student Kusaka Makoto. Even earlier, the same problem was considered in a 1384 manuscript by Gilio di Cecco da Montepulciano, now in the Municipal Library of Siena, Italy. Jacob Bernoulli (1744) studied a special case of the problem, for a specific isosceles triangle. Since the work of Malfatti, there has been a significant amount of work on methods for constructing Malfatti's three tangent circles; Richard K. Guy writes that the literature on the problem is "extensive, widely scattered, and not always aware of itself". Notably, Jakob Steiner (1826) presented a simple geometric construction based on bitangents; other authors have since claimed that Steiner's presentation lacked a proof, which was later supplied by Andrew Hart (1856), but Guy points to the proof scattered within two of Steiner's own papers from that time. Solutions based on algebraic formulations of the problem include those by C. L. Lehmus (1819), E. C. Catalan (1846), C. Adams (1846, 1849), J. Derousseau (1895), and Andreas Pampuch (1904). The algebraic solutions do not distinguish between internal and external tangencies among the circles and the given triangle; if the problem is generalized to allow tangencies of either kind, then a given triangle will have 32 different solutions and conversely a triple of mutually tangent circles will be a solution for eight different triangles. Later accounts credit the enumeration of these solutions to earlier work, while noting that the count of the number of solutions had already been given in a still earlier remark. The problem and its generalizations were the subject of many other 19th-century mathematical publications, and its history and mathematics have been the subject of ongoing study since then. It has also been a frequent topic in books on geometry. Historians of mathematics recount an episode in 19th-century Neapolitan mathematics related to the Malfatti circles. In 1839, Vincenzo Flauti, a synthetic geometer, posed a challenge involving the solution of three geometry problems, one of which was the construction of Malfatti's circles; his intention in doing so was to show the superiority of synthetic to analytic techniques. Despite a solution being given by Fortunato Padula, a student in a rival school of analytic geometry, Flauti awarded the prize to his own student, Nicola Trudi, whose solutions Flauti had known of when he posed his challenge. More recently, the problem of constructing the Malfatti circles has been used as a test problem for computer algebra systems. Steiner's construction. Although much of the early work on the Malfatti circles used analytic geometry, Steiner (1826) provided the following simple synthetic construction. A circle that is tangent to two sides of a triangle, as the Malfatti circles are, must be centered on one of the angle bisectors of the triangle (green in the figure). These bisectors partition the triangle into three smaller triangles, and Steiner's construction of the Malfatti circles begins by drawing a different triple of circles (shown dashed in the figure) inscribed within each of these three smaller triangles. 
In general these circles are disjoint, so each pair of circles has four bitangents (lines touching both). Two of these bitangents pass "between" their circles: one is an angle bisector, and the second is shown as a red dashed line in the figure. Label the three sides of the given triangle as a, b, and c, and label the three bitangents that are not angle bisectors as x, y, and z, where x is the bitangent to the two circles that do not touch side a, y is the bitangent to the two circles that do not touch side b, and z is the bitangent to the two circles that do not touch side c. Then the three Malfatti circles are the inscribed circles of the three tangential quadrilaterals "abyx", "aczx", and "bczy". In a symmetric case two of the dashed circles may touch at a point on a bisector, making two bitangents coincide there, but still setting up the relevant quadrilaterals for Malfatti's circles. The three bitangents x, y, and z cross the triangle sides at the point of tangency with the third inscribed circle, and may also be found as the reflections of the angle bisectors across the lines connecting pairs of centers of these incircles. Radius formula. The radius of each of the three Malfatti circles may be determined as a formula involving the three side lengths a, b, and c of the triangle, the inradius r, the semiperimeter formula_0, and the three distances d, e, and f from the incenter of the triangle to the vertices opposite sides a, b, and c respectively. The formulae for the three radii are: formula_1 Related formulae may be used to find examples of triangles whose side lengths, inradii, and Malfatti radii are all rational numbers or all integers. For instance, the triangle with side lengths 28392, 21000, and 25872 has inradius 6930 and Malfatti radii 3969, 4900, and 4356. As another example, the triangle with side lengths 152460, 165000, and 190740 has inradius 47520 and Malfatti radii 27225, 30976, and 32400. Ajima–Malfatti points. Given a triangle "ABC" and its three Malfatti circles, let "D", "E", and "F" be the points where two of the circles touch each other, opposite vertices "A", "B", and "C" respectively. Then the three lines "AD", "BE", and "CF" meet in a single triangle center known as the first Ajima–Malfatti point after the contributions of Ajima and Malfatti to the circle problem. The second Ajima–Malfatti point is the meeting point of three lines connecting the tangencies of the Malfatti circles with the centers of the excircles of the triangle. Other triangle centers also associated with the Malfatti circles include the Yff–Malfatti point, formed in the same way as the first Malfatti point from three mutually tangent circles that are all tangent to the lines through the sides of the given triangle, but that lie partially outside the triangle, and the radical center of the three Malfatti circles (the point where the three bitangents used in their construction meet). Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
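As a numerical check on the radius formula, the following Python sketch (an illustration only; the function and variable names are arbitrary) computes the Malfatti radii from the side lengths, obtaining the distances d, e, f from the inradius and the tangent lengths, and reproduces the first integer example quoted above. It then compares, for an equilateral triangle, the total Malfatti area with the area of the greedy packing described earlier, under the assumption that in the equilateral case the greedy procedure places the incircle followed by two equal circles in corners, each tangent to two sides and to the incircle.

```python
from math import sqrt, pi, sin

def malfatti_radii(a, b, c):
    """Radii (r1, r2, r3) of the Malfatti circles; r1 computed with side a, etc."""
    s = (a + b + c) / 2                                 # semiperimeter
    area = sqrt(s * (s - a) * (s - b) * (s - c))        # Heron's formula
    r = area / s                                        # inradius
    # Distance from the incenter to the vertex opposite side a: the tangent length
    # from that vertex is s - a, so the distance is sqrt(r^2 + (s - a)^2).
    d = sqrt(r * r + (s - a) ** 2)
    e = sqrt(r * r + (s - b) ** 2)
    f = sqrt(r * r + (s - c) ** 2)
    r1 = r / (2 * (s - a)) * (s - r + d - e - f)
    r2 = r / (2 * (s - b)) * (s - r - d + e - f)
    r3 = r / (2 * (s - c)) * (s - r - d - e + f)
    return r1, r2, r3

# Integer example from the text: expected radii 3969, 4900, 4356 (inradius 6930).
print(malfatti_radii(28392, 21000, 25872))

# Equilateral triangle: compare the total Malfatti area with the greedy packing
# (incircle plus two corner circles, each tangent to two sides and to the incircle).
side = 1.0
r = side / (2 * sqrt(3))                                # inradius
corner = r * (1 - sin(pi / 6)) / (1 + sin(pi / 6))      # corner-circle radius, r/3 here
greedy_area = pi * (r ** 2 + 2 * corner ** 2)
malfatti_area = pi * sum(rho ** 2 for rho in malfatti_radii(side, side, side))
print(greedy_area / malfatti_area)                      # about 1.014: greedy is slightly larger
```

The first print statement should return values numerically equal to the quoted integer radii, and the second shows the greedy packing exceeding the Malfatti circles by a small margin in the equilateral case, in line with the comparison described earlier.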
[ { "math_id": 0, "text": "s = (a + b + c)/2" }, { "math_id": 1, "text": "\n\\begin{align}\nr_1 &= \\frac{r}{2(s-a)}(s-r+d-e-f),\\\\\nr_2 &= \\frac{r}{2(s-b)}(s-r-d+e-f),\\\\\nr_3 &= \\frac{r}{2(s-c)}(s-r-d-e+f).\\\\\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1475894