URL (string, lengths 15–1.68k) | text_list (list, lengths 1–199) | image_list (list, lengths 1–199) | metadata (string, lengths 1.19k–3.08k) |
---|---|---|---|
https://encyclopedia2.thefreedictionary.com/regular+hexahedron | [
"# cube\n\n(redirected from regular hexahedron)\nAlso found in: Dictionary, Thesaurus, Wikipedia.\n\n## cube,\n\nin geometry, regular solid bounded by six equal squares. All adjacent faces of a cube are perpendicular to each other; any one face of a cube may be its base. The dimensions of a cube are the lengths of the three edges which meet at any vertex. The volume of a cube is equal to the product of its dimensions, and since its dimensions are equal, the volume is equal to the third power, or cube, of any one of its dimensions. Hence, in arithmetic and algebra, the cube of a number or letter is that number or letter raised to the third power. For example, the cube of 4 is 43=4×4×4=64. The problem of constructing a cube with a volume equal to twice that of a given cube using only a compass and a straightedge is known as the problem of the duplication of the cube and is one of the famous geometric problems of antiquitygeometric problems of antiquity,\nthree famous problems involving elementary geometric constructions with straight edge and compass, conjectured by the ancient Greeks to be impossible but not proved to be so until modern times.\n. The cube, or hexahedron, is one of only five regular polyhedra (see polyhedronpolyhedron\n, closed solid bounded by plane faces; each face of a polyhedron is a polygon. A cube is a polyhedron bounded by six polygons (in this case squares) meeting at right angles.\n).\n\n## Cube\n\nA solid figure, bounded by six squares, and hence also called a hexahedron.\n\n## Cube\n\n(1) One of five types of regular polyhedrons, having six square faces, 12 edges, and eight vertices; three mutually perpendicular edges meet at each vertex. A cube is sometimes called a hexahedron.\n\n(2) The cube of the number a is the third power of the number, that is, the product a • a • a = a3. It is so named because it expresses the volume of a cube whose edge is equal to a.\n\n## cube\n\n[kyüb]\n(mathematics)\nRegular polyhedron whose faces are all square.\nFor a number a, the new number obtained by taking the threefold product of a with itself: a × a × a.\n\n## cube\n\n1\n1. a solid having six plane square faces in which the angle between two adjacent sides is a right angle\n2. the product of three equal factors: the cube of 2 is 2 × 2 × 2 (usually written 23)\n\n## cube\n\n2\nany of various tropical American plants, esp any of the leguminous genus Lonchocarpus, the roots of which yield rotenone\n\n## Cube\n\n(1)\nThree-dimensional visual language for higher-order logic.\n\n\"The Cube Language\", M. Najork et al, 1991 IEEE Workshop on Visual Langs, Oct 1991, pp.218-224.\n\n## cube\n\n(2)\n[short for \"cubicle\"] A module in the open-plan offices used at many programming shops. \"I've got the manuals in my cube.\"\n\n## cube\n\n(3)\nA NeXT machine (which resembles a matte-black cube).\n\n## cube\n\n(1) See OLAP cube and OLAP.\n\n(2) Apple's earlier Cube computer. See G4.\nSite: Follow: Share:\nOpen / Close"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91543585,"math_prob":0.99233216,"size":2986,"snap":"2019-43-2019-47","text_gpt3_token_len":744,"char_repetition_ratio":0.1167002,"word_repetition_ratio":0.057361376,"special_character_ratio":0.24313463,"punctuation_ratio":0.13893376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975303,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T13:17:36Z\",\"WARC-Record-ID\":\"<urn:uuid:886c62eb-8f26-4fd9-a032-89502fda34fd>\",\"Content-Length\":\"45553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c15fc4d-ce56-48e1-af50-b23bb0799bdc>\",\"WARC-Concurrent-To\":\"<urn:uuid:66f5fc62-6861-495c-b869-d4bc3992ec33>\",\"WARC-IP-Address\":\"45.34.10.165\",\"WARC-Target-URI\":\"https://encyclopedia2.thefreedictionary.com/regular+hexahedron\",\"WARC-Payload-Digest\":\"sha1:CMQ7YI66SRPMF6CEJUIC7LYA7G4ESD7S\",\"WARC-Block-Digest\":\"sha1:UDAH2VQBRODS3YWAXXHCLDYB5CEN5FT4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665573.50_warc_CC-MAIN-20191112124615-20191112152615-00452.warc.gz\"}"} |
https://math.answers.com/Q/What_is_5a_-_1_plus_a_equals_11 | [
"",
null,
"",
null,
"",
null,
"",
null,
"0\n\n# What is 5a - 1 plus a equals 11?\n\nUpdated: 12/15/2022",
null,
"Wiki User\n\n14y ago\n\nA= 2",
null,
"Wiki User\n\n14y ago",
null,
"",
null,
"",
null,
"Earn +20 pts\nQ: What is 5a - 1 plus a equals 11?\nSubmit\nStill have questions?",
null,
"",
null,
"Related questions\n\n### What does a equal in this problem 5a-1 plus a equals 11?\n\nIf: 5a-1+a = 11 Then: a = 2\n\n### What is the solution of 6a plus 5a equals -11?\n\n6a + 5a = -11 11a = -11 a = -11/11 = -1\n\n### How do you solve 6a plus 5a equals -11?\n\n6a + 5a = -11 Combining like terms: 11a = -11 Dividing by 11: a = -1\n\n### What is the answer to 5a plus 1 equals a-3?\n\nAnswer to: 5a + 1 = a - 3a = -1\n\n### Solve the equation 2a plus 13 equals 5a 1?\n\nIf: 2a+13 = 5a+1 Then: a = 4\n\n### 5a equals -5a plus 5?\n\n5a = -5a + 5Add 5a to each side:10a = 5Divide each side by 10:a = 5/10 = 1/2\n\n### Does 1 plus 1 equals 11 I am 6 And i want to know?\n\nNo it equals 2 10 plus 1 equals 11\n\n### How do you solve the equation 5a plus 2 equals 7?\n\n5a + 2 = 7 subtract 2 from both sides: 5a = 5 divide both sides by 5: a = 1\n\n### 2a-4b plus 4b-6a equals a plus b?\n\n2a-6a-a=b+4b-4b -5a=5b-4b -5a=1b a=-5 b=1\n\n### What is the solution to 5a plus 11 plus 3a - 7 equals -4?\n\nSo we have 5a + 11 + 3a - 7 = -4 Start by combining common terms such as the 5a and 3a to give you 8a and 11 and -7 to give you 4. So now your equation is 8a + 4 = -4. Subtract 4 from both sides gives you 8a = -8. Divide -8 by 8 to give you a = -1, your solution.\n\n### 3a plus 5 equals 5a plus 3?\n\nIf 3a+5=5a+3, then: 3a+5 - 3a = 5a+3 - 3a 5=2a+3 5 - 3=2a+3 - 3 2=2a a=1\n\n5a(2b + 1)"
]
| [
null,
"https://math.answers.com/icons/searchIcon.svg",
null,
"https://math.answers.com/icons/searchGlassWhiteIcon.svg",
null,
"https://math.answers.com/icons/notificationBellIcon.svg",
null,
"https://math.answers.com/icons/coinIcon.svg",
null,
"https://math.answers.com/images/avatars/default.png",
null,
"https://math.answers.com/images/avatars/default.png",
null,
"https://math.answers.com/images/avatars/default.png",
null,
"https://math.answers.com/icons/sendIcon.svg",
null,
"https://math.answers.com/icons/coinIcon.svg",
null,
"https://math.answers.com/icons/searchIcon.svg",
null,
"https://st.answers.com/html_test_assets/imp_-_pixel.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89488673,"math_prob":0.9999273,"size":1219,"snap":"2023-40-2023-50","text_gpt3_token_len":609,"char_repetition_ratio":0.1473251,"word_repetition_ratio":0.652459,"special_character_ratio":0.52912223,"punctuation_ratio":0.08125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964166,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T19:20:55Z\",\"WARC-Record-ID\":\"<urn:uuid:c0997793-df08-452b-b529-3d9af991dfb1>\",\"Content-Length\":\"165341\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d51cab67-a817-4f39-988d-c9df6bc0e13e>\",\"WARC-Concurrent-To\":\"<urn:uuid:40586f39-6a39-43bd-bad8-57ddc77137fb>\",\"WARC-IP-Address\":\"146.75.36.203\",\"WARC-Target-URI\":\"https://math.answers.com/Q/What_is_5a_-_1_plus_a_equals_11\",\"WARC-Payload-Digest\":\"sha1:CWA35535LLUGPOOXK426KWNWPNKC56SB\",\"WARC-Block-Digest\":\"sha1:2OZSJUMNM22PWT6ITLRWIFMR2UXITDM5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511002.91_warc_CC-MAIN-20231002164819-20231002194819-00687.warc.gz\"}"} |
https://byjus.com/questions/liquid-pressure-is-measured-by/ | [
"",
null,
"# How is Liquid Pressure Measured?\n\nTo measure liquid pressure, a manometer is used.\n\nLiquid pressure is defined as the increase in the pressure when there is an increase in the depth of the liquid. Liquid pressure is given by the following equation:\n\n P = ρgh\n\nWhere,\n\n• P is the liquid pressure\n• ρ is the density of the liquid\n• g is the acceleration due to gravity\n• h is the depth\n\nWorking of manometer\nA manometer is used for measuring the liquid pressure with respect to an outside source which is usually considered to be the earth’s atmosphere.\n\nLiquid such as mercury is used for the measurement of the pressure. The other end of the U-tube is filled with the gas for which the pressure needs to be calculated. The end where the gas is filled is sealed while the other end is kept open. Now the atmospheric pressure, as well as the gas pressure, acts on the liquid.\n\n• The air in the tube is said to be equal to the outside air pressure when the liquid is straight level in both the tubes.\n• The air in the tube is said to be lighter than the out air’s pressure when the liquid rises above the straight level.\n• The air in the tube is said to be heavier than the outside air’s pressure when the liquid is below the straight level.",
null,
"(22)",
null,
"(6)"
]
| [
null,
"https://www.facebook.com/tr",
null,
"https://cdn1.byjus.com/wp-content/uploads/2021/08/upvote-lineart.svg",
null,
"https://cdn1.byjus.com/wp-content/uploads/2021/08/downvote-lineart.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9489305,"math_prob":0.98509365,"size":1197,"snap":"2022-27-2022-33","text_gpt3_token_len":263,"char_repetition_ratio":0.20284995,"word_repetition_ratio":0.097345136,"special_character_ratio":0.21470343,"punctuation_ratio":0.062240664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99064726,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T06:30:13Z\",\"WARC-Record-ID\":\"<urn:uuid:fd99d3bf-bd0a-4f5a-9efc-6c8161b09bc5>\",\"Content-Length\":\"187895\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36237716-a125-4e9a-8c88-8f17ba1f87ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8537e09-f5cf-4cde-8b9f-6fa4a7348bf9>\",\"WARC-IP-Address\":\"162.159.130.41\",\"WARC-Target-URI\":\"https://byjus.com/questions/liquid-pressure-is-measured-by/\",\"WARC-Payload-Digest\":\"sha1:VF6XWERUNESBVIJNJTXFX4FJY2UPXY5E\",\"WARC-Block-Digest\":\"sha1:LDGAJHHEZGUEZXSL5AGX4FXVXJTDAMSA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104354651.73_warc_CC-MAIN-20220704050055-20220704080055-00096.warc.gz\"}"} |
http://www.s-pay.com.ua/index.php/kindle/differential-geometry-and-mathematical-physics-part-i-manifolds-lie-groups | [
"# Download e-book for iPad: Differential Geometry and Mathematical Physics: Part I. by Rudolph, G. and Schmidt, M.",
null,
"By Rudolph, G. and Schmidt, M.\n\nISBN-10: 9400753454\n\nISBN-13: 9789400753457\n\nRanging from undergraduate point, this booklet systematically develops the fundamentals of - research on manifolds, Lie teams and G-manifolds (including equivariant dynamics) - Symplectic algebra and geometry, Hamiltonian structures, symmetries and relief, - Integrable platforms, Hamilton-Jacobi conception (including Morse households, the Maslov category and caustics). the 1st merchandise is suitable for almost all components of mathematical physics, whereas the second one merchandise offers the foundation of Hamiltonian mechanics. The last thing introduces to special unique components. invaluable heritage wisdom on topology is prov\n\nRead or Download Differential Geometry and Mathematical Physics: Part I. Manifolds, Lie Groups and Hamiltonian Systems PDF\n\nSimilar differential geometry books\n\nGet Connections, curvature and cohomology. Vol. III: Cohomology PDF\n\nGreub W. , Halperin S. , James S Van Stone. Connections, Curvature and Cohomology (AP Pr, 1975)(ISBN 0123027039)(O)(617s)\n\nRudolph, G. and Schmidt, M.'s Differential Geometry and Mathematical Physics: Part I. PDF\n\nRanging from undergraduate point, this publication systematically develops the fundamentals of - research on manifolds, Lie teams and G-manifolds (including equivariant dynamics) - Symplectic algebra and geometry, Hamiltonian platforms, symmetries and aid, - Integrable platforms, Hamilton-Jacobi concept (including Morse households, the Maslov type and caustics).\n\nGet A treatise on the geometry of surfaces PDF\n\nThis quantity is made out of electronic photographs from the Cornell collage Library ancient arithmetic Monographs assortment.\n\nMeant for a twelve months direction, this article serves as a unmarried resource, introducing readers to the \\$64000 ideas and theorems, whereas additionally containing sufficient historical past on complex subject matters to attract these scholars wishing to specialise in Riemannian geometry. this can be one of many few Works to mix either the geometric components of Riemannian geometry and the analytic points of the speculation.\n\nAdditional info for Differential Geometry and Mathematical Physics: Part I. Manifolds, Lie Groups and Hamiltonian Systems\n\nSample text\n\n3) may be taken as the extension of the tangent mapping from Tm M to Dm M. 2. 2 we conclude that for local charts (U, κ) on M at m and (V , ρ) on N at Φ(m) one has Φm X m ρ,i φ(m) = ρ ◦ Φ ◦ κ −1 i κ,j κ(m) j Xm . 4) That is, locally the tangent mapping of Φ is given by the derivative (matrix of partial derivatives) of the local representative Φκ,ρ = ρ ◦ Φ ◦ κ −1 at κ(m). 4 Let M and N be open subsets of the finite-dimensional real vector spaces V and W , respectively. Let Φ ∈ C k (M, N ) and v ∈ M.\n\nNote that, here, continuity of f need not be required. Indeed, for every pair of charts (Ui , κi ), (Vj , ρj ) such that Φ(Ui ) ⊂ Vj we have Φ Ui = ρj−1 ◦ Φκi ,ρj ◦ κi , which is continuous as a composition of continuous mappings. 2. 1 iff it is of class C k in the sense of classical calculus. To see this, choose global charts corresponding to two chosen bases. In particular, multilinear mappings between finite-dimensional real vector spaces are smooth. 3. Let (Ui , κi ) and (Vi , ρi ), i = 1, 2, be local charts on M and N , respectively, such that W := U1 ∩ U2 ∩ Φ −1 (V1 ∩ V2 ) = ∅.\n\n6 Submanifolds Let k ≥ 1 and let N be a C k -manifold. 
1 (Submanifold) A C k -submanifold of N is a pair (M, ϕ), where M is a C k -manifold and ϕ : M → N is an injective immersion of class C k . Submanifolds (M1 , ϕ1 ) and (M2 , ϕ2 ) are said to be equivalent if there exists a diffeomorphism ψ : M1 → M2 such that ϕ2 ◦ ψ = ϕ1 . 2 1. Let (M, ϕ) be a C k -submanifold of N and let ϕ˜ : M → ϕ(M) denote the induced mapping. Since ϕ˜ is bijective, one can use it to carry over the topological and differentiable structure from M to ϕ(M), thus making ϕ˜ into a diffeomorphism."
]
| [
null,
"https://images-na.ssl-images-amazon.com/images/I/41WojgGYYiL._SX331_BO1,204,203,200_.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8543602,"math_prob":0.91173357,"size":4125,"snap":"2020-34-2020-40","text_gpt3_token_len":1106,"char_repetition_ratio":0.10264499,"word_repetition_ratio":0.11188811,"special_character_ratio":0.2489697,"punctuation_ratio":0.14713217,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940865,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-30T02:43:38Z\",\"WARC-Record-ID\":\"<urn:uuid:3f4d8fc6-8bdd-4ed2-bd29-70348d96a3ba>\",\"Content-Length\":\"27929\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e1b3431-e2d7-4a7e-ae25-bf6a9327fde1>\",\"WARC-Concurrent-To\":\"<urn:uuid:00f6d13d-1ceb-4e81-bd1d-157f7d83daf0>\",\"WARC-IP-Address\":\"93.190.41.65\",\"WARC-Target-URI\":\"http://www.s-pay.com.ua/index.php/kindle/differential-geometry-and-mathematical-physics-part-i-manifolds-lie-groups\",\"WARC-Payload-Digest\":\"sha1:OZ2AERR6OECCZHSP53KX6YAJSNFKVGJE\",\"WARC-Block-Digest\":\"sha1:ADLVYAO5XLTDOSKJHZFEPKTVVEL66ATA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402101163.62_warc_CC-MAIN-20200930013009-20200930043009-00662.warc.gz\"}"} |
https://asmedigitalcollection.asme.org/appliedmechanics/article/84/3/031008/422436/Revisiting-the-Instability-and-Bifurcation | [
"Development of soft electromechanical materials is critical for several tantalizing applications such as human-like robots, stretchable electronics, actuators, energy harvesting, among others. Soft dielectrics can be easily deformed by an electric field through the so-called electrostatic Maxwell stress. The highly nonlinear coupling between the mechanical and electrical effects in soft dielectrics gives rise to a rich variety of instability and bifurcation behavior. Depending upon the context, instabilities can either be detrimental, or more intriguingly, exploited for enhanced multifunctional behavior. In this work, we revisit the instability and bifurcation behavior of a finite block made of a soft dielectric material that is simultaneously subjected to both mechanical and electrical stimuli. An excellent literature already exists that has addressed the same topic. However, barring a few exceptions, most works have focused on the consideration of homogeneous deformation and accordingly, relatively fewer insights are at hand regarding the compressive stress state. In our work, we allow for fairly general and inhomogeneous deformation modes and, in the case of a neo-Hookean material, present closed-form solutions to the instability and bifurcation behavior of soft dielectrics. Our results, in the asymptotic limit of large aspect ratio, agree well with Euler's prediction for the buckling of a slender block and, furthermore, in the limit of zero aspect ratio are the same as Biot's critical strain of surface instability of a compressed homogeneous half-space of a neo-Hookean material. A key physical insight that emerges from our analysis is that soft dielectrics can be used as actuators within an expanded range of electric field than hitherto believed.\n\nIntroduction\n\nSoft materials, such as polymers and many soft biological materials, play an important role in our daily life. They can be easily deformed to large strain values due to intrinsically low elastic stiffness. Meanwhile, surface instabilities like wrinkles [1,2] and creases are often observed under mechanical compression or constrained swelling. Soft dielectrics, an important subclass of soft materials, can achieve significantly large deformation when they are subject to electrical stimuli. Soft dielectrics find applications in human-like robots [6,7], stretchable electronics , actuators , energy harvesters , among others. Large deformations of soft dielectrics are often accompanied by electromechanical instabilities including pull-in instability , wrinkling and the creasing , the electro-creasing to cratering instability , electro-cavitation , among others.\n\nHistorically, instabilities are often thought to cause “failure” and usually avoided. The pull-in instability, for example, is suppressed in order to enhance the actuation strain and the electrical energy density of soft dielectrics. More recently, research has increasingly also been directed at how electromechanical instabilities of soft dielectrics can be harnessed for various applications such as giant actuation strain, dynamic surface patterning, and energy harvesting [28,29].\n\nA commonly used actuator, for example, is a film of dielectric elastomer coated with compliant electrodes on its surfaces. Upon application of a voltage difference between the two electrodes, the Maxwell stress from the electric field compresses the film in the thickness direction, causes expansion in the plane, and creates a large actuation strain. 
The thinning of the film increases the intensity of the electric field in the material. When the film thickness decreases to a certain threshold value, the film is unable to sustain the Maxwell stress and the pull-in instability occurs. Exploitation of soft dielectric films in applications requires a thorough understanding of large deformation mechanics and the electromechanical instabilities induced by voltages and mechanical forces. To this end, numerous theoretical analyses [10,16,22,3033] have been carried on this subject matter.\n\nIn a prior work , Zhao and Suo analyzed the electromechanical stability of a film of dielectric elastomer subject to tensile forces in its plane and a voltage difference across its thickness. From the principle of minimum energy, they studied the stability of the homogeneously2 deformed film by examining the positive definiteness of the Hessian matrix. They showed that prestress can significantly enhance the stability of the homogeneously deformed film and markedly increase the actuation stretch. We remark that Zhao and Suo assumed a homogeneously deformed film throughout their equilibrium state and stability analysis. Subsequently, this assumption of homogeneous deformation has been widely used in other works [10,24,25,30,31,34,35].\n\nThe aforementioned assumption of a homogeneous deformation imposes the restriction that the upper and bottom surfaces of the dielectric thin film remain perfectly plane. Hence, nonhomogeneous deformation and the effects of the geometry of the dielectric film, like the thickness or the aspect ratio, on the electromechanical instability are excluded. In a recent work, Dorfmann and Ogden investigated the instability (buckling) of an infinite plate of electroelastic material by analyzing its incremental elastic deformation. In another work, Dorfmann and Ogden studied the surface instability of a half-space subject to both mechanical compression and an electric field normal to its surface.\n\nIn this work, we present a complete linearized bifurcation analysis for electromechanical instability in a finite block of a soft dielectric material subject to physically reasonable boundary conditions,3 and compare it with the response of a thin film and half-space. An elastic finite block is often used to study the mechanical behavior of elastic materials at finite strain, such as the instability and post-buckling of a mechanical compressed elastic block, and the buckling of a compressible magnetoelastic block . Unlike a half-space [1,36] or an infinite long plate , a finite block has measurable length quantities, such as the aspect ratio, and allows for physically well-defined boundary conditions on all its surfaces. Hence, the effect of the boundary conditions due to finite dimensions on the electromechanical instability can be addressed in the present work by analyzing a finite block. Compared to the in-plane biaxial dead loads on the dielectric film, as used in past works , we employ displacement-controlled boundary conditions on the two sides of the finite block which allows a facile consideration of both tension and compression.4 Based on the implicit function theorem [40,41], we present an analysis of the onset of bifurcation from the trivial solution of a finite block of dielectric elastomer subject to mechanical loads (compression or extension) and a voltage across its thickness. 
Although our analysis of electromechanical instability is applicable to a general elastic dielectric elastomer, we present closed-form expressions for the special case of ideal neo-Hookean dielectrics.\n\nThe paper is organized as follows. In Sec. 2, we present the general formulation for the electrostatic problem of a finite block of a dielectric elastomer subject to electromechanical loads. The linear bifurcation analysis is presented in Sec. 3, where the incremental boundary-value problem is obtained by linearizing the equations of equilibrium with respect to deformation and the polarization. In Sec. 4, we obtain the solutions of the homogeneous deformation and the incremental boundary-value problem, and discuss the onset of bifurcation from the trivial solution. Finally, in Sec. 5, we compare our analytical results with Euler's predictions for the buckling of both mechanically and electromechanically compressed slender block and discuss the pertinent physical insights.\n\nFormulation\n\nDomain and Boundary Conditions.\n\nConsider a finite block of an elastic dielectric (see Fig. 1). Assuming plane-strain condition in the X3 direction, the dielectric block in the reference configuration can be represented by\n\n$ΩR={X∈ℝ2:0≤X1≤l1,−l22≤X2≤l22}$\n(1)\nwhere X1 and X2 are the Cartesian coordinates, l1 is the length, and l2 is the height of the dielectric block. The boundary $∂ΩR$ of ΩR consists of four parts\n$Sl={X∈ΩR:X1=0}, Sr={X∈ΩR:X1=l1}Su={X∈ΩR:X2=l22}, Sb={X∈ΩR:X2=−l22}$\n(2)\nThe deformation of the block is expressed by a smooth function $x:ΩR→ℝ2$, and the constraint of incompressibility requires a unit Jacobian, such that\n$J=detF=1$\n(3)\n\nwhere $F=∇x=(∂x/∂X1)e1+(∂x/∂X2)e2$ is the deformation gradient in two dimensions, and $ei$, i = 1, 2, are the unit vectors in the Xi directions.\n\nA few comments regarding the boundary conditions are in order. For a dielectric elastomer film, a voltage is usually applied across the top and bottom surfaces and in-plane tensile dead loads are introduced. Such kinds of boundary conditions are used for dielectric elastomers that work in uniaxial actuation mode, such as in spring-roll actuators . In this paper, we apply a potential difference between the upper and lower surfaces, but mimic the physical situation where the dielectric block is controlled by a loading device that stretches or compresses the block in the X1 direction. For example, the loading device can be made of two well-lubricated, rigid plates (see Fig. 1) that are in contact with the side surfaces $Sl∪Sr$ with rollers. During the compression/tension process, there is no rotation of the two plates in order to ensure that the compressive/tensile stresses on the rigid plates are always along the X1 direction. The mechanical and electrostatic boundary conditions on the side surfaces $Sl∪Sr$ are defined by\n$x·e1=λX1, D̃·e1=0 on Sl∪Sr$\n(4)\nwhere $λ>0$ is the prescribed stretch along the X1 direction and $D̃$ is the nominal electric displacement. Meanwhile, the mechanical and electrostatic boundary conditions on the upper and lower surfaces $Su∪Sb$ are\n$t̃e=0, ξ=ξb on Su∪Sb$\n(5)\n\nwhere $t̃e$ is the surface traction and ξ is the voltage. Here, the prescribed voltages are $ξb=V$ on the upper surface $Su$ and $ξb=0$ on the lower surface $Sb$ (see Fig. 1).\n\nEquations of Electrostatics of a Deformable Media.\n\nIn the deformed domain Ω, the true electric field is denoted by e, the true electric displacement by d, and the polarization by p. 
In the absence of free charges, currents, and magnetic fields, the Maxwell equations reduce to\n$curl e=0, div d=0, d=ϵ0e+p in Ω$\n(6)\n\nwhere ϵ0 is the vacuum permittivity. The curl, the divergence, and the gradient operators in the current configuration are denoted by “curl,” “div,” and “grad,” respectively. In contrast, the corresponding operators in the reference configuration are denoted by “Curl,” “Div,” and “$∇$.” The equality, $curl e=0$ in Eq. (6), indicates that there exists a scalar potential (voltage) ξ such that $e=−grad ξ$.\n\nTo represent e, d, and p in the undeformed domain ΩR, we use the composition maps \n$E=e°x, D=d°x, P=p°x$\n(7)\nIn the undeformed domain ΩR, the nominal electric field is denoted by $Ẽ$, the nominal electric displacement by $D̃$, and the nominal polarization by $P̃$. We define the relations of the true fields in Eq. (7) and the nominal fields in ΩR as [42,43]\n$Ẽ=FTE, D̃=JF−1D, P̃=JP$\n(8)\nwhere $FT$ and $F−1$ are the transpose and the inverse of the deformation gradient F, respectively, and $J=detF$ is the Jacobian. Maxwell's equations (Eq. (6)) in ΩR are\n$Curl Ẽ=0, Div D̃=0, D̃=F−1(ϵ0JF−TẼ+P̃)$\n(9)\n\nwhere $Ẽ=−∇ξ$.\n\nCombining Eq. (8), an alternative form of Eq. (9) in ΩR can be written as\n$FTE=−∇ξ, Div D̃=0, FD̃=ϵ0JE+P̃$\n(10)\n\nFree Energy of the System.\n\nElectromechanics of deformable dielectrics can be formulated in a variety of ways cf. Refs. [36,42,43] for just a few examples. In this paper, we follow the energy formulation of continuum magneto-electro-elasticity as described by Liu . The notion of invoking a minimum energy principle with Maxwell's equations as a constraint has roots in an earlier work on micromagnetism and ferroelectrics .\n\nSubject to both mechanical and electrical loads, the total free energy (see, for example, Refs. [42,46]) of the system in Fig. 1 is given by\n$F[x,P̃]=U[x,P̃]+Eelect[x,P̃]$\n(11)\nHere, $U[x,P̃]$ is the internal energy\n$U[x,P̃]=∫ΩRΨ(F,P̃)$\n(12)\nwhere $F=∇x$ and $Ψ(F,P̃)$ is the internal energy density. The electric energy, $Eelect[x,P̃]$, in Eq. (11) is\n$Eelect[x,P̃]=ϵ02∫ΩRJ|E|2+∫Su∪SbξD̃·N$\n(13)\n\nwhere $J=det∇x$ is the Jacobian and N is the unit normal to the surfaces $Su∪Sb$. The relations among E, $D̃$, and $P̃$ in Eq. (13) are given by Eq. (10). Note that the mechanical work done by the loading device on the side surfaces $Sl∪Sr$ is not included into the total free energy due to the nominal displacement-controlled boundary condition (Eq. (4)1).\n\nFirst Variation of the Free Energy.\n\nWhen the aforementioned electromechanical system is in equilibrium at a deformation x and a polarization $P̃$, the first variation of the energy functional $F[x,P̃]$ must vanish (subject to the constraint imposed by Maxwell's equations). Since there exist two functions $x:ΩR→Ω$ and $P̃:ΩR→ℝ2$ in $F[x,P̃]$, the vanishing of the first variation requires that both the first variations with respect to x and $P̃$ must be zero (see Appendix A for details).\n\nVariation of Polarization.\n\nThe first variation of $F[x,P̃]$ in Eq. (11) with respect to the polarization $P̃$ gives\n$∂Ψ∂P̃−E=0 in ΩR$\n(14)\n\nwhere $E=−F−T∇ξ$. The detailed derivation of Eq. (14) is given in Appendix A1.\n\nVariation of Deformation.\n\nVanishing of the first variation of $F[x,P̃]$ in Eq. 
(11) with respect to the deformation x yields the Euler–Lagrange equation (see Appendix A2 for details)\n$Div(∂Ψ∂F+Σ̃−qF−T)=0 in ΩR$\n(15)\nand the natural boundary conditions\n$(∂Ψ∂F+Σ̃−qF−T)e1=se1 on Sl∪Sr$\n(16)\n\n$(∂Ψ∂F+Σ̃−qF−T)e2=0 on Su∪Sb$\n(17)\nwhere q is the hydrostatic pressure required by the incompressibility constraint (Eq. (3)), s is the normal stress on the side surfaces $Sl∪Sr$, and $Σ̃$ is the so-called Piola–Maxwell stress defined by\n$Σ̃=E⊗D̃−ϵ0J2|E|2F−T$\n(18)\nEquations (10), (14)(17), along with the constraint of incompressibility (Eq. (3)) and the boundary conditions (Eqs. (4) and (5)), form a boundary-value problem, whose solution includes all the possible equilibrium solutions for a finite block of soft dielectric subject to mechanical and electrical loads. The aforementioned boundary-value problem can be compactly summarized as:\n$DivT=0, ∂Ψ∂P̃−E=0, detF=1FTE=−∇ξ, DivD̃=0 FD̃=ϵ0JE+P̃} in ΩR$\n(19)\n\n$x1=λX1, D̃·e1=0, Te1=se1 on Sl∪Sr$\n(20)\n\n$ξ=ξb, Te2=0 on Su∪Sb$\n(21)\nwhere the total nominal stress T is\n$T=∂Ψ∂F+Σ̃−qF−T$\n(22)\n\nOnset of Electromechanical Buckling\n\nOf interest here is the condition for the onset of buckling of the dielectric block. Mathematically, buckling is governed by the onset of bifurcation in the trivial solution to the boundary-value problem (Eqs. (19)(21)). Based on the implicit function theorem [40,41], the equilibrium equations have a nontrivial solution bifurcating from its trivial solution only if the linearized equations of equilibrium possess a nonzero solution. It is obvious that the onset of bifurcation depends on the applied mechanical and electrical loads. The linearized equations describe the response of the dielectric block, in a state of equilibrium, to infinitesimal increments of the deformation and the polarization.\n\nLinearization With Respect to Both the Deformation and the Polarization.\n\nLet $x*$ and $P̃*$ be the infinitesimal increments of the deformation x and the polarization $P̃$, respectively. Other increments depend on $x*$ and $P̃*$ at $(x,P̃)$. We denote other linearized increments (omitting higher terms) by taking advantage of the superscripts. For example, the total linearized increment of a general field Θ is denoted by $Θ*$ (see Appendix B for details). $Θ*$ is usually the sum of two increments $Θ†$ and $Θ‡$, which denote, respectively, the increments related to the deformation and the polarization. Since the deformation gradient F and the Jacobian J are independent of the polarization, we have the linearized increments\n$F*=∇x*, J*=JF−T·∇x*,(FT)*=(∇x*)T, (F−T)*=−F−T(∇x*)TF−T$\n(23)\nIn contrast, the total linearized increments of the electric field E, the nominal electric displacement $D̃$, and the Piola–Maxwell stress $Σ̃$ consist of two parts:\n$E*=E†+E‡, D̃*=D̃†+D̃‡, Σ̃*=Σ̃†+Σ̃‡$\n(24)\nCombining Eq. (24) and $Σ̃$ in Eq. (18), we have the relation between the linearized increments (see Appendix B2)\n$Σ̃*=E*⊗D̃+E⊗D̃* −ϵ02{2J(E·E*)F−T+|E|2[J(F−T)*+J*F−T]}$\n(25)\nSimilarly, the linearization of the boundary-value problem (Eqs. (19)(21)) can be written as\n$DivT*=0, ∂2Ψ∂P̃∂F[F*]+∂2Ψ∂P̃2P̃*−E*=0F−T·F*=0, (FT)*E+FTE*=−∇ξ*DivD̃*=0, FD̃*+F*D̃=ϵ0JE*+ϵ0J*E+P̃*}in ΩR$\n(26)\n\n$x*·e1=0, D̃*·e1=0, T*e1=s*e1 on Sl∪Sr$\n(27)\n\n$ξ*=0, T*e2=0 on Su∪Sb$\n(28)\nwhere $T*$ is the linearized increment of the total nominal stress T in Eq. (22), given by\n$T*=∂2Ψ∂F2[F*]+∂2Ψ∂F∂P̃P̃*+Σ̃*−q*F−T+qF−T(FT)*F−T$\n(29)\n\nWe remark that the linearized boundary-value problem (Eqs. 
(26)(28)) considers the total increments including both the incremental deformation and the incremental polarization. The condition of the nonzero solution of $(x*,P̃*)$ in Eqs. (26)(28) determines the onset of the electrical buckling—and more precisely the onset of bifurcation from the solution $(x,P̃)$ —of a finite block of dielectric elastomer subject to electromechanical loads.\n\nLinearization With Respect to Only the Deformation.\n\nA further simplification of Eqs. (26)(28) may be made by considering the linearization with respect to only the deformation. That is, we introduce an infinitesimal increment $x*$ of the deformation but a zero increment of the polarization when we linearize the boundary-value problem (Eqs. (19)(21)) at $(x,P̃)$. This simplification, of course, will yield a narrower solution space that is a subspace of the solution space considering both the incremental deformation and polarization; however, it significantly simplifies the analysis and also provides important results for electromechanical buckling. Therefore, by letting $P̃*=0$ the total increments (with superscript “*”) in Eqs. (24) and (25) reduce to the increments (with superscript “”) with respect to the deformation, the linearized boundary-value problem (Eqs. (26)(28)) can be reduced to\n$DivT†=0, ∂2Ψ∂P̃∂F[F*]−E†=0F−T·F*=0, (FT)*E+FTE†=−∇ξ†DivD̃†=0, FD̃†+F*D̃=ϵ0JE†+ϵ0J*E} in ΩR$\n(30)\n\n$x*·e1=0, D̃†·e1=0, T†e1=s†e1 on Sl∪Sr$\n(31)\n\n$ξ†|SbSu=0, T†e2=0 on Su∪Sb$\n(32)\nwhere\n$T†=∂2Ψ∂F2[F*]+Σ̃†−q*F−T+qF−T(FT)*F−T$\n(33)\nand\n$Σ̃†=E†⊗D̃+E⊗D̃†−ϵ02{2J(E·E†)F−T+|E|2[J(F−T)*+J*F−T]}$\n(34)\n\nWe remark here that the total incremental boundary-value problem (Eqs. (26)(28)) and the reduced incremental boundary-value problem (Eqs. (30)(32)) are valid for all incompressible elastic soft dielectrics. In the following, we will adopt the neo-Hookean constitutive law to generate specific results.\n\nNeo-Hookean Dielectrics\n\nIn the following, we consider incompressible neo-Hookean dielectrics, whose strain energy function [42,46] under the plane strain assumption is given by\n$Ψ(F,P̃)=μ2(|F|2−2)+|P̃|22J(ϵ−ϵ0)$\n(35)\n\nwhere μ is the shear modulus, and ϵ and ϵ0 are, respectively, the permittivities of the dielectric elastomer and the vacuum. Note that the second term on the right-hand side of Eq. (35) reflects the usual linear dielectric behavior, that is, the permittivity ϵ of the dielectric elastomer is independent of the deformation.\n\nThe derivatives of the strain energy function (Eq. (35)) are given by\n$∂Ψ∂F=μF−|P̃|22J(ϵ−ϵ0)F−T, ∂Ψ∂P̃=P̃J(ϵ−ϵ0)∂2Ψ∂F2=μI4+|P̃|22J(ϵ−ϵ0)(F−T⊗F−T−∂F−T∂F)∂2Ψ∂P̃2=I2J(ϵ−ϵ0),∂2Ψ∂F∂P̃=−F−T⊗P̃J(ϵ−ϵ0), ∂2Ψ∂P̃∂F=−P̃⊗F−TJ(ϵ−ϵ0)}$\n(36)\n\nwhere $I4$ and $I2$ are, respectively, the fourth- and second-order identity tensors in two dimensions.\n\nHomogeneous Deformation.\n\nSubstituting Eq. (36) into the boundary-value problem (Eqs. (19)(21)), a trivial solution that corresponds to homogeneous deformation is given by\n$x0(X)=λX1e1+λ−1X2e2, P̃0(X)=−(ϵ−ϵ0)λẼ0e2$\n(37)\nwhere $Ẽ0=V/l2$ and other corresponding quantities are\n$F0(X)=∇x0(X)=λe1⊗e1+λ−1e2⊗e2E0(X)=−λẼ0e2, D̃0(X)=−ϵλ2Ẽ0e2}$\n(38)\nThe Piola–Maxwell stress tensor (Eq. (18)) is\n$Σ̃0:=[−ϵ02λẼ0200(ϵ−ϵ02)λ3Ẽ02]$\n(39)\nthe hydrostatic pressure (i.e., the Lagrange multiplier) is\n$q0=μλ−2+ϵ2λ2Ẽ02$\n(40)\nthe total stress tensor (Eq. (22)) is\n$T0:=[μ(λ−λ−3)−ϵλẼ02000]$\n(41)\nand the compressive/tensile stress on $Sl∪Sr$ is\n$s0=μ(λ−λ−3)−ϵλẼ02$\n(42)\n\nNote that a negative/positive s0 in Eq. 
(42) represents the nominal compressive/tensile stress on the side surfaces $Sl∪Sr$ in the reference configuration. In contrast, the true compressive/tensile stress in the current configuration is given by $λs0$.\n\nFigure 2 shows how the electric field affects the mechanical behavior of the homogeneous deformation of a finite block. The dimensionless compressive/tensive stress $s0/μ$ and electric field $Ẽ0ϵ/μ$ are used. In the absence of the electric field (i.e., $Ẽ0ϵ/μ=0$) in Fig. 2(a) (or in Eq. (42)), for example, a prescribed stretch $λ>1$ corresponds to a tensile stress $s0>0$, while a stretch $λ<1$ corresponds to a compressive stress $s0<0$ on the side surfaces $Sl∪Sr$ of the block.\n\nThe electric field in Eq. (42) will decrease the nominal stress s0 on the side surfaces. This is because the Maxwell stress in Eq. (39) will make the block decrease its height l2 (due to a positive component of the Maxwell stress in the X2 direction) and increase its length l1 (due to a negative component of the Maxwell stress in the X1 direction). However, the two lubricated rigid plates exert an additional compressive stress on the side surfaces to hinder the extension of the block. Therefore, at a prescribed stretch λ in Fig. 2(a), the electric field $Ẽ0ϵ/μ$ can decrease the nominal stress vector $s0/μ$ (or the true stress vector $λs0/μ$). For example, at a prescribed stretch $λ>1$, the increase of the electric field $Ẽ0ϵ/μ$ can cause the nominal stress $s0/μ$ in Fig. 2(a) (or the true stress $λs0/μ$ in Fig. 2(b)) decrease from positive (tensile stress) to negative (compressive stress). If we were to ignore electrical breakdown, the continually increasing compressive stress will eventually force the block to buckle.\n\nTo further illustrate the effects of the electric field on deformation, we consider two special cases: a prescribed stretch λ = 1 and a zero nominal stress $s0=0$.\n\nIn the first case, the block is undeformed prior to electromechanical buckling. This is because of the constraint of incompressibility and the plane strain assumption in our model, leading to the stretch ratios $λ1=λ=1, λ2=λ−1=1$, and $λ3=1$. Although the block is undeformed under the electric field, it is no longer a stress-free state. The nominal compressive stress s0 in Eq. (42) is $s0=−ϵẼ02$, which is a quadratic function of the nominal electric field (see Fig. 3(a)). Under zero electric field, the block is stress-free, corresponding to the origin $(Ẽ0ϵ/μ,s0/μ)=(0,0)$. It is clear from Fig. 3(a) that the parabola opens downward and the axis of symmetry is $Ẽ0ϵ/μ=0$. In this case, the electric field always induces a compressive state in the block. With a continuously increasing electric field, the block eventually will buckle. Note that only electromechanical buckling is considered in this paper for the constrained deformation—other instabilities including the electrical breakdown, the electro-creasing to cratering instabilities, and the electrocavitation instability [14,1820] are beyond the scope of this paper.\n\nIn the second case, the homogeneously deformed block is stress-free. With $s0=0$ in Eq. (42), the relation between the stretch and the electric field becomes $λ=(1−ϵẼ02/μ)−1/4$ (see Fig. 3(b)). 
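As a quick numerical check of the homogeneous solution, the sketch below (ours, with illustrative parameter values and our own function names) evaluates the dimensionless nominal stress of Eq. (42), its value at λ = 1, and the stress-free stretch of Fig. 3(b).

```python
# Dimensionless homogeneous solution: s0/mu from Eq. (42) and the stress-free
# stretch lambda = (1 - e**2)**(-1/4), where e = E0*sqrt(eps/mu).

def nominal_stress_ratio(lam: float, e: float) -> float:
    """s0/mu from Eq. (42); at lam = 1 this reduces to -e**2."""
    return lam - lam**-3 - e**2 * lam

def stress_free_stretch(e: float) -> float:
    """Stretch at which s0 = 0, valid for e < 1."""
    return (1.0 - e**2) ** -0.25

for e in (0.0, 0.5, 0.9):
    print(f"e = {e}: s0/mu at lambda = 1 -> {nominal_stress_ratio(1.0, e):+.3f}, "
          f"stress-free stretch -> {stress_free_stretch(e):.3f}")
```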
Without considering the electrical breakdown and the pull-in instability , the stretch λ, mathematically, can increase from one to infinity as the dimensionless electric field $Ẽ0ϵ/μ$ increases from zero to one.\n\nIn the following, we will analyze electromechanical buckling by studying the solution of the incremental boundary-value problem (Eqs. (30)(32)) at the homogeneous solution (Eqs. (37)(42)).\n\nIncremental Boundary-Value Problem.\n\nLet us first address Eq. (30) in ΩR. The increments of the Jacobian $J*$ and the electric field $E†$ in Eq. (30) must vanish due to the constraint of incompressibility, $F−T·∇x*=0$, namely\n$J*=JF−T·∇x*=0$\n(43)\n\n$E†=−P̃⊗F−TJ(ϵ−ϵ0)[∇x*]=−(F−T·∇x*)P̃J(ϵ−ϵ0)=0$\n(44)\nand the two relations, $(FT)*E+FTE†=−∇ξ†$ and $FD̃†+F*D̃=ϵ0JE†+ϵ0J*E$ in Eq. (30), reduce to\n$∇ξ†=−(∇x*)TE, D̃†=−F−1F*D̃$\n(45)\nThen, the incremental Piola-Maxwell stress (Eq. (34)) reduces to\n$Σ̃†=E⊗D̃†−ϵ0J|E|22(F−T)*$\n(46)\nNote that Eqs. (43)(46) hold for all kinematically admissible deformations of incompressible neo-Hookean dielectrics, including the homogeneous deformation. Moreover, the constraint makes the divergence of $D̃†$ in Eq. (45) vanish automatically in the case of homogeneous deformation, such that\n$DivD̃†=−Div(F0−1F*D̃0)=−D̃0·∇(F0−T·F*)=0$\n(47)\nBased on Eqs. (43)(47) and the homogeneous solution (Eqs. (37)(42)), the linearized boundary-value problem (Eqs. (30)(32)) reduces to\n$DivT†=0, F0−T·∇x*=0(∇x*)TE0=−∇ξ†, D̃†=−F0−1(∇x*)D̃0} in ΩR$\n(48)\n\n$x*·e1=0, D̃†·e1=0, T†e1=s†e1 on Sl∪Sr$\n(49)\n\n$ξ†|SbSu=0, T†e2=0 on Su∪Sb$\n(50)\nwhere the incremental voltage $ξ†$ and the incremental nominal electric displacement $D̃†$ are\n$ξ†=λẼ0x2*+ξ0, D̃†=ϵλẼ0(x1,2*e1+λ2x2,2*e2)$\n(51)\nand the incremental nominal stress $T†$ is\n$T†:=μ[T11†T12†T21†T22†]$\n(52)\nwith\n$T11†=[1+λ−4+(Ẽ0ϵ/μ)2]x1,1*−λ−1μ−1q*T12†=x1,2*+[λ−2+λ2(Ẽ0ϵ/μ)2]x2,1*T21†=x2,1*+λ−2x1,2*, T22†=2x2,2*−λμ−1q*$\n\nGoverning Equation.\n\nFrom the constraint of incompressibility, $F0−T·∇x*=0$, in Eq. (48), we introduce a stream function $ϕ(X1,X2)$\n$x1*=λϕ,2(X1,X2), x2*=−λ−1ϕ,1(X1,X2)$\n(53)\nwhere the subscript denotes the partial derivative. With Eqs. (52) and (53), $DivT†=0$ in Eq. (48) gives\n$λ2(ϕ,112+ϕ,222)−μ−1q,1*=0$\n(54a)\n\n$λ−2(ϕ,111+ϕ,122)+μ−1q,2*=0$\n(54b)\nEliminating $q*$ in Eq. (54) yields\n$ϕ,1111+(λ4+1)ϕ,1122+λ4ϕ,2222=0$\n(55)\n\nBoundary Conditions on $Sl∪Sr$ and on $Su∪Sb$.\n\nSubstituting Eq. (53) into the boundary conditions on $Sl∪Sr$ in Eq. (49), we have\n$ϕ,2=0, ϕ,22=0, ϕ,22−ϕ,11=0 on Sl∪Sr$\n(56)\n\nwhere the first and the third equations come from the mechanical boundary conditions (i.e., the controlled-nominal displacement $x*·e1=0$ and the free shear stress $T†e1=s†e1$), while the second equation corresponds to the electrostatic boundary condition (i.e., $D̃†·e1=0$ in Eq. (49)).\n\nSimilarly, substituting Eq. (53) into the electrostatic boundary condition on $Su∪Sb$ in Eq. (50), we have\n$ϕ,1(X1,l22)=ϕ,1(X1,−l22)$\n(57)\nMoreover, substituting Eq. (53) into the mechanical boundary conditions on $Su∪Sb$ in Eq. (50) yields\n$[λ−4+(Ẽ0ϵ/μ)2]ϕ,11−ϕ,22=0 2μϕ,12+λ2q*=0} on Su∪Sb$\n(58)\n\nSolution of the Incremental Boundary-Value Problem.\n\nDue to the boundary conditions on $Sl∪Sr$ in Eq. (56), the solution of $ϕ(X1,X2)$ in Eq. (55) admits the series form\n$ϕ(X1,X2)=∑m=1∞Ym(X2)sin(kmX1)+BX1+C,$\n(59)\nwhere $km=(mπ/l1), m=1,2,3,…$, B and C are constants. C is physically irrelevant and may be chosen to be zero. B is related to rigid body motion of the deformed dielectric block. 
Substituting Eq. (59) into Eq. (53), we have\n$x1*=λ∑m=1∞Ym′(X2)sin(kmX1)$\n(60a)\n\n$x2*=−λ−1{∑m=1∞kmYm(X2)cos(kmX1)+B}$\n(60b)\nwhere the prime denotes the derivative with respect to X2. Substituting Eq. (59) into Eq. (54) and performing the integration with respect to X1, we obtain\n$q*=−μλ2∑m=1∞km−1[Ym‴(X2)−km2Ym′(X2)]cos(kmX1)$\n(61)\nThen the increment $s†$ of the nominal stress in Eq. (49) is\n$s†=μλ{∑m=1∞[km−2Ym‴(X2) +(λ−4+(Ẽ0ϵ/μ)2)Ym′(X2)]km cos(kmX1)}$\n(62)\nSubstituting Eq. (59) into the electrostatic boundary conditions on $Su∪Sb$ in Eq. (57) and using the orthogonality relation of Fourier series yields\n$Ym(l22)=Ym(−l22)$\n(63)\nMoreover, substituting Eqs. (53), (59), and (61) into the mechanical boundary conditions on $Su∪Sb$ in Eq. (58) and using again the orthogonality relation of Fourier series, we obtain\n$Ym″(X2)+[λ−4+(Ẽ0ϵ/μ)2]km2Ym(X2)=0Ym‴(X2)−(1+2λ−4)km2Ym′(X2)=0} on Su∪Sb$\n(64)\nFinally, substituting Eq. (59) into the governing equation (55), we find that $Ym(X2)$ yields the following fourth-order ordinary differential equation:\n$Ym(4)(X2)−km2(1+λ−4)Ym″(X2)+km4λ−4Ym(X2)=0$\n(65)\nThe general solution of $Ym(X2)$ in Eq. (65) is\n$Ym(X2)={C1mcosh(kmλ−2X2)+C2msinh(kmλ−2X2) +C3mcosh(kmX2)+C4msinh(kmX2) for λ≠1(C¯1m+C¯2mX2)cosh(kmX2) +(C¯3m+C¯4mX2)sinh(kmX2) for λ=1$\n(66)\n\nwhere $Cim$ and $C¯im, i=1,2,3,4$, are constant coefficients.\n\nBifurcation at Varying Stretch $λ≠1$.\n\nSubstituting the general solution to $Ym(X2)$ in Eq. (66)$1$ for $λ≠1$ into Eq. (64), we obtain a system of four linear equations in four unknowns $Cim, i=1,2,3,4$. The system of four equations can be rewritten in a matrix form of $MCm=0$, where M is the 4 × 4 coefficient matrix and $Cm=(C1m,C2m,C3m,C4m)T$. The nonzero solution of $Cm$ requires a zero determinant of M, namely\n$detM=|M110M1300M220M240M320M34M410M430|=0$\n(67)\nwhich can be decomposed into a product of two 2 × 2 determinants, such that\n$|M11M13M41M43|•|M22M24M32M34|=0$\n(68)\nwhere\n$M11=[2λ−4+(Ẽ0ϵ/μ)2]coshmπl22λ2l1M13=[1+λ−4+(Ẽ0ϵ/μ)2]coshmπl22l1M22=(1+λ−4)coshmπl22λ2l1, M24=2λ−2coshmπl22l1M32=[2λ−4+(Ẽ0ϵ/μ)2]sinhmπl22λ2l1M34=[1+λ−4+(Ẽ0ϵ/μ)2]sinhmπl22l1M41=(1+λ−4)sinhmπl22λ2l1, M43=2λ−2sinhmπl22l1$\n\nEquation (68) holds if either of the 2 × 2 determinant vanishes, indicating the possibility of two types of buckling.\n\nFor the first type, the vanishing of the left 2 × 2 determinant in Eq. (68) admits nonzero $C1m$ and $C3m$ but zero $C2m$ and $C4m$, leaving two hyperbolic cosine functions of $X2∈[−l2/2,l2/2]$ in $Ym(X2)$ in Eq. (66)1 and making it become an even function of X2. This type of electrical buckling, of course, satisfies the electrostatic boundary conditions, Eq. (63), since $Ym(X2)$ is an even function. Moreover, the even function, $Ym(X2)$ in Eq. (66)1, makes the perturbed displacement $x1*(X1,X2)$ in Eq. (60) become an odd function of X2 and $x2*(X1,X2)$ become an even function of X2, such as $x1*(X1,X2)=−x1*(X1,−X2)$ and $x2*(X1,X2)=x2*(X1,−X2)$. It is assumed that the constant B in Eq. (60) for the coordinates is appropriately chosen to make $x2*(0,0)=0$. Then, the buckling modes of the first type are antisymmetric with respect to the X1 axis. This type of buckling is called an antisymmetric buckling about the X1 axis. For instance, Figs. 4(a) and 5(a) are antisymmetric bifurcation modes with m = 1 and m = 2, respectively.\n\nFor the second type, the right 2 × 2 determinant in Eq. (68) vanishes and then $Ym(X2)$ in Eq. (66)1 has nonzero $C2m$ and $C4m$ but zero $C1m$ and $C3m$. 
Thus, $Ym(X2)$ only contains two hyperbolic sine functions of X2 (i.e., $Ym(X2)$ becomes an odd function of X2). The perturbed displacement $x1*(X1,X2)$ in Eq. (60) is an even function of X2, such that $x1*(X1,X2)=x1*(X1,−X2)$, while the perturbed displacement $x2*(X1,X2)$ in Eq. (60) has the property $x2*(X1,X2)+x2*(X1,−X2)=−2λ−1B$. If the constant B in Eq. (60) is chosen to be zero for an appropriate fixity condition of coordinates, the perturbed displacement $x2*(X1,X2)$ in Eq. (60) becomes an odd function of X2, such as $x2*(X1,X2)=−x2*(X1,−X2)$. This type of electrical buckling satisfies the mechanical boundary conditions (Eq. (64)), however, it does not satisfy the electrostatic boundary conditions (Eq. (63)) since $Ym(X2)$ is an odd function now. For the mechanical compression, the buckling modes of the second type are symmetric with respect to the X1 axis. This type of buckling is called a symmetric buckling about the X1 axis. The symmetric bifurcation modes with m = 1 and m = 2 are shown, respectively, in Figs. 4(b) and 5(b).\n\nThe critical conditions for the two types of buckling can be explicitly written as\n\nType (i): Antisymmetric\n$(1+λ−4)[1+λ−4+(Ẽ0ϵ/μ)2]tanhmπl22λ2l1 −2λ−2[2λ−4+(Ẽ0ϵ/μ)2]tanhmπl22l1=0$\n(69)\nType (ii): Symmetric\n$(1+λ−4)[1+λ−4+(Ẽ0ϵ/μ)2]tanhmπl22l1 −2λ−2[2λ−4+(Ẽ0ϵ/μ)2]tanhmπl22λ2l1=0$\n(70)\n\nBifurcation at Fixed Stretch λ = 1.\n\nSimilarly, substituting the general solution to $Ym(X2)$ in Eq. (66)2 for λ = 1 into Eq. (64), we obtain a system of four linear equations in four unknowns $C¯im, i=1,2,3,4$. The system of four equations can be written in a matrix form as $M¯C¯m=0$, where $M¯$ is the 4 × 4 coefficient matrix and $C¯m=(C¯1m,C¯2m,C¯3m,C¯4m)T$. A nonzero solution of $C¯m$ requires $detM¯=0$, which, similar to Eq. (68), can be reduced to a product of two 2 × 2 determinants, such that\n$|M¯11M¯14M¯41M¯44|•|M¯22M¯23M¯32M¯33|=0$\n(71)\nwhere\n$M¯11=[2+(Ẽ0ϵ/μ)2]coshmπl22l1M¯14=2l1mπcoshmπl22l1+l22[2+(Ẽ0ϵ/μ)2]sinhmπl22l1M¯22=l22sinhmπl22l1, M¯23=coshmπl22l1M¯32=2l1mπsinhmπl22l1+l22[2+(Ẽ0ϵ/μ)2]coshmπl22l1M¯33=[2+(Ẽ0ϵ/μ)2]sinhmπl22l1M¯41=sinhmπl22l1, M¯44=l22coshmπl22l1$\nSimilar to the analysis of Eq. (68), the left 2 × 2 determinant in Eq. (71) might correspond to the type of an antisymmetric buckling, while the right one might corresponds to the type of a symmetric buckling. However, the right 2 × 2 determinant in Eq. (71) is always nonzero, indicating the nonexistence of the type of symmetric buckling at λ = 1 regardless of how large the electric field is. While both symmetric and antisymmetric buckling satisfy all the mechanical boundary conditions for purely mechanical compression, only antisymmetric buckling satisfies both the mechanical and electrostatic boundary conditions for the combined electromechanical loading. The critical condition for the antisymmetric buckling of a dielectric block under electric field at λ = 1 in plane strain comes from the vanishing of the left 2 × 2 determinant in Eq. (71), and yields\n$[1+12(Ẽ0ϵ/μ)2]mπl2l1−sinhmπl2l1=0$\n(72)\n\nDiscussion and Conclusions\n\nComparison With Euler's Prediction for the Mechanical Compression.\n\nThe buckling of Euler's column studied by Leonhard Euler in 1757 is one of the classical problems in engineering. 
The formula derived by Euler gives the critical load at which a long, slender, ideal column is in a state of unstable equilibrium (i.e., even an infinitesimal lateral force will make the column buckle).\n\nConsidering the conditions of end support of the column, Euler's formula can be expressed as\n$F=π2EeIe(Kl1)2$\n(73)\n\nwhere F is the critical force, Ee is the plane strain elastic Young's modulus, Ie is the area moment of inertia of the cross section, l1 is the length of the column, and K is column effective length factor that depends on the conditions of end support. For example, the factor K is 0.5 for both fixed ends while it is 1 for both pinned ends.\n\nThe effective Young's modulus under plane strain is $Ee=4μ$ in Eq. (73) for incompressible neo-Hookean materials with shear modulus μ, and the area moment of inertia is $Ie=l23/12$ for a block with height l2 and unit width. Thus, the critical nominal stress sc from Eq. (73) is\n$sc=Fl2×1=μπ23(Kl1/l2)2$\n(74)\n\nIn contrast to Euler's formula for slender structures, our buckling analysis is valid for a finite compressed elastic block with any aspect ratio $l1/l2$. In the absence of an electric field, Eq. (69) gives the critical stretch λc for the buckling of an incompressible neo-Hookean block subject to a purely mechanical compression. With the relation between the stretch and the nominal stress in Eq. (42), we can obtain the critical nominal stress that corresponds to the critical stretch λc. We remark here that Eq. (69) determines the critical stretch of antisymmetric buckling and Eq. (42) is used to transform the critical stretch into the critical nominal stress for the purpose of a direct comparison with Euler's prediction (Eq. (74)). Note that only the results of the antisymmetric buckling are used to compare with Euler's prediction (Eq. (74)) since the antisymmetric buckling always occurs prior to symmetric buckling. Moreover, only antisymmetric buckling satisfies all the mechanical and electrostatic boundary conditions, while the symmetric buckling satisfies only the mechanical boundary conditions. The detailed discussion of the difference between antisymmetric and symmetric buckling is given in Secs. 5.3 and 5.4.\n\nFigure 6(b) shows the critical nominal stress for the antisymmetric buckling mode m = 2 in Eq. (69) of a finite block, whose buckling pattern is shown schematically in Fig. 5(a). The buckling pattern and the boundary condition on the left and the right surfaces in Fig. 5(a) are very similar to that of the buckling of Euler's column with fixed-fixed ends. Compared with Euler's prediction, the two predicted critical loads of buckling agree well with each other only at a sufficiently large aspect ratio (i.e., $l1/l2>5$). The obvious discrepancy at small aspect ratios is because Euler's analysis is only valid for a slender column.\n\nComparison of Euler's Prediction for Electroelastic Buckling at a Fixed Stretch λ = 1.\n\nCompared with the mechanical compressive stress, the electrostatic Maxwell stress can also make the dielectric block buckle in our model. The special case of a prescribed stretch λ = 1 under an electric field in our model corresponds to zero strain but nonzero stress in the homogeneous solution. The magnitude of the compressive stress in Eq. (42) at λ = 1 is\n$|s0|=ϵẼ02$\n(75)\nFrom Euler's prediction (Eq. (74)), when $|s0|$ in Eq. (75) increases to the critical value sc in Eq. 
(74), the dielectric block begins to buckle and the critical nominal electric field for the electromechanical buckling is given by\n$Ẽc=π(Kl1/l2)μ3ϵ$\n(76)\n\nwhere the factor K = 0.5 is for both fixed ends, while K = 1 is for both pinned ends of the Euler column.\n\nIn contrast to the approximation (Eq. (76)) from Euler's formula, our analytical prediction of the critical nominal electric field from Eq. (72) is obtained as\n$Ẽc=(l1mπl2sinhmπl2l1−1)2μϵ$\n(77)\n\nwhere the modes m = 1 and m = 2 are related to two different boundary conditions corresponding K = 1 and K = 0.5 in Eq. (76).\n\nIn Fig. 7, we plot the variation of the dimensionless critical electric field $Ẽcϵ/μ$ with respect to the aspect ratio $l1/l2$ from both Euler's approximation (Eq. (76)) and our analytical prediction (Eq. (77)). The critical electric field decreases monotonously with the increase of the aspect ratio $l1/l2$. This trend agrees with intuition that a more slender block (i.e., a larger aspect ratio $l1/l2$) is more likely to become unstable under external stimuli such as an electric field. In the limiting case $l1/l2→∞$, the critical electric field approaches zero and an exceedingly small electric field can make the block buckle.\n\nBuckling of a Mechanically Compressed Block.\n\nIn contrast to the critical force for Euler's column, the critical stretch (or strain) is often used to define the critical conditions for the buckling of finite blocks or surface instability of soft materials [1,33,3638,]. In Biot's half-space problem , the critical stretch for surface instability of a homogeneous neo-Hookean half-space under plane strain compression is 0.544 at which all the wavelengths become unstable. Later, Levinson studied the stability of a compressed block in the current configuration. Recently, Dorfmann and Ogden studied the surface instability of the homogeneous deformation of a half-space subject to both mechanical and electrical loads by solving the incremental boundary-value problem.\n\nIn our work, the neo-Hookean block is compressed under plane strain by changing the stretch λ. The critical condition of the buckling is determined by either Eq. (69) for antisymmetric buckling or Eq. (70) for symmetric buckling in the absence of electric fields. The critical stretch λc for the mechanical buckling of the compressed block with different aspect ratios $l1/l2$ is plotted in Fig. 8. The critical stretches for antisymmetric/symmetric buckling with different modes $m=1,2,3,5$ are plotted in solid/dashed lines. In particular, the critical stretches for all modes approach 0.544 when the aspect ratio $l1/l2$ decreases to zero (i.e., $l2/l1$ increases to infinity). The critical stretch $λc=0.544$ of this limiting case ($l2/l1→∞$) coincides with Biot's prediction since the limiting case ($l2/l1→∞$) of block is that of a half-space.\n\nFigure 8 also shows that the critical stretch for antisymmetric buckling is always larger than that of symmetric buckling. This means that the antisymmetric buckling in a compressed block occurs prior to symmetric buckling. Indeed, symmetric buckling cannot occur unless the passive constraints are considered to be acting until $λ<0.544$. Therefore, only the antisymmetric buckling is compared with Euler's prediction in the preceding discussion.\n\nBuckling of an Electromechanically Compressed Block.\n\nIn Secs. 5.1, 5.2, and 5.3, we have shown that either the mechanical compression in Figs. 6 and 8 or the electric field in Fig. 7 can make the dielectric block buckle. 
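To make the comparison of Fig. 7 concrete, a short sketch (ours, with illustrative aspect ratios) evaluates the critical dimensionless field at λ = 1 from Eq. (77) against Euler's approximation, Eq. (76), pairing m = 1 with K = 1 and m = 2 with K = 0.5 as in the text.

```python
# Critical dimensionless field E_c*sqrt(eps/mu) at lambda = 1:
# "exact" value from Eq. (77) versus Euler's approximation, Eq. (76).
import math

def critical_field_exact(aspect: float, m: int) -> float:
    """Dimensionless critical field from Eq. (77); aspect = l1/l2."""
    x = m * math.pi / aspect                    # m*pi*l2/l1
    return math.sqrt(2.0 * (math.sinh(x) / x - 1.0))

def critical_field_euler(aspect: float, K: float) -> float:
    """Dimensionless critical field from Euler's formula, Eq. (76)."""
    return math.pi / (K * aspect * math.sqrt(3.0))

for aspect in (2.0, 5.0, 10.0):
    for m, K in ((1, 1.0), (2, 0.5)):
        print(f"l1/l2 = {aspect:5.1f}, m = {m}: "
              f"exact {critical_field_exact(aspect, m):.3f}, "
              f"Euler {critical_field_euler(aspect, K):.3f}")
```

For large aspect ratios the two values come out nearly equal, which is the asymptotic agreement discussed around Fig. 7.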
An obvious extension to these notions is that the combined electromechanical loading ought to make buckling of the block yet easier. For purely mechanical loading, the block in our problem can only buckle under compression ($λ<1$) rather than extension ($λ>1$). Since the electric field can make the block buckle at λ = 1 in Fig. 7, the block may also become buckle in extension (i.e., $λ>1$) under an electric field.\n\nUsing Eqs. (69) and (70), we plot Fig. 9 the critical stretch λc as a function of the aspect ratio $l1/l2$ for the buckling mode m = 2 under different electric fields $Ẽ0ϵ/μ$. The solid lines denote antisymmetric buckling, while the dashed lines represent symmetric buckling. Note that antisymmetric buckling satisfies all the boundary conditions, while the symmetric buckling satisfies all the boundary conditions other than the electrostatic boundary conditions of the perturbed voltage on the upper and lower surfaces. Furthermore, the critical stretches for the antisymmetric buckling (solid lines) rather than the symmetric buckling (dashed lines) in Fig. 9 are very sensitive to the electric fields. When the aspect ratio $l1/l2$ is larger than five, for example, the differences of the critical stretches for the symmetric buckling between the mechanical compression and the electromechanical loading are negligible. On the other hand, since the occurrence of the symmetric buckling is always later than the onset of antisymmetric buckling, in practice only the effects of the electric fields on antisymmetric buckling are of interest.\n\nCompared with the critical stretch for buckling of a mechanically compressed block, the critical stretch that accounts for the electric field is shifted upward for a small aspect ratio $l1/l2$. For example, the critical stretch for the buckling of a mechanically compressed block in the limiting case $l1/l2→0$ is 0.544 while it increases to 0.628 at an electric field $Ẽ0ϵ/μ=2$ in Fig. 9.\n\nIt is clear from Fig. 9 that the electric field can cause the block to buckle more easily in a compressed state ($λ<1$). Moreover, the electric field can make the block buckle even if the block is in extension ($λ>1$).\n\nWe know that both mechanical compression and the electric field can make the block buckle. For antisymmetric buckling with mode m = 2, the variation of the critical stretch λc with respect to the critical electric field $Ẽcϵ/μ$ is plotted in Fig. 10. It is obvious that a slender block (i.e., with high aspect ratio $l1/l2$) is more likely to buckle when it is subject to a combined loading. For example, at a zero electric field (i.e., $Ẽ0ϵ/μ=0$), the critical stretch is slightly less than one in the case of a large aspect ratio $l1/l2=10$, while it approaches 0.544 at an aspect ratio $l1/l2=1$. For each aspect ratio $l1/l2$ in Fig. 10, the critical stretch λc increases monotonically with the increase of $Ẽcϵ/μ$. It clearly shows how the electric field makes the block buckle in an extended state (i.e., $λc>1$). We finally remark that for actual applications, electric breakdown should also be considered and the comparison of the critical electric fields between the electric breakdown and the electrical buckling is needed for a safe design of electrical devices.\n\nIn summary, for a mechanical compression without electric field, the block mathematically exhibits two types of buckling modes, i.e., antisymmetric and the symmetric buckling, however, the antisymmetric buckling will always precede the other. 
Our results, in the asymptotic limit of large aspect ratio, agree well with Euler's prediction for the buckling of a slender block and, furthermore, at a zero aspect ratio are the same as Biot's critical strain of surface instability of a compressed homogeneous half-space of a neo-Hookean material. For the case where electric fields are included, aside from similar interesting asymptotic connection to Euler's formula, we find that the electric field can cause the block to buckle more easily in a compressed state, and the electric field can even cause the block to buckle in a state of tension.\n\nAcknowledgment\n\nFinancial support from the M.D. Anderson Professorship, NSF CMMI Grant No. 1463339 and NPRP award [NPRP 6-282-2-119] from the Qatar National Research Fund (a member of The Qatar Foundation).\n\nAppendix A: First Variation of the Energy Functional\n\nThe infinitesimal variations of the deformation $x=x(X)$ and the polarization $P̃=P̃(X)$ are denoted, respectively, by $δx$ and $δP̃$, such that\n$δx=η1xd, δP̃=η2P̃p$\n(A1)\n\nwhere $η1,η2∈ℝ$ and $max{|η1|,|η2|}≪1$, and $xd$ and $P̃p$ are two smooth variations.\n\nThe deformation $x=x(X)$ and the polarization $P̃=P̃(X)$ may not appear directly in the energy functional including the deformation gradient $F=∇x$, the Jacobian $J=det∇x$, the voltage ξ, and the nominal electric displacement $D̃$, among others. Thus, we perform their first variations implicitly and only keep the first-order terms of η1 and η2, such that\n$F→F+η1Fd, J→J+η1Jd, ξ→ξ+η1ξd+η2ξp,D̃→D̃+η1D̃d+η2D̃p, E→E+η1Ed+η2Ep$\n(A2)\nwhere the subscripts “d” and “p” denote, respectively, the variations related to the deformation and the polarization. For example, $Fd$ and Jd in Eq. (A2) are\n$Fd=ddη1∇(x+η1xd)|η1=0=∇xd,Jd=ddη1det(F+η1Fd)|η1=0=JF−T·∇xd$\n(A3)\nSubstituting Eq. (A2) into the Maxwell equation (Eq. (10)) and taking partial derivatives with respect to η1 and η2 at $η1=η2=0$, respectively, we have\n$DivD̃d=DivD̃p=0 in ΩR$\n(A4)\nand the relations\n$FTEd+(FT)dE=−∇ξd, FTEp=−∇ξp,FD̃d+FdD̃=ϵ0JEd+ϵ0JdE, FD̃p=ϵ0JEp+P̃p$\n(A5)\nWith Eqs. (A1) and (A2), the variations of Eqs. (4) and (5) are\n$xd·e1=0, D̃d·e1=D̃p·e1=0 on Sl∪Sr$\n(A6)\n\n$ξd=ξp=0 on Su∪Sb$\n(A7)\n\nFirst Variation With Respect to the Polarization\n\nThe first variation of the energy functional (Eq. (11)) with respect to the polarization $P̃$ is\n$ddη2F[x,P̃+η2P̃p]|η2=0=ddη2U[x,P̃+η2P̃p]|η2=0+ddη2Eelect[x,P̃+η2P̃p]|η2=0=∫ΩR∂Ψ∂P̃·P̃p+ϵ0∫ΩRJE·Ep +∫Su∪Sb(ξpD̃·N+ξD̃p·N)$\n(A8)\nWith Eqs. (A4)(A7) and the divergence theorem, Eq. (A8) becomes\n$ddη2F[x,P̃+η2P̃p]|η2=0=∫ΩR(∂Ψ∂P̃·P̃p+ϵ0JE·Ep)+∫∂ΩRξD̃p·N=∫ΩR(∂Ψ∂P̃·P̃p+ϵ0JE·Ep)+∫ΩR(ξDivD̃p+D̃p·∇ξ)=∫ΩR(∂Ψ∂P̃·P̃p+ϵ0JE·Ep)−∫ΩRE·FD̃p=∫ΩR(∂Ψ∂P̃·P̃p+E·(ϵ0JEp−FD̃p))=∫ΩR(∂Ψ∂P̃−E)·P̃p$\n(A9)\n\nBased on the basic lemma of calculus of variations, vanishing of Eq. (A9) gives Eq. (14).\n\nFirst Variation With Respect to the Deformation\n\nWe introduce a Lagrange multiplier function $q:ΩR→ℝ2$ to address the variation of a constrained problem such that the deformation x is subject to the constraint of incompressibility $J=det∇x=1$. The modified energy functional of Eq. (11), including the Lagrangian multiplier q to enforce incompressibility, is\n$F̂[x,P̃]=∫ΩR(Ψ(∇x,P̃)+ϵ02J|E|2−q(J−1))+∫Su∪SbξD̃·N$\n(A10)\nThe first variation of Eq. (A10) with respect to the deformation x is\n$ddη1F̂[x+η1xd,P̃]|η1=0=∫ΩR(∂Ψ∂F·∇xd+ϵ02Jd|E|2+ϵ0JE·Ed−qJd) +∫Su∪Sb(ξdD̃·N+ξD̃d·N)$\n(A11)\nWith Eqs. (A3)(A7) and the divergence theorem, Eq. 
(A11) becomes\n$ddη1F̂[x+η1xd,P̃]|η1=0=∫ΩR(∂Ψ∂F·∇xd+ϵ02Jd|E|2+ϵ0JE·Ed−qJd) +∫∂ΩRξD̃d·N=∫ΩR(∂Ψ∂F·∇xd+ϵ02Jd|E|2+ϵ0JE·Ed−qJd) +∫ΩR(ξDivD̃d+D̃d·∇ξ)=∫ΩR(∂Ψ∂F·∇xd+ϵ02Jd|E|2−qJd+E·(ϵ0JEd−FD̃d))=∫ΩR(∂Ψ∂F·∇xd+ϵ02Jd|E|2−qJd+E·(FdD̃−ϵ0JdE))=∫ΩR(∂Ψ∂F+E⊗D̃−ϵ0J2|E|2F−T−qJF−T)·∇xd=∫ΩR(∂Ψ∂F+Σ̃−qJF−T)·∇xd=∫∂ΩRxd·(∂Ψ∂F+Σ̃−qJF−T)N −∫ΩRxd·Div(∂Ψ∂F+Σ̃−qJF−T)$\n(A12)\n\nwhere $Σ̃$ is the Piola–Maxwell stress defined by Eq. (18). Similar derivations of the Piola–Maxwell stress can also be found in the work [42,46] and many other references. With the boundary condition of $xd$ in Eq. (A6), the vanishing of Eq. (A12) gives Eqs. (15)(17).\n\nAppendix B: Linearized Analysis\n\nSuppose that a deformation x and a polarization $P̃$ have infinitesimal increments $x*$ and $P̃*, ||x*||,||P̃*||≪1$. For a general field $Θ(x,P̃)$ that is (Fréchet-) differentiable at $(x,P̃)$, we have the expansion in the neighborhood of $(x,P̃)$, such that\n$Θ(x+x*,P̃+P̃*)=Θ(x,P̃)+∂Θ∂x·x*+∂Θ∂P̃·P̃*+o(||x*||,||P̃*||)$\n(B1)\nWe define\n$Θ*=Θ†+Θ‡$\n(B2)\nwhere\n$Θ†=∂Θ∂x·x*, Θ‡=∂Θ∂P̃·P̃*$\n(B3)\n\nHere, $Θ*$ denotes the total linearized increment, and $Θ†$ and $Θ‡$ denote the linearized increments with respect to the deformation and the polarization.\n\nWith Eqs. (B1)(B3) and the chain-rule, the linearized increments of the deformation gradient $F=∇x$ and the Jacobian $J=detF=det∇x$ are\n$F*=∇x*, (F−T)*=−F−T(∇x*)TF−T,J*=∂J∂F·F*=JF−T·∇x*$\n(B4)\nSimilarly, the linearized increments of other fields at $(x,P̃)$ can be written implicitly as\n$(qsξ⋮ED̃Σ̃)Linearization→(q*=q†s*=s†+s‡ξ*=ξ†+ξ‡⋮E*=E†+E‡D̃*=D̃†+D̃‡Σ̃*=Σ̃†+Σ̃‡)$\n(B5)\n\nWe remark that the linearized increments of the deformation gradient F, the Jacobian J and the Lagrange multiplier q only depend on the increment $x*$ at $(x,P̃)$.\n\nLinearized Relation\n\nConsider the linearization of the Maxwell equation (Eq. (10)1) as an example. Substituting the sum of the fields and their linearized increments defined in Eqs. (B4) and (B5) into Eq. (10)1, and ignoring the higher order terms, we obtain\n$FTE→[FT+(FT)*](E+E*)→FTE+FTE*+(FT)*E$\n(B6a)\n\n$−∇ξ→−∇(ξ+ξ*)=−∇ξ−∇ξ*$\n(B6b)\nthen we have\n$FTE*+(FT)*E=−∇ξ*$\n(B7)\n\nOther linearized relations can also be obtained in a similar manner.\n\nLinearized Piola–Maxwell Stress\n\nSubstituting the sum of the fields and their linearized increments defined in Eqs. (B4) and (B5) into the Piola–Maxwell stress (Eq. (18)), and ignoring higher order terms, such that\n$→(E+E*)⊗(D̃+D̃*) −ϵ0(J+J*)2|E+E*|2[F−T+(F−T)*]→E⊗D̃−ϵ0J2|E|2F−T+(E⊗D̃*+E*⊗D̃) −ϵ02{2J(E·E*)F−T+|E|2[J(F−T)*+J*F−T]}=Σ̃+Σ̃*$\n(B8)\n\nthen we have the linearized increment of the Piola–Maxwell stress $Σ̃*$ in Eq. (25).\n\n2\n\nThe assumption of homogeneous deformation restricts their analysis essentially to tensile loading to avoid buckling instability.\n\n3\n\nBy “physically reasonable,” we imply conditions that are easily realizable in an experimental setup.\n\n4\n\nWe explicitly allow for inhomogeneous deformation modes to study buckling under compression.\n\nReferences\n\nReferences\n1.\nBiot\n,\nM.\n,\n1963\n, “\nSurface Instability of Rubber in Compression\n,”\nAppl. Sci. Res.\n,\n12\n(\n2\n), pp.\n168\n182\n.\n2.\nYang\n,\nS.\n,\nKhare\n,\nK.\n, and\nLin\n,\nP.-C.\n,\n2010\n, “\nHarnessing Surface Wrinkle Patterns in Soft Matter\n,”\n,\n20\n(\n16\n), pp.\n2550\n2564\n.\n3.\nGent\n,\nA.\n, and\nCho\n,\nI.\n,\n1999\n, “\nSurface Instabilities in Compressed or Bent Rubber Blocks\n,”\nRubber Chem. 
Technol.\n,\n72\n(\n2\n), pp.\n253\n262\n.\n4.\nHong\n,\nW.\n,\nZhao\n,\nX.\n, and\nSuo\n,\nZ.\n,\n2009\n, “\nFormation of Creases on the Surfaces of Elastomers and Gels\n,”\nAppl. Phys. Lett.\n,\n95\n(\n11\n), p.\n111901\n.\n5.\nHohlfeld\n,\nE.\n, and\n,\nL.\n,\n2011\n, “\nUnfolding the Sulcus\n,”\nPhys. Rev. Lett.\n,\n106\n(\n10\n), p.\n105702\n.\n6.\nLu\n,\nN.\n, and\nKim\n,\nD.-H.\n,\n2014\n, “\nFlexible and Stretchable Electronics Paving the Way for Soft Robotics\n,”\nSoft Rob.\n,\n1\n(\n1\n), pp.\n53\n62\n.\n7.\nShian\n,\nS.\n,\nBertoldi\n,\nK.\n, and\nClarke\n,\nD. R.\n,\n2015\n, “\nDielectric Elastomer Based ‘Grippers’ for Soft Robotics\n,”\n,\n27\n(\n43\n), pp.\n6814\n6819\n.\n8.\nRogers\n,\nJ. A.\n,\nSomeya\n,\nT.\n, and\nHuang\n,\nY.\n,\n2010\n, “\nMaterials and Mechanics for Stretchable Electronics\n,”\nScience\n,\n327\n(\n5973\n), pp.\n1603\n1607\n.\n9.\nShankar\n,\nR.\n,\nGhosh\n,\nT. K.\n, and\nSpontak\n,\nR. J.\n,\n2007\n, “\nDielectric Elastomers as Next-Generation Polymeric Actuators\n,”\nSoft Matter\n,\n3\n(\n9\n), pp.\n1116\n1129\n.\n10.\nMoscardo\n,\nM.\n,\nZhao\n,\nX.\n,\nSuo\n,\nZ.\n, and\nLapusta\n,\nY.\n,\n2008\n, “\nOn Designing Dielectric Elastomer Actuators\n,”\nJ. Appl. Phys.\n,\n104\n(\n9\n), p.\n093503\n.\n11.\nKeplinger\n,\nC.\n,\nKaltenbrunner\n,\nM.\n,\nArnold\n,\nN.\n, and\nBauer\n,\nS.\n,\n2010\n, “\nRöntgen's Electrode-Free Elastomer Actuators Without Electromechanical Pull-in Instability\n,”\nProc. Natl. Acad. Sci.\n,\n107\n(\n10\n), pp.\n4505\n4510\n.\n12.\nKoh\n,\nS. J. A.\n,\nZhao\n,\nX.\n, and\nSuo\n,\nZ.\n,\n2009\n, “\nMaximal Energy That Can Be Converted by a Dielectric Elastomer Generator\n,”\nAppl. Phys. Lett.\n,\n94\n(\n26\n), p.\n262902\n.\n13.\nBauer\n,\nS.\n,\nBauer-Gogonea\n,\nS.\n,\nGraz\n,\nI.\n,\nKaltenbrunner\n,\nM.\n,\nKeplinger\n,\nC.\n, and\nSchwödiauer\n,\nR.\n,\n2014\n, “\n25th Anniversary Article: A Soft Future: From Robots and Sensor Skin to Energy Harvesters\n,”\n,\n26\n(\n1\n), pp.\n149\n162\n.\n14.\nZhao\n,\nX.\n, and\nWang\n,\nQ.\n,\n2014\n, “\nHarnessing Large Deformation and Instabilities of Soft Dielectrics: Theory, Experiment, and Application\n,”\nAppl. Phys. Rev.\n,\n1\n(\n2\n), p.\n021304\n.\n15.\nStark\n,\nK.\n, and\nGarton\n,\nC.\n,\n1955\n, “\nElectric Strength of Irradiated Polythene\n,”\nNature\n,\n176\n(\n4495\n), pp.\n1225\n1226\n.\n16.\nPlante\n,\nJ.-S.\n, and\nDubowsky\n,\nS.\n,\n2006\n, “\nLarge-Scale Failure Modes of Dielectric Elastomer Actuators\n,”\nInt. J. Solids Struct.\n,\n43\n(\n25\n), pp.\n7727\n7751\n.\n17.\nZhao\n,\nX.\n, and\nSuo\n,\nZ.\n,\n2009\n, “\nElectromechanical Instability in Semicrystalline Polymers\n,”\nAppl. Phys. Lett.\n,\n95\n(\n3\n), p.\n031904\n.\n18.\nWang\n,\nQ.\n, and\nZhao\n,\nX.\n,\n2013\n, “\nCreasing-Wrinkling Transition in Elastomer Films Under Electric Fields\n,”\nPhys. Rev. E\n,\n88\n(\n4\n), p.\n042403\n.\n19.\nWang\n,\nQ.\n,\nZhang\n,\nL.\n, and\nZhao\n,\nX.\n,\n2011\n, “\nCreasing to Cratering Instability in Polymers Under Ultrahigh Electric Fields\n,”\nPhys. Rev. Lett.\n,\n106\n(\n11\n), p.\n118301\n.\n20.\nWang\n,\nQ.\n,\nSuo\n,\nZ.\n, and\nZhao\n,\nX.\n,\n2012\n, “\nBursting Drops in Solid Dielectrics Caused by High Voltages\n,”\nNat. Commun.\n,\n3\n, p.\n1157\n.\n21.\nHa\n,\nS. 
M.\n,\nYuan\n,\nW.\n,\nPei\n,\nQ.\n,\nPelrine\n,\nR.\n, and\nStanford\n,\nS.\n,\n2006\n, “\nInterpenetrating Polymer Networks for High-Performance Electroelastomer Artificial Muscles\n,”\n,\n18\n(\n7\n), pp.\n887\n891\n.\n22.\nZhao\n,\nX.\n, and\nSuo\n,\nZ.\n,\n2007\n, “\nMethod to Analyze Electromechanical Stability of Dielectric Elastomers\n,”\nAppl. Phys. Lett.\n,\n91\n(\n6\n), p.\n061921\n.\n23.\nKofod\n,\nG.\n,\n2008\n, “\nThe Static Actuation of Dielectric Elastomer Actuators: How Does Pre-Stretch Improve Actuation?\n,”\nJ. Phys. D: Appl. Phys.\n,\n41\n(\n21\n), p.\n215405\n.\n24.\nLi\n,\nB.\n,\nZhou\n,\nJ.\n, and\nChen\n,\nH.\n,\n2011\n, “\nElectromechanical Stability in Charge-Controlled Dielectric Elastomer Actuation\n,”\nAppl. Phys. Lett.\n,\n99\n(\n24\n), p.\n244101\n.\n25.\nAkbari\n,\nS.\n,\nRosset\n,\nS.\n, and\nShea\n,\nH. R.\n,\n2013\n, “\nImproved Electromechanical Behavior in Castable Dielectric Elastomer Actuators\n,”\nAppl. Phys. Lett.\n,\n102\n(\n7\n), p.\n071906\n.\n26.\nNiu\n,\nX.\n,\nStoyanov\n,\nH.\n,\nHu\n,\nW.\n,\nLeo\n,\nR.\n,\nBrochu\n,\nP.\n, and\nPei\n,\nQ.\n,\n2013\n, “\nSynthesizing a New Dielectric Elastomer Exhibiting Large Actuation Strain and Suppressed Electromechanical Instability Without Prestretching\n,”\nJ. Polym. Sci. Part B: Polym. Phys.\n,\n51\n(\n3\n), pp.\n197\n206\n.\n27.\nJiang\n,\nL.\n,\nBetts\n,\nA.\n,\nKennedy\n,\nD.\n, and\nJerrams\n,\nS.\n,\n2016\n, “\nEliminating Electromechanical Instability in Dielectric Elastomers by Employing Pre-Stretch\n,”\nJ. Phys. D: Appl. Phys.\n,\n49\n(\n26\n), p.\n265401\n.\n28.\nKeplinger\n,\nC.\n,\nLi\n,\nT.\n,\nBaumgartner\n,\nR.\n,\nSuo\n,\nZ.\n, and\nBauer\n,\nS.\n,\n2012\n, “\nHarnessing Snap-Through Instability in Soft Dielectrics to Achieve Giant Voltage-Triggered Deformation\n,”\nSoft Matter\n,\n8\n(\n2\n), pp.\n285\n288\n.\n29.\nShivapooja\n,\nP.\n,\nWang\n,\nQ.\n,\nOrihuela\n,\nB.\n,\nRittschof\n,\nD.\n,\nLópez\n,\nG. P.\n, and\nZhao\n,\nX.\n,\n2013\n, “\nBioinspired Surfaces With Dynamic Topography for Active Control of Biofouling\n,”\n,\n25\n(\n10\n), pp.\n1430\n1434\n.\n30.\nDíaz-Calleja\n,\nR.\n,\nRiande\n,\nE.\n, and\nSanchis\n,\nM.\n,\n2008\n, “\nOn Electromechanical Stability of Dielectric Elastomers\n,”\nAppl. Phys. Lett.\n,\n93\n(\n10\n), p.\n101902\n.\n31.\nXu\n,\nB.-X.\n,\nMueller\n,\nR.\n,\nKlassen\n,\nM.\n, and\nGross\n,\nD.\n,\n2010\n, “\nOn Electromechanical Stability Analysis of Dielectric Elastomer Actuators\n,”\nAppl. Phys. Lett.\n,\n97\n(\n16\n), p.\n162908\n.\n32.\nBertoldi\n,\nK.\n, and\nGei\n,\nM.\n,\n2011\n, “\nInstabilities in Multilayered Soft Dielectrics\n,”\nJ. Mech. Phys. Solids\n,\n59\n(\n1\n), pp.\n18\n42\n.\n33.\nDorfmann\n,\nL.\n, and\nOgden\n,\nR. W.\n,\n2014\n, “\nInstabilities of an Electroelastic Plate\n,”\nInt. J. Eng. Sci.\n,\n77\n, pp.\n79\n101\n.\n34.\nLeng\n,\nJ.\n,\nLiu\n,\nL.\n,\nLiu\n,\nY.\n,\nYu\n,\nK.\n, and\nSun\n,\nS.\n,\n2009\n, “\nElectromechanical Stability of Dielectric Elastomer\n,”\nAppl. Phys. Lett.\n,\n94\n(\n21\n), p.\n211901\n.\n35.\nSuo\n,\nZ.\n,\n2010\n, “\nTheory of Dielectric Elastomers\n,”\nActa Mech. Solida Sin.\n,\n23\n(\n6\n), pp.\n549\n578\n.\n36.\nDorfmann\n,\nA.\n, and\nOgden\n,\nR.\n,\n2010\n, “\nNonlinear Electroelastostatics: Incremental Equations and Stability\n,”\nInt. J. Eng. Sci.\n,\n48\n(\n1\n), pp.\n1\n14\n.\n37.\nLevinson\n,\nM.\n,\n1968\n, “\nStability of a Compressed Neo-Hookean Rectangular Parallelepiped\n,”\nJ. Mech. Phys. 
Solids\n,\n16\n(\n6\n), pp.\n403\n408\n.\n38.\nTriantafyllidis\n,\nN.\n,\nScherzinger\n,\nW.\n, and\nHuang\n,\nH.-J.\n,\n2007\n, “\nPost-Bifurcation Equilibria in the Plane-Strain Test of a Hyperelastic Rectangular Block\n,”\nInt. J. Solids Struct.\n,\n44\n(\n11\n), pp.\n3700\n3719\n.\n39.\nKankanala\n,\nS.\n, and\nTriantafyllidis\n,\nN.\n,\n2008\n, “\nMagnetoelastic Buckling of a Rectangular Block in Plane Strain\n,”\nJ. Mech. Phys. Solids\n,\n56\n(\n4\n), pp.\n1147\n1169\n.\n40.\nGolubitsky\n,\nM.\n, and\nSchaeffer\n,\nD. G.\n,\n1985\n,\nSingularities and Groups in Bifurcation Theory\n, Vol. 1,\nSpringer\n,\nBerlin\n.\n41.\nChen\n,\nY.-C.\n,\n2001\n, “\nSingularity Theory and Nonlinear Bifurcation Analysis\n,”\nNonlinear Elasticity: Theory and Applications\n, Y. B. Fu and R. W. Ogden, eds.,\nCambridge University Press\n,\nCambridge. UK\n.\n42.\nLiu\n,\nL.\n,\n2014\n, “\nAn Energy Formulation of Continuum Magneto-Electro-Elasticity With Applications\n,”\nJ. Mech. Phys. Solids\n,\n63\n, pp.\n451\n480\n.\n43.\nSuo\n,\nZ.\n,\nZhao\n,\nX.\n, and\nGreene\n,\nW. H.\n,\n2008\n, “\nA Nonlinear Field Theory of Deformable Dielectrics\n,”\nJ. Mech. Phys. Solids\n,\n56\n(\n2\n), pp.\n467\n486\n.\n44.\nJames\n,\nR. D.\n, and\nKinderlehrer\n,\nD.\n,\n1990\n, “\nFrustration in Ferromagnetic Materials\n,”\nContinuum Mech. Thermodyn.\n,\n2\n(\n3\n), pp.\n215\n239\n.\n45.\nShu\n,\nY. C.\n, and\nBhattacharya\n,\nK.\n,\n2001\n, “\nDomain Patterns and Macroscopic Behaviour of Ferroelectric Materials\n,”\nPhilos. Mag. Part B\n,\n81\n(\n12\n), pp.\n2021\n2054\n.\n46.\nDeng\n,\nQ.\n,\nLiu\n,\nL.\n, and\nSharma\n,\nP.\n,\n2014\n, “\nFlexoelectricity in Soft Materials and Biological Membranes\n,”\nJ. Mech. Phys. Solids\n,\n62\n, pp.\n209\n227\n."
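The closed-form results quoted in the entry above reduce to two simple expressions once the garbled inline math is read as F = π²E_eI_e/(Kl₁)², s_c = μπ²/(3(Kl₁/l₂)²) (Eq. (74)) and Ẽ_c = (π/(Kl₁/l₂))·√(μ/(3ε)) (Eq. (76)). The sketch below just evaluates those two formulas for a few aspect ratios; the readings of the formulas are reconstructions from the surrounding text, and the material constants μ and ε are invented for illustration, not taken from the paper.

```python
import math

def euler_critical_stress(mu, aspect_ratio, K=0.5):
    """Critical nominal stress s_c = mu*pi^2 / (3*(K*l1/l2)^2).

    `aspect_ratio` is l1/l2; K = 0.5 corresponds to fixed-fixed ends and
    K = 1.0 to pinned-pinned ends, as in the text around Eq. (74).
    """
    return mu * math.pi**2 / (3.0 * (K * aspect_ratio)**2)

def critical_nominal_field(mu, eps, aspect_ratio, K=0.5):
    """Critical nominal electric field from Eq. (76):
    E_c = pi / (K*l1/l2) * sqrt(mu / (3*eps)),
    obtained by setting |s0| = eps*E0^2 equal to the Euler stress above."""
    return math.pi / (K * aspect_ratio) * math.sqrt(mu / (3.0 * eps))

if __name__ == "__main__":
    mu = 1.0e5     # shear modulus [Pa] -- illustrative value only
    eps = 4.0e-11  # permittivity [F/m] -- illustrative value only
    for ar in (2, 5, 10, 20):
        s_c = euler_critical_stress(mu, ar)
        E_c = critical_nominal_field(mu, eps, ar)
        # Consistency check: eps*E_c^2 reproduces s_c (the buckling condition).
        assert math.isclose(eps * E_c**2, s_c, rel_tol=1e-12)
        print(f"l1/l2={ar:>3}:  s_c={s_c:.3e} Pa   E_c={E_c:.3e} V/m")
```

As the paper's Fig. 7 discussion states, both quantities fall monotonically as the aspect ratio grows: a more slender block buckles under a smaller stress or field.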
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9032073,"math_prob":0.9934964,"size":29047,"snap":"2019-43-2019-47","text_gpt3_token_len":6104,"char_repetition_ratio":0.18376201,"word_repetition_ratio":0.068538584,"special_character_ratio":0.20394532,"punctuation_ratio":0.11231185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99809444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T22:54:42Z\",\"WARC-Record-ID\":\"<urn:uuid:8a25616f-7244-4cae-9471-66d2f3133ab6>\",\"Content-Length\":\"642569\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6368f51-628a-427b-bcc2-808185a5bbe6>\",\"WARC-Concurrent-To\":\"<urn:uuid:fcfe6bfe-e5bd-4611-8745-8cfb4d8d8bec>\",\"WARC-IP-Address\":\"173.254.190.160\",\"WARC-Target-URI\":\"https://asmedigitalcollection.asme.org/appliedmechanics/article/84/3/031008/422436/Revisiting-the-Instability-and-Bifurcation\",\"WARC-Payload-Digest\":\"sha1:43OFE3CELTDNAZMDZCYYDSVCTBUR2SWA\",\"WARC-Block-Digest\":\"sha1:VCHB2AWAM6VLGDLTVQI2Z2CN7AHC6LSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986655554.2_warc_CC-MAIN-20191014223147-20191015010647-00205.warc.gz\"}"} |
http://neurochannels.blogspot.com/2008/02/ | [
"## Monday, February 25, 2008\n\n### Large-scale thamamocortical model",
null,
"While the Blue Brain folk want to construct an incredibly detailed model of a single cortical column, a recent paper by Izhikevich and Edelman (Large-scale model of mammalian thalamocortical systems) reports on a less detailed model of the entire human thalamocortical system.\n\nSome of the details of their model (roughly from large-scale to lower scale) include:\n1. The cortical sheet's geometry was constructed from human MRI data.\n2. Projections among cortical regions were modeled using data from diffusion tensor MRI of the human brain (above image is Figure 1 of the paper showing a subset of such connections).\n3. Synaptic connectivity patterns among neurons within and between cortical layers are based on detailed studies of cat visual cortex (and iterated to all of cortex).\n4. Individual neurons are not modelled using the relatively computationally intensive Hodgkin-Huxely models, but a species of integrate-and-fire neuron that included a variable threshold, short-term synaptic plasticity, and long term spike-timing dependent plasticity.\n5. The only subcortical structure included in the model is the thalamus, but the model does include simple simulated neuromodulatory influences (dopamine, acetylcholine).\n\nTheir model exhibited some very interesting behavior. First, larger-scale oscillatory activity that we see in real brains emerged in the model (e.g., as you would observe via EEG). Also like real brains, the model exhibited ongoing spontaneous activity in the absence of inputs (note this only occurred after an initial 'setup' period in which they simulated random synaptic release events: the learning rule seemed to take care of the rest and push the brain into a regime in which it would exhibit spontaneous activity). Quite surprisingly, they also found that when a single spike was removed from a single neuron, the state of the entire brain would diverge compared to when that spike was kept in the model. There is a lot more, so if this sounds interesting check out the paper. They also mention in the paper that they are currently examining how things change when they add sensory inputs to the model.\n\nOf course, a great deal of work is yet to be done, a great deal of thinking through the implications (and biological relevance) of some of the model's behavior (especially its global sensitivity to single spikes, which to me sounds biologically dubious). However, I find it quite amazing that by simply stamping the basic cortical template onto a model of the entire cortical sheet, and adding the rough inter-area connections, they observed many of the qualitative features of actual cortical activity. We tend to focus so much on local synaptic connections in our models of cortex, it is easy to miss the fact that the long-range projections could have similarly drastic influences on the global behavior of the system.\n\nThis paper is just fun. First, it is a great example of how to write a modeling paper for nonmathematicians. It had enough detail to give the modeler a sense for what they did, but not so much detail that your average systems neuroscientist would instinctively throw it in the trash (as is the case with too many modelling papers). Second, it provides a beautiful example of how people interested in systems-level phenomena can build biology into their model without making the model so computationally expensive that it would take fifty years to simulate ten milliseconds of cortical activity. 
It will be very interesting in the future as the hyper-realist Blue Brain style models make contact with these middle-level theories. I don't see conflict, but a future of productive theory co-evolution.\n\n## Monday, February 18, 2008\n\n### Visualizing the SVD\n\nWarning: this post isn't directly about neuroscience, but a mathematical tool that is used quite a bit by researchers.\n\nOne of the most important operations in linear algebra is the singular value decomposition (SVD) of a matrix. Gilbert Strang calls the SVD the climax of his linear algebra course, while Andy Long says, \"If you understand SVD, you understand linear algebra!\" Indeed, it ties about a dozen central concepts from linear algebra into one elegant theorem.\n\nThe SVD has many applications, but the point of this message is to examine the SVD itself, to massage intuitions about what is going on mathematically. To help me build intuitions, I wrote a Matlab function to visualize what is happening in each step of the decomposition (svd_visualize.m, which you can click to download). I have found it quite helpful to play around with the function. It takes in two arguments: a 3x3 matrix (A) and a 3xN 'data' matrix in which each of the N columns is a 'data' point in 3-D space. The function returns the three matrices in the SVD of A, but more importantly it generates four plots to visualize what each factor in the SVD is doing.\n\nTo refresh your memory, the SVD of an mxn matrix A is a factorization of A into three matrices, U, S, and V' such that:\nA=USV'\nwhere V' means the transpose of V. Generally, A is an mxn matrix, U is mxm, S is mxn, and V' is nxn.\n\nOne cool thing about the SVD is that it breaks up the multiplication of a matrix A and a vector x into three simpler matrix transformations which can be easily visualized. To help with this visualization, the function svd_visualize generates four graphs: the first figure plots the original data and the next three plots show how those data are transformed via sequential multiplication by each of the matrices in the SVD.\n\nIn what follows, I explore the four plots using a simple example. The matrix A is a 3x3 rank 2 matrix, and the data is a 'cylinder' of points (a small stack of unit circles each in the X-Y plane at different heights). The first plot of svd_visualize simply shows this data in three-space:",
null,
"In the above figure, the black lines are the standard basis vectors in which the cylinder is initially represented. The green and red lines are the columns of V, which form an orthogonal basis for the same space (more about this anon).\n\nWhen the first matrix in the SVD (V') is applied to the data, this serves to rotate the data in three-space so that the data is represented relative to the V-basis. Spelling this out a bit, the columns of V form an orthogonal basis for three-space. Multiplying a vector by V' changes the coordinate system in which that vector is represented. The original data is represented in the standard basis, multiplication by V' produces that same vector represented in the V-basis. For example, if we multiply V1 (the first column of V) by V, this rotates V1 so that V1 is represented as the point [1 0 0]' relative to the V-basis. Application of this rotation matrix V' to the above data cylinder yields the following:",
null,
"As promised, the data is the exact same as in Figure 1, but our cylinder has been rotated in three-space so that the V-basis vectors lie along the main axes of the plot. The two green vectors are the first two columns of V, which now lie along the two horizontal axes in the figure (for aficianados, they span the row space of A, or the set of all linear combinations of the rows of A). The red vertical line is the third column of V (its span is the null space of A, where the null space of A is the set of all vectors x such that Ax=0). So we see that the V' matrix rotates the data into a coordinate system in which the null space and row space of A can be more readily visualized.\n\nThe second step in the SVD is to multiply our rotated data by the 'singular matrix' S, which is mxn (in this case 3x3). S is a \"diagonal\" matrix that contains nonnegative 'singular values' of A sorted in descending order (technically, the singular values are the square roots of the eigenvalues of A'*A that correspond to its eigenvectors, which are the columns of V). In this case, the singular values are 3 and 1, while the third diagonal elment in S is zero.\n\nWhat does this mean? Generally, multiplying a vector x=(x1,....xn)' by a diagonal matrix with r nonzero elements on the diagonal s1,....sr simply yields b=(s1*x1, s2*x2, .... sr*xr, 0 .. 0). That is, it stretches or contracts the components of x by the magnitude of the the singular values and zeroes out those elements of x that correspond to the zeros on the diagonal. Note that S*V1 (where V1 is the first column of V) would yield b=(s1, 0 0 0 0 0). That is, it yields a vector whose first entry is s1 and the rest zero. Recall this is because S acts on vectors represented in the V-basis, and in the V-basis, V1 is simply (1,0, ..., 0).\n\nApplication of our singular matrix to the above data yields the following:",
null,
"This 3-D space represents the outputs space (range) of the A transformation. In this case, the range happens to be three-space, but if A had been Tx3, the input data in three-space would be sent to a point in a T-dimensional space. The figure shows the columns of the matrix U (in green and red) are aligned with the main axes: so the transform S returns values that are in the range of A, but represented in the orthogonal basis set in U. The green basis vectors are the first two columns of U (and they span the column space of A), while the red vector is the third column of U (which spans the null space of A').\n\nSince the column space of A (for this example) is two dimensional, any point in 3-D space in the input space (the original data) is constrained to be projected onto a plane in the output space.\n\nNotice that the individual circles that made up the cylinder have all turned into ellipses in the column space of A. This is due to the disproportionate stretching action of the singular values: the stretching is maximum for the vectors in the direction of V1. Also note that in the U-basis, S*V1 lies on the same axis as U1 (U1, in the U-basis, is of course (1, 0, 0)), but s1 units along that axis for reasons discussed in the text after Figure 2.\n\nOne way to look at S is that it implements the same linear transformation as the matrix A, but with the inputs and outputs represented in different basis sets. The inputs to S are the data represented in the V-basis, while the outputs from S are the data represented in the U-basis. That makes it clear why we first multiply the data by V': this changes the basis of the input space to that which is appropriate for S. As you might guess, the final matrix, U, simply transforms the output of the S transform from a representation in the U-basis back into the standard basis.\n\nHence, we shouldn't be surprised that the final step in the SVD is to apply the mxm (in this case, 3x3) matrix U to the transformed data represented in the U-basis. Just like V', U is a rotation matrix: it transforms the data from the U-basis (above picture) back to the standard basis. The standard basis vectors are in black in the above picture, and we can see that the U transformation brings them back into alignment with the main axes of the plot:",
null,
"Pretty cool. The SVD lets you see, fairly transparently, the underlying transformations implicitly lurking in any matrix. Unlike many other decompositions (such as a diagonalization), it doesn't require A to have any special properties (e.g., A doesn't have to be square, symmetric, have linearly independent columns, etc). Any matrix can be decomposed into a change of basis (rotation by V'), a simple scaling (and \"flattening\") operation (by the singular matrix S), and a final change of basis (rotation by U).\n\nPostscript: I realize this article uses a bit of technical jargon, so I will post a linear algebra primer someday that explains the terminology. For the aficianados, I have left out some details that would have complicated things and made this post too long. In particular, I focused on how the SVD factors act on \"data\" vectors, but little on the properties of the SVD itself (e.g., how to compute U, S, and V; comparison to orthogonal diagonalization, and tons of other things).\n\nIf you have suggestions for improving svd_visualize, please let me know in the comments or email me (thomson ~at~ neuro [dot] duke -dot- edu)."
]
| [
null,
"http://1.bp.blogspot.com/_IFzDPHUxHI0/R8MvAMnbGxI/AAAAAAAAACM/enthyviVBtk/s200/edelman_model.gif",
null,
"http://4.bp.blogspot.com/_IFzDPHUxHI0/R7seN8nbGqI/AAAAAAAAABU/YDsldRtDY3k/s400/Initial_data.jpg",
null,
"http://1.bp.blogspot.com/_IFzDPHUxHI0/R7sigMnbGsI/AAAAAAAAABk/5cpKuXgOxqU/s400/data_in_V.gif",
null,
"http://1.bp.blogspot.com/_IFzDPHUxHI0/R7s0GMnbGtI/AAAAAAAAABs/CnQoqwnF66Q/s400/transformed_data_in_U.gif",
null,
"http://3.bp.blogspot.com/_IFzDPHUxHI0/R7s0LsnbGuI/AAAAAAAAAB0/Zz6R5WfNpik/s400/transformed_data_in_E.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9251621,"math_prob":0.94424266,"size":20165,"snap":"2020-34-2020-40","text_gpt3_token_len":4584,"char_repetition_ratio":0.14964536,"word_repetition_ratio":0.81680673,"special_character_ratio":0.2211753,"punctuation_ratio":0.10252018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9908373,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T15:45:43Z\",\"WARC-Record-ID\":\"<urn:uuid:0ce52270-e3c3-4ddc-a13d-cb052e11490e>\",\"Content-Length\":\"79682\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2e4d878-13be-49f9-8db3-261a69573bbf>\",\"WARC-Concurrent-To\":\"<urn:uuid:6757d5a6-625a-425a-bc72-ed2c9efbc959>\",\"WARC-IP-Address\":\"172.217.164.161\",\"WARC-Target-URI\":\"http://neurochannels.blogspot.com/2008/02/\",\"WARC-Payload-Digest\":\"sha1:JYJO4Y3DNXFJKKDXBJC5H7AKQQFP3TKY\",\"WARC-Block-Digest\":\"sha1:TQB4ZIGHR6VFS5MR2SLIK77Z2JIFHO6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400211096.40_warc_CC-MAIN-20200923144247-20200923174247-00328.warc.gz\"}"} |
https://answers.everydaycalculation.com/add-fractions/4-50-plus-3-96 | [
"Solutions by everydaycalculation.com\n\n4/50 + 3/96 is 89/800.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 50 and 96 is 2400\n2. For the 1st fraction, since 50 × 48 = 2400,\n4/50 = 4 × 48/50 × 48 = 192/2400\n3. Likewise, for the 2nd fraction, since 96 × 25 = 2400,\n3/96 = 3 × 25/96 × 25 = 75/2400\n192/2400 + 75/2400 = 192 + 75/2400 = 267/2400\n5. 267/2400 simplified gives 89/800\n6. So, 4/50 + 3/96 = 89/800\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
]
| [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7044635,"math_prob":0.9983994,"size":301,"snap":"2020-45-2020-50","text_gpt3_token_len":116,"char_repetition_ratio":0.16835018,"word_repetition_ratio":0.0,"special_character_ratio":0.4551495,"punctuation_ratio":0.072463766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99842036,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T10:15:51Z\",\"WARC-Record-ID\":\"<urn:uuid:31ff1c63-7258-42e2-a336-1ba792a01193>\",\"Content-Length\":\"7666\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f79b57a8-f3cd-4ad3-ab6b-a0e71fa0f15d>\",\"WARC-Concurrent-To\":\"<urn:uuid:94e56aa9-5e6e-48bb-9840-dadccaa0dfaa>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/4-50-plus-3-96\",\"WARC-Payload-Digest\":\"sha1:7XN4MUDMQBLS3SXONFL2OJTK5NDJCC4M\",\"WARC-Block-Digest\":\"sha1:3NH44JYH2ZFKKHCCWI6323XHJ47JPMFQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876307.21_warc_CC-MAIN-20201021093214-20201021123214-00440.warc.gz\"}"} |
https://piping-designer.com/index.php/disciplines/systems/project-management/3291-point-of-total-assumption | [
"# Point of Total Assumption\n\non . Posted in Project Management Engineering\n\nPoint of total assumption, abbreviatef as PTA, is the difference between the ceiling and target prices, divided by the buyer’s portion of the share ratio for that price range, plus the target cost.\n\n## Point of Total Assumption Formula\n\n$$\\large{ PTA = \\frac{ CP \\;-\\; TP }{ BSR} + TC }$$\nSymbol\n$$\\large{ PTA }$$ = point of total assumption\n$$\\large{ BSR }$$ = buyer's share ratio\n$$\\large{ CP }$$ = ceiling price\n$$\\large{ TC }$$ = target cost\n$$\\large{ TP }$$ = target price",
null,
""
]
| [
null,
"https://piping-designer.com/images/Piping%20Designer%20Gallery/Piping-Designer_Logo_1.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87816507,"math_prob":0.9997009,"size":507,"snap":"2023-14-2023-23","text_gpt3_token_len":144,"char_repetition_ratio":0.18290259,"word_repetition_ratio":0.0,"special_character_ratio":0.3195266,"punctuation_ratio":0.086419754,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999984,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-29T12:21:57Z\",\"WARC-Record-ID\":\"<urn:uuid:ed6ca4a2-9dc0-433c-894c-a6e8295995b0>\",\"Content-Length\":\"26437\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:328acd15-e53a-4700-85cd-2ee33a6735fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:9fe0ad93-e5ae-4b74-a11a-d59db543e015>\",\"WARC-IP-Address\":\"200.225.40.42\",\"WARC-Target-URI\":\"https://piping-designer.com/index.php/disciplines/systems/project-management/3291-point-of-total-assumption\",\"WARC-Payload-Digest\":\"sha1:XMDJ2EDTC6TM7JNTNZTL5HBSGKQF4T6L\",\"WARC-Block-Digest\":\"sha1:ZAQI6IAWYJUHGBOSP76EHTOFJTYDU47O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644855.6_warc_CC-MAIN-20230529105815-20230529135815-00358.warc.gz\"}"} |
https://www.tensorflow.org/tfx/transform/api_docs/python/tft/deduplicate_tensor_per_row | [
"# tft.deduplicate_tensor_per_row\n\nDeduplicates each row (0-th dimension) of the provided tensor.\n\n`input_tensor` A two-dimensional `Tensor` or `SparseTensor`. The first dimension is assumed to be the batch or \"row\" dimension, and deduplication is done on the 2nd dimension. If the Tensor is 1D it is returned as the equivalent `SparseTensor` since the \"row\" is a scalar can't be further deduplicated.\n`name` Optional name for the operation.\n\nA `SparseTensor` containing the unique set of values from each row of the input. Note: the original order of the input may not be preserved."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7324404,"math_prob":0.7898276,"size":616,"snap":"2020-45-2020-50","text_gpt3_token_len":159,"char_repetition_ratio":0.16503268,"word_repetition_ratio":0.0,"special_character_ratio":0.21915585,"punctuation_ratio":0.10185185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9607357,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T03:26:55Z\",\"WARC-Record-ID\":\"<urn:uuid:66c763c4-d2e3-44fb-8b95-e2ea607d058b>\",\"Content-Length\":\"358691\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eebadfac-736e-43b6-9289-db29d7a6052a>\",\"WARC-Concurrent-To\":\"<urn:uuid:95d1b32a-7643-46f3-a55d-2069e3a600dc>\",\"WARC-IP-Address\":\"172.217.164.174\",\"WARC-Target-URI\":\"https://www.tensorflow.org/tfx/transform/api_docs/python/tft/deduplicate_tensor_per_row\",\"WARC-Payload-Digest\":\"sha1:NO5G47GT46PNLFZBNGFRDVBUIJ2HW32H\",\"WARC-Block-Digest\":\"sha1:NHZQXIRXBZMFEWWNXKLSZ6G45JMDBA5S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107896048.53_warc_CC-MAIN-20201028014458-20201028044458-00559.warc.gz\"}"} |
https://se.mathworks.com/matlabcentral/profile/authors/6657407?detail=all | [
"Community Profile",
null,
"# Mausmi Verma\n\nLast seen: 4 månader ago Active since 2021\n\n#### Statistics\n\n•",
null,
"#### Content Feed\n\nView by\n\nQuestion\n\nHow to store double value in a matrix\nI have a variable as A <1x3 double> = 1 9 3 I want to store it in a predefined matrix B and want answer to look like B=4...\n\n### 1\n\nQuestion\n\nnested cell array into single cell array\nA={<1x1 cell> <1x3 cell> 4 <1x4 cell> } <1x1>= A{1, 1}{1, 1}{1, 1} <1x3 double> =[2 3 4] <1x3>= A{1, 2}{1, 1} <0x0 doub...\n\n### 1\n\nQuestion\n\nhow to convert array of cells within array of cells in single cell array\nA= {<1x1 cell> <1x3 cell> [4,0] <1x4 cell>} where <1x1>= [2,3,4] <1x3>= [ ] [3,4] [3,8,13] <1x4>= [9,4] [9,8,13] [ ]...\n\n### 1\n\nQuestion\n\nConvert nested cell array into single cell array\nI am getting variable value in workspace in the form of nested cell array A={{ } { }} but i want answer in...\n\n### 1\n\nQuestion\n\nmatching the element of matrix with an array\nSuppose i have an array as, A={ [2,3;2,7] [3,2;3,4;3,8] [4,3;4,5] [5,4;5,10] [7,2;7,8;7,12] } and a matrix as B=[17,8,14,4...\n\n### 1\n\nQuestion\n\nhow to combine two array of cells into one cellwise\nI have two cell arrays with cells of different dimension and i want to combine them into one as, A={ [2,3 ; 2,7] [3,2 ;3,4 ;3,...\n\n### 1\n\nQuestion\n\nRemoving specific value from cell array\nLets suppose i have a cell array as: A={[1 2 4 6 7]; [1 2 5 7 9 8]; [3 4 6 8]; [1 2 3 4 5 6]] now i want to remove the element...\n\n### 1\n\nQuestion\n\nI want to generate a matrix of random numbers between 0 and 2.5 with length=8 and the sum of generated numbers be 20\nI want to generate a matrix of random numbers between 0 and 2.5 The length of generated matrix should be 8 and the total sum of..."
]
| [
null,
"https://se.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/6657407_1571713676492_DEF.jpg",
null,
"https://se.mathworks.com/matlabcentral/profile/badges/Thankful_5.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6901101,"math_prob":0.94512254,"size":2550,"snap":"2022-05-2022-21","text_gpt3_token_len":937,"char_repetition_ratio":0.15710919,"word_repetition_ratio":0.16666667,"special_character_ratio":0.36941177,"punctuation_ratio":0.14026402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9733905,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T14:46:59Z\",\"WARC-Record-ID\":\"<urn:uuid:15acbbd0-d5b8-45e5-9853-ecabbc99945c>\",\"Content-Length\":\"85728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18971c08-f8ce-4c6c-8a46-877faf401298>\",\"WARC-Concurrent-To\":\"<urn:uuid:2e269fce-cef3-4654-8fbc-5ddb27232cc6>\",\"WARC-IP-Address\":\"104.69.217.80\",\"WARC-Target-URI\":\"https://se.mathworks.com/matlabcentral/profile/authors/6657407?detail=all\",\"WARC-Payload-Digest\":\"sha1:KTTDRAEPRWIKWDJVO56QC4XSY4O6CJVO\",\"WARC-Block-Digest\":\"sha1:R2I6RV7YCE3FPU2PLMYWQMHLDVZYUKDW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016853.88_warc_CC-MAIN-20220528123744-20220528153744-00413.warc.gz\"}"} |
https://www.whorld.org/Help/Parameters/Even_Shear.htm | [
"### Even Shear\n\nEven Shear makes the curvature at even vertices asymmetrical. Curve asymmetry at odd vertices is controlled separately, via the Odd Shear parameter. The two curve control points at each vertex are normally contrained to be equidistant from the vertex. Removing this constraint yields two distinct distances, one between the counterclockwise point (A) and the vertex, and the other between the clockwise point (B) and the vertex. Shear changes the ratio of these two distances. Shear is normalized so that at zero, the control points are equidistant as usual, while at −1, the counterclockwise point coincides with the vertex, eliminating curvature on that side. Note that for Even Shear to have any effect, Even Curve must have a nonzero value.",
null,
""
]
| [
null,
"https://www.whorld.org/Help/images/even-shear.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9219409,"math_prob":0.9904472,"size":744,"snap":"2023-14-2023-23","text_gpt3_token_len":158,"char_repetition_ratio":0.1418919,"word_repetition_ratio":0.0,"special_character_ratio":0.18682796,"punctuation_ratio":0.1119403,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9814487,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T00:29:03Z\",\"WARC-Record-ID\":\"<urn:uuid:54a4f183-f3ad-4baa-8b8b-ccdf4cca9385>\",\"Content-Length\":\"1624\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a80ee91-203f-41bf-a099-2ede15ecc6f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:20f4f0a1-564a-4351-a664-b171ac7c648e>\",\"WARC-IP-Address\":\"107.180.40.20\",\"WARC-Target-URI\":\"https://www.whorld.org/Help/Parameters/Even_Shear.htm\",\"WARC-Payload-Digest\":\"sha1:NGI64JQOQX6JYJBF26DNKNL3XI67PLLX\",\"WARC-Block-Digest\":\"sha1:DP2QYN2NDNIGILHD2HQRKF3BW57IKCCN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646181.29_warc_CC-MAIN-20230530230622-20230531020622-00172.warc.gz\"}"} |
https://www.bristolmathsresearch.org/seminar/min-xu/ | [
"# Min Xu\n\nRutgers University\n\n### High-dimensional nonparametric density estimation via symmetry and shape constraints\n\nStatistics Seminar\n12th March 2021, 4:00 pm – 5:00 pm\n,\n\nAbstract: We tackle the problem of high-dimensional nonparametric density estimation by taking the class of log-concave densities on R^p and incorporating within it symmetry assumptions, which facilitate scalable estimation algorithms and can mitigate the curse of dimensionality. Our main symmetry assumption is that the super-level sets of the density are K-homothetic (i.e. scalar multiples of a convex body K ⊆ R^p). When K is known, we prove that the K-homothetic log-concave maximum likelihood estimator based on n independent observations from such a density achieves the minimax optimal rate of convergence with respect to, e.g., squared Hellinger loss, of order n^(- 4/5), independent of p. Moreover, we show that the estimator is adaptive in the sense that if the data generating density admits a special form, then a nearly parametric rate may be attained. We also provide worst-case and adaptive risk bounds in cases where K is only known up to a positive definite transformation, and where it is completely unknown and must be estimated nonparametrically. Our estimation algorithms are fast even when n and p are on the order of hundreds of thousands, and we illustrate the strong finite-sample performance of our methods on simulated data. Joint work with Richard Samworth (Cambridge).\n\nOrganiser: Henry Reeve",
null,
""
]
| [
null,
"https://www.bristolmathsresearch.org/wp-content/plugins/cookies-for-comments/css.php",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8993299,"math_prob":0.9776748,"size":1461,"snap":"2021-43-2021-49","text_gpt3_token_len":304,"char_repetition_ratio":0.111873716,"word_repetition_ratio":0.0,"special_character_ratio":0.19575633,"punctuation_ratio":0.101886794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.978688,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T04:54:03Z\",\"WARC-Record-ID\":\"<urn:uuid:4b4508b2-ab20-4642-82be-64b7316f205c>\",\"Content-Length\":\"45778\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74d302d7-622e-4a5e-9063-35545d44d555>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab0c0df3-9a2d-4ede-820e-77d650ac1e23>\",\"WARC-IP-Address\":\"89.145.92.29\",\"WARC-Target-URI\":\"https://www.bristolmathsresearch.org/seminar/min-xu/\",\"WARC-Payload-Digest\":\"sha1:ZDFJ74OUJLUWTSGRD4B3KY5KR2O3H2EM\",\"WARC-Block-Digest\":\"sha1:VG7O2MCROFL6QLGBTRQYYV5TNNSXY254\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363290.39_warc_CC-MAIN-20211206042636-20211206072636-00054.warc.gz\"}"} |
https://math.stackexchange.com/questions/3257128/am-gm-inequality-involving-squares-and-proof | [
"# AM-GM Inequality Involving Squares and Proof\n\nProve:\n\n$$(a^2 + b^2 + c^2)/3 \\geq ((a + b + c)/3)^2$$ OR $$(a^2 + b^2 + c^2)/3 \\leq ((a + b + c)/3)^2$$\n\nfor all $$a, b, c \\geq 0.$$ The problem wants me to find which inequality is correct and then provide a proof for it.\n\nI'm pretty sure the first inequality is correct (I assumed this after substituting random values as $$a, b,$$ and $$c$$). I've tried factoring the inequalities and this is what I ended up with:\n\n$$9(a^2 + b^2 + c^2) \\geq 3(a^2 + b^2 + c^2 - 2ab + 2ac + 2bc)$$\n\nHowever, I don't know where to go from here. I'm also meant to be applying the AM-GM theorem to solve this problem but I'm unsure where and how to apply it in this situation. Any help would be extremely appreciated :)\n\nPedestrian:\n\n$$a,b,c \\ge 0.$$\n\nThe first inequality is equivalent to (Dr. Graubner):\n\n$$a^2+b^2+c^2 \\ge ab+bc +ac$$.\n\nAM-GM:\n\n$$a^2+b^2 \\ge 2ab$$; $$a^2+c^2 \\ge 2ac$$; $$b^2+c^2\\ge 2bc$$;\n\nAdding LHS and RHS of these inequalities:\n\n$$2(a^2+b^2+c^2) \\ge 2(ab+ac+bc),$$\n\nand we are done.\n\n• Can you please explain how you got to the equivalent first inequality? – Alexander B Jun 10 at 9:04\n• Alexander: $3(a^2+b^2+c^2) \\ge (a^2+b^2+c^2+2ab+2ac+2bc)$. Subtract the RHS square terms : $2a^+2b^2+2c^2 \\ge 2(ab+ac+bc)$OK? – Peter Szilas Jun 10 at 9:59\n\nYour first inequality is equivalent to $$a^2+b^2+c^2\\geq ab+bc+ca$$ and this is $$(a-b)^2+(b-c)^2+(c-a)^2\\geq 0$$ which is true.\n\nYes, we can use AM-GM here: $$\\frac{a^2+b^2+c^2}{3}-\\left(\\frac{a+b+c}{3}\\right)^2=\\frac{1}{9}\\sum_{cyc}(2a^2-2ab)=$$ $$=\\frac{1}{9}\\sum_{cyc}(a^2+b^2-2ab)\\geq\\frac{1}{9}\\sum_{cyc}(2\\sqrt{a^2b^2}-2ab)=\\frac{2}{9}\\sum_{cyc}(|ab|-ab)\\geq0.$$ Also, we can use AM-GM for three variables.\n\nIndeed, $$a^3+b^3+c^3-3abc=(a+b+c)\\sum_{cyc}(a^2-ab)=$$ $$=\\frac{9}{2}(a+b+c)\\left(\\frac{a^2+b^2+c^2}{3}-\\left(\\frac{a+b+c}{3}\\right)^2\\right)$$ and since by AM-GM $$a^3+b^3+c^3-3abc\\geq0$$ and $$a+b+c\\geq0,$$ we are done!\n\n• Is there a simpler way to prove the inequality but still use AM-GM? – Alexander B Jun 10 at 8:12\n• @Alexander B I posted another way, but I think the first was much more easier. – Michael Rozenberg Jun 10 at 8:16"
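The accepted argument boils down to two polynomial identities: the gap between the two sides is a sum of squares divided by 9, and the lemma a²+b²+c² ≥ ab+bc+ca is half a sum of squares (equivalently, the three pairwise AM-GM inequalities added together). A short SymPy check, SymPy being a convenience choice here:

```python
import sympy as sp

a, b, c = sp.symbols('a b c', nonnegative=True)

lhs = (a**2 + b**2 + c**2) / 3
rhs = ((a + b + c) / 3)**2

# The gap is a sum of squares over 9, hence nonnegative: lhs >= rhs.
gap = sp.simplify(lhs - rhs - ((a - b)**2 + (b - c)**2 + (c - a)**2) / 9)
assert gap == 0

# The lemma used in the answers: a^2 + b^2 + c^2 >= ab + bc + ca,
# i.e. half of (a-b)^2 + (b-c)^2 + (c-a)^2 >= 0.
lemma_gap = sp.simplify((a**2 + b**2 + c**2) - (a*b + b*c + c*a)
                        - ((a - b)**2 + (b - c)**2 + (c - a)**2) / 2)
assert lemma_gap == 0
print("identities verified")
```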
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9376174,"math_prob":1.0000019,"size":692,"snap":"2019-26-2019-30","text_gpt3_token_len":234,"char_repetition_ratio":0.12645349,"word_repetition_ratio":0.02962963,"special_character_ratio":0.3684971,"punctuation_ratio":0.08280255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000061,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T04:17:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f6ae2955-301c-497a-bf9a-f830266302a1>\",\"Content-Length\":\"155037\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94cddb31-50b0-4d68-b4c6-8ddb90e4b891>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4cdb0ef-5046-4fc2-8aee-845b2bf76fa0>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3257128/am-gm-inequality-involving-squares-and-proof\",\"WARC-Payload-Digest\":\"sha1:LZ5W46K24KIDQCC4ATP6EUTXKPYVWX6K\",\"WARC-Block-Digest\":\"sha1:DY7KZWGRP6PGT2SOQP5OBQL4FXIIWRHR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525046.5_warc_CC-MAIN-20190717041500-20190717063500-00558.warc.gz\"}"} |
https://www.colorhexa.com/798880 | [
"# #798880 Color Information\n\nIn a RGB color space, hex #798880 is composed of 47.5% red, 53.3% green and 50.2% blue. Whereas in a CMYK color space, it is composed of 11% cyan, 0% magenta, 5.9% yellow and 46.7% black. It has a hue angle of 148 degrees, a saturation of 5.9% and a lightness of 50.4%. #798880 color hex could be obtained by blending #f2ffff with #001101. Closest websafe color is: #669999.\n\n• R 47\n• G 53\n• B 50\nRGB color chart\n• C 11\n• M 0\n• Y 6\n• K 47\nCMYK color chart\n\n#798880 color description : Dark grayish cyan - lime green.\n\n# #798880 Color Conversion\n\nThe hexadecimal color #798880 has RGB values of R:121, G:136, B:128 and CMYK values of C:0.11, M:0, Y:0.06, K:0.47. Its decimal value is 7964800.\n\nHex triplet RGB Decimal 798880 `#798880` 121, 136, 128 `rgb(121,136,128)` 47.5, 53.3, 50.2 `rgb(47.5%,53.3%,50.2%)` 11, 0, 6, 47 148°, 5.9, 50.4 `hsl(148,5.9%,50.4%)` 148°, 11, 53.3 669999 `#669999`\nCIE-LAB 55.31, -7.105, 2.437 20.585, 23.232, 23.821 0.304, 0.343, 23.232 55.31, 7.511, 161.067 55.31, -7.856, 4.525 48.199, -8.115, 4.438 01111001, 10001000, 10000000\n\n# Color Schemes with #798880\n\n• #798880\n``#798880` `rgb(121,136,128)``\n• #887981\n``#887981` `rgb(136,121,129)``\nComplementary Color\n• #7a8879\n``#7a8879` `rgb(122,136,121)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #798888\n``#798888` `rgb(121,136,136)``\nAnalogous Color\n• #88797a\n``#88797a` `rgb(136,121,122)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #887988\n``#887988` `rgb(136,121,136)``\nSplit Complementary Color\n• #888079\n``#888079` `rgb(136,128,121)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #807988\n``#807988` `rgb(128,121,136)``\n• #818879\n``#818879` `rgb(129,136,121)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #807988\n``#807988` `rgb(128,121,136)``\n• #887981\n``#887981` `rgb(136,121,129)``\n• #55605a\n``#55605a` `rgb(85,96,90)``\n• #616d67\n``#616d67` `rgb(97,109,103)``\n• #6d7b73\n``#6d7b73` `rgb(109,123,115)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #87948d\n``#87948d` `rgb(135,148,141)``\n• #94a09a\n``#94a09a` `rgb(148,160,154)``\n• #a2aca6\n``#a2aca6` `rgb(162,172,166)``\nMonochromatic Color\n\n# Alternatives to #798880\n\nBelow, you can see some colors close to #798880. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #79887c\n``#79887c` `rgb(121,136,124)``\n• #79887e\n``#79887e` `rgb(121,136,126)``\n• #79887f\n``#79887f` `rgb(121,136,127)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #798881\n``#798881` `rgb(121,136,129)``\n• #798883\n``#798883` `rgb(121,136,131)``\n• #798884\n``#798884` `rgb(121,136,132)``\nSimilar Colors\n\n# #798880 Preview\n\nThis text has a font color of #798880.\n\n``<span style=\"color:#798880;\">Text here</span>``\n#798880 background color\n\nThis paragraph has a background color of #798880.\n\n``<p style=\"background-color:#798880;\">Content here</p>``\n#798880 border color\n\nThis element has a border color of #798880.\n\n``<div style=\"border:1px solid #798880;\">Content here</div>``\nCSS codes\n``.text {color:#798880;}``\n``.background {background-color:#798880;}``\n``.border {border:1px solid #798880;}``\n\n# Shades and Tints of #798880\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010101 is the darkest color, while #f6f7f6 is the lightest one.\n\n• #010101\n``#010101` `rgb(1,1,1)``\n• #0a0b0b\n``#0a0b0b` `rgb(10,11,11)``\n• #131615\n``#131615` `rgb(19,22,21)``\n• #1d201e\n``#1d201e` `rgb(29,32,30)``\n• #262b28\n``#262b28` `rgb(38,43,40)``\n• #2f3532\n``#2f3532` `rgb(47,53,50)``\n• #383f3c\n``#383f3c` `rgb(56,63,60)``\n• #424a45\n``#424a45` `rgb(66,74,69)``\n• #4b544f\n``#4b544f` `rgb(75,84,79)``\n• #545f59\n``#545f59` `rgb(84,95,89)``\n• #5d6963\n``#5d6963` `rgb(93,105,99)``\n• #66736c\n``#66736c` `rgb(102,115,108)``\n• #707e76\n``#707e76` `rgb(112,126,118)``\n• #798880\n``#798880` `rgb(121,136,128)``\n• #83918a\n``#83918a` `rgb(131,145,138)``\n• #8e9a94\n``#8e9a94` `rgb(142,154,148)``\n• #98a49e\n``#98a49e` `rgb(152,164,158)``\n``#a3ada7` `rgb(163,173,167)``\n``#adb6b1` `rgb(173,182,177)``\n• #b7bfbb\n``#b7bfbb` `rgb(183,191,187)``\n• #c2c9c5\n``#c2c9c5` `rgb(194,201,197)``\n• #ccd2cf\n``#ccd2cf` `rgb(204,210,207)``\n• #d7dbd9\n``#d7dbd9` `rgb(215,219,217)``\n• #e1e4e2\n``#e1e4e2` `rgb(225,228,226)``\n• #ebedec\n``#ebedec` `rgb(235,237,236)``\n• #f6f7f6\n``#f6f7f6` `rgb(246,247,246)``\nTint Color Variation\n\n# Tones of #798880\n\nA tone is produced by adding gray to any pure hue. In this case, #798880 is the less saturated color, while #04fd78 is the most saturated one.\n\n• #798880\n``#798880` `rgb(121,136,128)``\n• #6f927f\n``#6f927f` `rgb(111,146,127)``\n• #669b7f\n``#669b7f` `rgb(102,155,127)``\n• #5ca57e\n``#5ca57e` `rgb(92,165,126)``\n• #52af7d\n``#52af7d` `rgb(82,175,125)``\n• #48b97d\n``#48b97d` `rgb(72,185,125)``\n• #3fc27c\n``#3fc27c` `rgb(63,194,124)``\n• #35cc7b\n``#35cc7b` `rgb(53,204,123)``\n• #2bd67b\n``#2bd67b` `rgb(43,214,123)``\n• #21e07a\n``#21e07a` `rgb(33,224,122)``\n• #18e97a\n``#18e97a` `rgb(24,233,122)``\n• #0ef379\n``#0ef379` `rgb(14,243,121)``\n• #04fd78\n``#04fd78` `rgb(4,253,120)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #798880 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
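The conversions quoted at the top of the page (hex → RGB, CMYK, HSL) can be reproduced with the Python standard library; the rounding below is only there to match the page's one-decimal presentation.

```python
import colorsys

def hex_to_conversions(hex_code):
    r, g, b = (int(hex_code[i:i + 2], 16) for i in (0, 2, 4))
    rf, gf, bf = r / 255, g / 255, b / 255

    # CMYK from normalized RGB.
    k = 1 - max(rf, gf, bf)
    denom = (1 - k) or 1                       # avoid division by zero for pure black
    c, m, y = ((1 - ch - k) / denom for ch in (rf, gf, bf))

    # HSL via colorsys (rgb_to_hls returns hue, lightness, saturation in 0..1).
    h, l, s = colorsys.rgb_to_hls(rf, gf, bf)
    return (r, g, b), (c, m, y, k), (h * 360, s, l)

rgb, cmyk, hsl = hex_to_conversions("798880")
print("RGB :", rgb)                                            # (121, 136, 128)
print("CMYK:", [round(v * 100, 1) for v in cmyk])              # [11.0, 0.0, 5.9, 46.7]
print("HSL :", round(hsl[0]), [round(v * 100, 1) for v in hsl[1:]])  # 148 [5.9, 50.4]
```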
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.50974303,"math_prob":0.59309727,"size":3710,"snap":"2020-34-2020-40","text_gpt3_token_len":1620,"char_repetition_ratio":0.12250405,"word_repetition_ratio":0.007352941,"special_character_ratio":0.5692722,"punctuation_ratio":0.23378076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99290437,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T12:32:21Z\",\"WARC-Record-ID\":\"<urn:uuid:3fed9222-336e-49a7-ba47-6c7d4fb46d7d>\",\"Content-Length\":\"36340\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5b93df2-bfd5-47ad-9588-826d547a83f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:12a19d83-5a6a-458c-8c75-5e27eada7ea1>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/798880\",\"WARC-Payload-Digest\":\"sha1:CV7NBEO3TP2MTIZ5I37BOIGCQ6PH3MYV\",\"WARC-Block-Digest\":\"sha1:TWAGD6O5YFK77UMTXSAV2SK6HTFS4OBI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735810.18_warc_CC-MAIN-20200803111838-20200803141838-00532.warc.gz\"}"} |
http://ristoranteclass.it/11-23/8961.html | [
"# first 20 digits of pi\n\n### Here are the first 10,000 digits of Pi in honor of Pi Day ...\n\nOne way to remember the first few digits of pi is to count the letters in the words of this phrase: “How I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.”\n\n### Digits Of Pi\n\nToday is Pi Day — the day each year, March 14, that follows the first three digits of pi (3.14). And this year’s Pi Day is a special one: Since — in the U.S. — the date is represented as 3 ...\n\n### One billion digits of pi - Massachusetts Institute of ...\n\n· Pi is the ratio of the circumference to the diameter (the diameter is 2 times the radius) of a circle. Computing pi is a common way to judge the computing power of supercomputers, and mathematicians now know approximately 10 trillion digits of pi.\n\n### What are the first twenty digets of pie? | Yahoo Answers\n\nFirst 200 Million Digits of Pi Activity: Pi goes on forever in random numbers never repeating itself. It is both an irrational and transcendental number. Pi has been computed to over one trillion digits …\n\n### The First Thousand Digits of Pi - Fact Monster\n\n1 Million Digits of Pi The first 10 digits of pi (π) are 3.1415926535. The first million digits of pi (π) are below, got a good memory? Then recite as many digits as you can in our quiz!! Why not calculate the circumference of a circle using pi here. Or simply learn about pi here.Maximize the fun you can have this Pi Day by checking out our Pi Day Stuff, Pi Day Deals and Pi Day Celebrations!\n\n### Pi to 20 decimal places! - Eve Astrid Andersson\n\nMore digits: Scroll down to see the first 10,000 digits of Pi at the bottom of this page, or grab even more using the links below. Files containing digits: 10 50 100 1000 10000 100000; 1 million digits of Pi (Might take a while to download) The Pi searcher can show digits of Pi anywhere in the first 200 million digits, using the second line in ...\n\n### Analyzing the first 10 million digits of pi: Randomness ...\n\nFor pi and e, there are no “half to even” cases, since their binary expansions are infinite. This makes the rounding rule simple: if the rounding bit is 0, round down; if the rounding bit is 1, round up. I will show the correctly rounded approximations of pi and e in these five formats. Pi (π) Here are the first 50 decimal digits of pi:\n\n### One Million Digits of Pi On One Page!- [Plus Guides And ...\n\nHowever, Pi starts with 3 which is also a digit. Thus, if you start at 3, then the twenty-fifth digit of Pi is 3. First 25 digits of Pi: The Pi number above gives you \"3.\" followed by 25 digits after the decimal point. If you want to just memorize, learn or see the first 25 digits of Pi …\n\n### First 200 Million Digits of Pi - Pi Across America\n\nOne of the challenges on w3resources is to print pi to 'n' decimal places. Here is my code: from math import pi fraser = str(pi) length_of_pi = [] number_of_places = raw_input(\"Enter the number...\n\n### The First 25 Million Digits of Pi Book - Optional ...\n\nSome spent their lives calculating the digits of Pi, but until computers, less than 1,000 digits had been calculated. In 1949, a computer calculated 2,000 digits and the race was on. Millions of digits have been calculated, with the record held (as of September 1999) by a supercomputer at the University of Tokyo that calculated 206,158,430,000 digits.\n\n### How Many Digits Of Pi You Have To Have Memorized To …\n\nWhat is the first 20 digits of Pi :) - 3329671 metric mean of 5 and 125. A. 65 B. 
25 C. 2√30 D. √130 4. Suppose the altitude to the hypotenuse of a right triangle bisects the hypotenuse.\n\n### Pi to 20 decimal places\n\n3. 1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679 ...\n\n### Digits of Pi - Up to 1 Million Digits\n\nThe digits to the right of its decimal point can keep going forever, and there is absolutely no pattern to these digits. A team of researchers at Tokyo University in Japan calculated the digits of pi to 1.24 trillion places. Chances are, you'll never need to know even the first ten digits, but just for fun, here are the first thousand: π = 3.\n\n### First 20 Digits of e - Miniwebtool\n\n20 Digits of Pi. STUDY. Flashcards. Learn. Write. Spell. Test. PLAY. Match. Gravity. Created by. Person4525. Created for Geometry. Terms in this set (18) What are the first 3 digits of pi? 3.14. What are the first 4 digits of pi? 3.141. What are the first 5 digits of pi? 3.1415. What are the first 6 digits of pi? 3.14159. What are the first 7 ...\n\n### Print pi to a number of decimal places - Stack Overflow\n\nPi to 20 decimal places! collected by Eve Andersson : Home: Pi: Digits: 20 Decimal Places 3. 14159265358979323846: Great Pi Day Gift! Los Boludos Made with original vintage vacuum tubes! ...\n\n### The Square Root of Two to 1 Million Digits\n\n· The Square Root of Two to 1 Million Digits What follows are the first 1 million digits of the square root of 2. Actually there are slightly more than 1M digits here. These digits were computed by Robert Nemiroff (George Mason University and NASA Goddard Space Flight Center) and checked by Jerry Bonnell (University Space Research Association and ...\n\n### Pi Day tip on how to remember mathematical constant …\n\nAs we will show, reaching 10 trillion digits of Pi is much more difficult than 5 trillion digits using our current methods and 2010 computer hardware. Hardware: Shigeru Kondo's Desktop The machine we used is mostly the same as the previous computation.\n\n### 2000 places of Pi - MacTutor History of Mathematics\n\nThe First 25 Million Digits of Pi Book makes a coffee table book worthy of conversation, or treasure it on your book shelf along with other math or science classics. Whether you are an expert or just a Pi fan, this is a book you will enjoy having.\n\n### pi to 20 digits - Wolfram|Alpha\n\n10000 digits of e 2.718281828459045235360287471352662497757247093699959574966967627724076630353 ...\n\n### Pi - Wikipedia\n\n· 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510\n\n### 10000 digits of e - MacTutor History of Mathematics\n\nHowever, Pi starts with 3 which is also a digit. Thus, if you start at 3, then the twentieth digit of Pi is 4. First 20 digits of Pi: The Pi number above gives you \"3.\" followed by 20 digits after the decimal point. If you want to just memorize, learn or see the first 20 digits of Pi including the 3, then you can omit the last digit. Pi Decimal ...\n\n### What are the first 20 digits of pi - Answers\n\nSee the digits of pi, top 10, 20, 30, 40, 50, 60, 100, 200, 300, 400, 1000, 10000, 100000 digits of pi\n\n### First 500 Digits of Pi - Mrs. Jackson's Math Class\n\nOne billion digits of π. One billion (10^9) digits of pi (actually 1,000,000,001 digits if you count the initial \"3\") are in the file pi-billion.txt. The MD5 checksum is in pi-billion.md5. 
JA0HXV has calculated 100 billion digits of pi and posted them at the website: ...\n\n### First 20 Digits of Pi - Miniwebtool\n\nThe first 20 digits in hexadecimal (base 16) ... Pi Day in 2015 was particularly significant because the date and time 3/14/15 9:26:53 reflected many more digits of pi. In parts of the world where dates are commonly noted in day/month/year format, ...\n\n### Pi and e In Binary - Exploring Binary\n\nWelcome to Mrs. Jackson's Math Class. Learn the digits of Pi for the Pi contest. Here are 500 digits of Pi\n\n### Top 10000 Digits Of Pi\n\n· Here are the first 10,000 digits of Pi in honor of Pi Day. Share this article share tweet text email link Luke Kerr-Dineen. like March 14, 2017 7:32 am. It’s Pi Day ...\n\n### Pi - 10 Trillion Digits - numberworld.org\n\n· The first 51 digits of pi The important ones. For all practical purposes no more than 10 digits of pi are required for mathematic computation, however even with astronomically precise calculations no more than fifty are really necessary.\n\n### what is the first 25 digits to pi? | Yahoo Answers\n\nCompute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. For math, science, nutrition, history ...\n\nsitemap"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86752075,"math_prob":0.8814128,"size":6629,"snap":"2020-24-2020-29","text_gpt3_token_len":1712,"char_repetition_ratio":0.16890566,"word_repetition_ratio":0.0677392,"special_character_ratio":0.30743703,"punctuation_ratio":0.15997182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938169,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T20:18:41Z\",\"WARC-Record-ID\":\"<urn:uuid:89fd0459-7590-4e43-baa4-07f108732220>\",\"Content-Length\":\"18134\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e74bbf5c-ba52-470a-bcf7-8888315e5860>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c11197d-5002-4447-874d-6156b5553dd3>\",\"WARC-IP-Address\":\"104.27.177.86\",\"WARC-Target-URI\":\"http://ristoranteclass.it/11-23/8961.html\",\"WARC-Payload-Digest\":\"sha1:YPFLCJ4RQOVC5OVXWNRQJDLM2ONCHDY6\",\"WARC-Block-Digest\":\"sha1:6YVUP2X6IULUWY6TBDTRCZ2W64RCEB7I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655890181.37_warc_CC-MAIN-20200706191400-20200706221400-00400.warc.gz\"}"} |
https://scholarsarchive.byu.edu/etd/1332/ | [
"## Theses and Dissertations\n\n### How Eighth-Grade Students Estimate with Fractions\n\n#### Abstract\n\nThis study looked at what components are in student solutions to computational estimation problems involving fractions. Past computational estimation research has focused on strategies used for estimating with whole numbers and decimals while neglecting those used for fractions. An extensive literature review revealed one study specifically directed toward estimating with fractions (Hanson & Hogan, 2000) that researched adult estimation strategies and not children's strategies. Given the lack of research on estimation strategies that children use to estimate with fractions, this study used qualitative research methods to find which estimation components were in 10 eighth-grade students' solutions to estimation problems involving fractions. Analysis of this data differs from previous estimation studies in that it considers actions as the unit of analysis, providing a smaller grain size that reveals the components used in each estimation solution. The analysis revealed new estimation components as well as a new structure for categorizing the components. The new categories are whole number and decimal estimation components, fraction estimation components, and components used with either fractions or whole numbers and decimals. The results from this study contribute to the field of mathematics education by identifying new components to consider when conducting future studies in computational estimation. The findings also suggest that future research on estimation should use a smaller unit of analysis than a solution response to a task, the typical unit of analysis in previous research. Additionally, these results contribute to mathematics teaching by suggesting that all components of an estimation solution be considered when teaching computational estimation, not just the overarching strategy.\n\nMA\n\n#### College and Department\n\nPhysical and Mathematical Sciences; Mathematics Education\n\n2008-03-13\n\nThesis\n\n#### Handle\n\nhttp://hdl.lib.byu.edu/1877/etd2294\n\n#### Keywords\n\nmathematics education, computational estimation, estimation\n\nEnglish\n\nCOinS"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86570215,"math_prob":0.67123663,"size":2324,"snap":"2023-40-2023-50","text_gpt3_token_len":418,"char_repetition_ratio":0.16896552,"word_repetition_ratio":0.0,"special_character_ratio":0.17211704,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9580133,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T19:59:35Z\",\"WARC-Record-ID\":\"<urn:uuid:6649f3a2-4a17-45ea-81b3-de3c6b165146>\",\"Content-Length\":\"39279\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be72bb4d-e045-46b5-98cf-4fdffd7d5b3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:aad6d704-b344-4a74-995c-75c0c65e0aac>\",\"WARC-IP-Address\":\"13.57.92.51\",\"WARC-Target-URI\":\"https://scholarsarchive.byu.edu/etd/1332/\",\"WARC-Payload-Digest\":\"sha1:TWURGHBHNGLSKSIZDSCD7JDG7ACGFJSY\",\"WARC-Block-Digest\":\"sha1:6U5MXDCLDTB6BKD46BC2AZCBU5QAV5WV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099942.90_warc_CC-MAIN-20231128183116-20231128213116-00206.warc.gz\"}"} |
https://cstheory.stackexchange.com/questions/31007/information-complexity-of-query-algorithms | [
"# Information complexity of query algorithms?\n\nInformation complexity has been a very useful tool in communication complexity, mainly used to lower bound the communication complexity of distributed problems.\n\nIs there an analogue of information complexity for query complexity? There are many parallels between query complexity and communication complexity; oftentimes (but not always!) a lower bound in one model gets translated to a lower bound in the other model. Sometimes this translation is quite nontrivial.\n\nIs there a notion of information complexity that is useful for lower bounding the query complexity of problems?\n\nA first pass seems to indicate that information complexity is not very useful; for example, the query complexity of computing the OR of $N$ bits is $\\Omega(N)$ for randomized algorithms and $\\Omega(\\sqrt{N})$ for quantum algorithms, whereas the most straightforward adaption of the notion of information complexity indicates that the information learned by any query algorithm is at most $O(\\log N)$ (because the algorithm stops when it sees the first $1$ in the input)."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9084732,"math_prob":0.9558377,"size":1049,"snap":"2019-35-2019-39","text_gpt3_token_len":195,"char_repetition_ratio":0.2076555,"word_repetition_ratio":0.0,"special_character_ratio":0.1877979,"punctuation_ratio":0.068571426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98669636,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T09:32:41Z\",\"WARC-Record-ID\":\"<urn:uuid:14ebd543-2d52-4959-b98c-bd9dd94c95c7>\",\"Content-Length\":\"136948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01101515-e2d4-4f56-99cb-7bd1743cad26>\",\"WARC-Concurrent-To\":\"<urn:uuid:721cdce1-ff2b-4351-a2c4-e1780f21ee92>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/31007/information-complexity-of-query-algorithms\",\"WARC-Payload-Digest\":\"sha1:VUF2W6RG7RL3OY5YGJNCSE6MALQLCWA6\",\"WARC-Block-Digest\":\"sha1:NMXH3OJYRI5IAAAWBDQDGAN25IHQUUZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573988.33_warc_CC-MAIN-20190920092800-20190920114800-00515.warc.gz\"}"} |
https://gitlab.xiph.org/xiph/aom-rav1e/-/commit/75e513f126adb60061e02856bef8ae3078769d26?view=inline | [
"### Set spatial neighbor search resolution 16x16 for block size 64x64\n\n```When the block has width/height above or equal to 64, use 16x16\nblock search step for reference motion vector search in the non-\nimmediate rows and columns.\n\nChange-Id: If11ce97a9328b879f30ef87115086aa0cd985a2f```\nparent 883c63ca\n ... ... @@ -161,22 +161,26 @@ static uint8_t scan_row_mbmi(const AV1_COMMON *cm, const MACROBLOCKD *xd, for (i = 0; i < xd->n8_w && *refmv_count < MAX_REF_MV_STACK_SIZE;) { POSITION mi_pos; const int use_step_16 = (xd->n8_w >= 8); mi_pos.row = row_offset; mi_pos.col = i; if (is_inside(tile, mi_col, mi_row, &mi_pos)) { const MODE_INFO *const candidate_mi = xd->mi[mi_pos.row * xd->mi_stride + mi_pos.col]; const MB_MODE_INFO *const candidate = &candidate_mi->mbmi; const int len = int len = AOMMIN(xd->n8_w, num_8x8_blocks_wide_lookup[candidate->sb_type]); if (use_step_16) len = AOMMAX(2, len); newmv_count += add_ref_mv_candidate( candidate_mi, candidate, rf, refmv_count, ref_mv_stack, cm->allow_high_precision_mv, len, block, mi_pos.col); i += len; } else { ++i; if (use_step_16) i += 2; else ++i; } } ... ... @@ -193,22 +197,26 @@ static uint8_t scan_col_mbmi(const AV1_COMMON *cm, const MACROBLOCKD *xd, for (i = 0; i < xd->n8_h && *refmv_count < MAX_REF_MV_STACK_SIZE;) { POSITION mi_pos; const int use_step_16 = (xd->n8_h >= 8); mi_pos.row = i; mi_pos.col = col_offset; if (is_inside(tile, mi_col, mi_row, &mi_pos)) { const MODE_INFO *const candidate_mi = xd->mi[mi_pos.row * xd->mi_stride + mi_pos.col]; const MB_MODE_INFO *const candidate = &candidate_mi->mbmi; const int len = int len = AOMMIN(xd->n8_h, num_8x8_blocks_high_lookup[candidate->sb_type]); if (use_step_16) len = AOMMAX(2, len); newmv_count += add_ref_mv_candidate( candidate_mi, candidate, rf, refmv_count, ref_mv_stack, cm->allow_high_precision_mv, len, block, mi_pos.col); i += len; } else { ++i; if (use_step_16) i += 2; else ++i; } } ... ...\nMarkdown is supported\n0% or .\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6635705,"math_prob":0.96737075,"size":350,"snap":"2021-31-2021-39","text_gpt3_token_len":99,"char_repetition_ratio":0.11849711,"word_repetition_ratio":0.0,"special_character_ratio":0.28857142,"punctuation_ratio":0.05357143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973744,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T20:14:29Z\",\"WARC-Record-ID\":\"<urn:uuid:66b35515-5b5b-42a7-a3aa-60f039b85ee7>\",\"Content-Length\":\"187458\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76be79cb-3ec4-49c4-8e45-a2e4122d1199>\",\"WARC-Concurrent-To\":\"<urn:uuid:68ad6ed7-a063-449d-a005-3097966d64e9>\",\"WARC-IP-Address\":\"140.211.166.4\",\"WARC-Target-URI\":\"https://gitlab.xiph.org/xiph/aom-rav1e/-/commit/75e513f126adb60061e02856bef8ae3078769d26?view=inline\",\"WARC-Payload-Digest\":\"sha1:SEMKRCWYHI6I6U7LGHKD2LSOTUSUDJGU\",\"WARC-Block-Digest\":\"sha1:MDZMZSOAVEAPIO7CER75PIM75FQNOORQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150000.59_warc_CC-MAIN-20210723175111-20210723205111-00325.warc.gz\"}"} |
https://pythonexamples.org/pandas-dataframe-pop/ | [
"# How to Delete Column from Pandas DataFrame?\n\n## Pandas DataFrame.pop() – Delete Column\n\nPandas DataFrame.pop() function is used to delete a column from the DataFrame.\n\nIn this tutorial, we shall go through examples to learn how to use pop() to delete a column from Pandas DataFrame.\n\n### Example 1: Delete a column using pandas pop() function\n\nIn this example, we deleted a specific column, using column name, from the DataFrame with pop(). pandas pop() function updates the original dataframe. The data in the deleted column is lost.\n\nPython Program\n\n``````import pandas as pd\n\nmydictionary = {'names': ['Somu', 'Kiku', 'Amol', 'Lini'],\n'physics': [68, 74, 77, 78],\n'chemistry': [84, 56, 73, 69],\n'algebra': [78, 88, 82, 87]}\n\n#create dataframe\ndf_marks = pd.DataFrame(mydictionary)\nprint('Original DataFrame\\n--------------')\nprint(df_marks)\n\n#delete column\ndf_marks.pop('algebra')\nprint('\\n\\nDataFrame after deleting column\\n--------------')\nprint(df_marks)``````\nRun\n\nOutput\n\n### Example 2: Delete a non-existing column using pandas pop() function\n\nIn this example, we will try deleting a column that is not present in the DataFrame.\n\nWhen you try to delete a non-existing column of DataFrame using pop(), the function pop() throws KeyError.\n\nPython Program\n\n``````import pandas as pd\n\nmydictionary = {'names': ['Somu', 'Kiku', 'Amol', 'Lini'],\n'physics': [68, 74, 77, 78],\n'chemistry': [84, 56, 73, 69],\n'algebra': [78, 88, 82, 87]}\n\n#create dataframe\ndf_marks = pd.DataFrame(mydictionary)\nprint('Original DataFrame\\n--------------')\nprint(df_marks)\n\n#delete column that is not present\ndf_marks.pop('geometry')\nprint('\\n\\nDataFrame after deleting column\\n--------------')\nprint(df_marks)``````\nRun\n\nOutput"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6750509,"math_prob":0.8714238,"size":1775,"snap":"2022-40-2023-06","text_gpt3_token_len":444,"char_repetition_ratio":0.18068887,"word_repetition_ratio":0.37083334,"special_character_ratio":0.30704224,"punctuation_ratio":0.18373494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99455875,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T01:15:00Z\",\"WARC-Record-ID\":\"<urn:uuid:37c081d1-a7dc-4b4d-aa62-8203b18924a3>\",\"Content-Length\":\"31393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:135504d9-36d0-450d-8be6-22813b56f1c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:daeb5fe0-aec5-4822-8011-526ab3244d27>\",\"WARC-IP-Address\":\"99.84.208.77\",\"WARC-Target-URI\":\"https://pythonexamples.org/pandas-dataframe-pop/\",\"WARC-Payload-Digest\":\"sha1:A5RWUA6YGJABK3TFOHTZM5Z5H76SATJ4\",\"WARC-Block-Digest\":\"sha1:OTDG5WAKGINFYJWSYUYM54A6ZZ6MU7ZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337446.8_warc_CC-MAIN-20221003231906-20221004021906-00312.warc.gz\"}"} |
https://wikidev.in/wiki/matlab/image_processing/imwrite | [
"You are here : matlabImage Processingimwrite\n\nimwrite() - Image Processing\n\n`imwrite(A,filename) writes image data A to the file specified by filename, inferring the file format from the extension. imwrite creates the new file in your current folder. The bit depth of the output image depends on the data type of A and the file format. For most formats:If A is of data type uint8, then imwrite outputs 8-bit values.If A is of data type uint16 and the output file format supports 16-bit data (JPEG, PNG, and TIFF), then imwrite outputs 16-bit values. If the output file format does not support 16-bit data, then imwrite returns an error.If A is a grayscale or RGB color image of data type double or single, then imwrite assumes that the dynamic range is [0,1] and automatically scales the data by 255 before writing it to the file as 8-bit values. If the data in A is single, convert A to double before writing to a GIF or TIFF file.If A is of data type logical, then imwrite assumes that the data is a binary image and writes it to the file with a bit depth of 1, if the format allows it. BMP, PNG, or TIFF formats accept binary images as input arrays.If A contains indexed image data, you should additionally specify the map input argument.`\n\nSyntax\n\n```imwrite(A,filename)\nimwrite(A,map,filename)\nimwrite(___,fmt)\nimwrite(___,Name,Value)```\n\nExample\n\n```%Write a 50-by-50 array of grayscale values to a PNG file in the current folder.%\nA = rand(50);\nimwrite(A,'myGray.png')```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7873224,"math_prob":0.70853835,"size":1427,"snap":"2019-26-2019-30","text_gpt3_token_len":345,"char_repetition_ratio":0.15741391,"word_repetition_ratio":0.033755273,"special_character_ratio":0.24246672,"punctuation_ratio":0.13099042,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9709759,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T09:28:51Z\",\"WARC-Record-ID\":\"<urn:uuid:f589d112-9d03-4276-b6b1-98d259f33ec0>\",\"Content-Length\":\"8448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64237e59-739c-48bd-9060-859571d826b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0c06976-14a4-4724-9d26-d26309a0631a>\",\"WARC-IP-Address\":\"104.27.133.94\",\"WARC-Target-URI\":\"https://wikidev.in/wiki/matlab/image_processing/imwrite\",\"WARC-Payload-Digest\":\"sha1:Z3WRSNIWHYIT7OVWTAT4J3ZRJZK4ZTVX\",\"WARC-Block-Digest\":\"sha1:EG2ERDX6BO74UMEB4YSPEPAGGFBZBBBK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999817.30_warc_CC-MAIN-20190625092324-20190625114324-00087.warc.gz\"}"} |
https://homework.cpm.org/category/CON_FOUND/textbook/ac/chapter/13/lesson/13.OF1-S/problem/7-134 | [
"",
null,
"",
null,
"### Home > AC > Chapter 13 > Lesson 13.OF1-S > Problem7-134\n\n7-134.\n\nWrite the equation of each circle graphed below.\n\nRecall the general equation of a circle: $\\left(x − h\\right)² + \\left(y − k\\right)² = r²$where $\\left(h, k\\right)$ is the center of the circle and $r$ is the radius.\n\n1.",
null,
"Write in graphing form:\n($(x − 1)^{2} + y^{2} = 9$\n\n1.",
null,
""
]
| [
null,
"https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJGfO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N
7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3IlrebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1eoWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03Snh
seiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbH
zQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==",
null,
"https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/6a1e1e50-259e-11e9-ab6b-d7741872f579/CPM_Algebra2_Chap7_47_original.jpg",
null,
"https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/a75c7ba0-259d-11e9-ab6b-d7741872f579/CPM_Algebra2_Chap7_48_original.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82877976,"math_prob":0.999995,"size":295,"snap":"2022-27-2022-33","text_gpt3_token_len":78,"char_repetition_ratio":0.16494845,"word_repetition_ratio":0.6530612,"special_character_ratio":0.2779661,"punctuation_ratio":0.13559322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999758,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T11:27:14Z\",\"WARC-Record-ID\":\"<urn:uuid:b4225991-0011-4121-9852-10d4d4748188>\",\"Content-Length\":\"35633\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b72866f8-b39d-4b3d-9be4-289c7d9c05ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:e86bf62e-74ba-4ff4-9ba1-b103d4ec6702>\",\"WARC-IP-Address\":\"104.26.7.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CON_FOUND/textbook/ac/chapter/13/lesson/13.OF1-S/problem/7-134\",\"WARC-Payload-Digest\":\"sha1:YUDDSWMN7FBJ377AGNLLXOTSRQVBUV3Z\",\"WARC-Block-Digest\":\"sha1:GWZ5GEWH7XXHBO7ALD3YMLNE3IRKUAV6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103940327.51_warc_CC-MAIN-20220701095156-20220701125156-00101.warc.gz\"}"} |
https://arxiv.org/abs/1608.01512 | [
"math.LO\n\n# Title:Strong failures of higher analogs of Hindman's theorem\n\nAbstract: We show that various analogs of Hindman's Theorem fail in a strong sense when one attempts to obtain uncountable monochromatic sets:\nTheorem 1: There exists a colouring $c:\\mathbb R\\rightarrow\\mathbb Q$, such that for every $X\\subseteq\\mathbb R$ with $|X|=|\\mathbb R|$, and every colour $\\gamma\\in\\mathbb Q$, there are two distinct elements $x_0,x_1$ of $X$ for which $c(x_0+x_1)=\\gamma$. This forms a simultaneous generalization of a theorem of Hindman, Leader and Strauss and a theorem of Galvin and Shelah.\nTheorem 2: For every Abelian group $G$, there exists a colouring $c:G\\rightarrow\\mathbb Q$ such that for every uncountable $X\\subseteq G$, and every colour $\\gamma$, for some large enough integer $n$, there are pairwise distinct elements $x_0,\\ldots,x_n$ of $X$ such that $c(x_0+\\cdots+x_n)=\\gamma$. In addition, it is consistent that the preceding statement remains valid even after enlarging the set of colours from $\\mathbb Q$ to $\\mathbb R$.\nTheorem 3: Let $\\circledast_\\kappa$ assert that for every Abelian group $G$ of cardinality $\\kappa$, there exists a colouring $c:G\\rightarrow G$ such that for every positive integer $n$, every $X_0,\\ldots,X_n \\in[G]^\\kappa$, and every $\\gamma\\in G$, there are $x_0\\in X_0,\\ldots, x_n\\in X_n$ such that $c(x_0+\\cdots+x_n)=\\gamma$. Then $\\circledast_\\kappa$ holds for unboundedly many uncountable cardinals $\\kappa$, and it is consistent that $\\circledast_\\kappa$ holds for all regular uncountable cardinals $\\kappa$.\n Comments: Final accepted version. For several of the earlier results that were stated only for regular cardinals, there is now a treatment of the singular cardinal case. Also, a new partition relation for the Real Line was obtained, see Theorem C3 Subjects: Logic (math.LO); Combinatorics (math.CO) MSC classes: 03E02 (Primary), 03E75, 03E35, 05D10, 05A17, 11P99, 20M14 (Secondary) Journal reference: Transactions of the American Mathematical Society 369 no. 12 (2017), 8939-8966 DOI: 10.1090/tran/7131 Cite as: arXiv:1608.01512 [math.LO] (or arXiv:1608.01512v4 [math.LO] for this version)\n\n## Submission history\n\nFrom: David Fernández Bretón [view email]\n[v1] Thu, 4 Aug 2016 12:35:07 UTC (9 KB)\n[v2] Sun, 28 Aug 2016 17:33:20 UTC (20 KB)\n[v3] Sat, 24 Sep 2016 11:55:16 UTC (24 KB)\n[v4] Tue, 22 Nov 2016 14:59:44 UTC (28 KB)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.78147185,"math_prob":0.9910084,"size":2005,"snap":"2019-13-2019-22","text_gpt3_token_len":623,"char_repetition_ratio":0.11794103,"word_repetition_ratio":0.0,"special_character_ratio":0.30174562,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991167,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T15:01:34Z\",\"WARC-Record-ID\":\"<urn:uuid:c4cbc8d9-5aee-4c80-b6c2-e78b545d29e3>\",\"Content-Length\":\"19579\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b782ca6b-f4ea-4a50-b4f4-84803acfc5c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9871170-8337-48a1-a3e0-400aca9b983e>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1608.01512\",\"WARC-Payload-Digest\":\"sha1:HQ7PNZXB44UDQCIR6ACW7FL6PI2U7XAH\",\"WARC-Block-Digest\":\"sha1:WCO57N6BSHG34NSYUXU7D2MKJUNWCZ7D\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201996.61_warc_CC-MAIN-20190319143502-20190319165502-00262.warc.gz\"}"} |
https://www.geeksforgeeks.org/tag/two-pointer-algorithm/page/7/ | [
"Skip to content\n\n# Tag Archives: two-pointer-algorithm\n\nGiven an array of distinct integers and a sum value. Print all triplets with sum smaller than given sum value. Expected Time Complexity is O(n2).… Read More\nGiven an array of N numbers where a subarray is sorted in descending order and rest of the numbers in the array are in ascending… Read More\nGiven two sorted arrays of distinct elements, we need to print those elements from both arrays that are not common. The output should be printed… Read More\nGiven an array of integers, you have to find three numbers such that the sum of two elements equals the third element.Examples: Input : {5,… Read More\nWrite a program to reverse the given string while preserving the position of spaces. Examples: Input : \"abc de\" Output : edc ba Input :… Read More\nGiven an array of distinct elements. The task is to find triplets in the array whose sum is zero. Examples : Input : arr[] =… Read More\nGiven two sorted arrays and a number x, find the pair whose sum is closest to x and the pair has an element from each… Read More\nGiven a sorted array and a number x, find a pair in array whose sum is closest to x.Examples: Input: arr[] = {10, 22, 28,… Read More\nGiven an array of integers, find all combination of four elements in the array whose sum is equal to a given value X. For example, if… Read More\nGiven an array and a value, find if there is a triplet in array whose sum is equal to the given value. If there is… Read More"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.77639365,"math_prob":0.98701304,"size":1315,"snap":"2021-21-2021-25","text_gpt3_token_len":308,"char_repetition_ratio":0.17009915,"word_repetition_ratio":0.10288066,"special_character_ratio":0.23574145,"punctuation_ratio":0.11913358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99444216,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T23:01:29Z\",\"WARC-Record-ID\":\"<urn:uuid:83bcb759-0365-40b7-9742-4203bd024a29>\",\"Content-Length\":\"93332\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6435a63e-f5a5-4d97-abc2-d7c1027acbb1>\",\"WARC-Concurrent-To\":\"<urn:uuid:55baa73a-d8ac-454d-91fb-79e5e6afe0ea>\",\"WARC-IP-Address\":\"23.12.145.61\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/tag/two-pointer-algorithm/page/7/\",\"WARC-Payload-Digest\":\"sha1:OCAEZIHFRA57Y3M7PU74GYBJK5PLOHS5\",\"WARC-Block-Digest\":\"sha1:P3RO4H7UGHMZYOTER5QFCE7TNLPFN4V5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488559139.95_warc_CC-MAIN-20210624202437-20210624232437-00423.warc.gz\"}"} |
https://www.meritnation.com/ask-answer/question/how-to-calculate-the-number-of-binary-operations-on-any-set/relations-and-functions/2466697 | [
"how to calculate the number of binary operations on any set A , say of 4 elements?\n\nLet S be a finite set having n elements.\n\nThen S × S has n2 elements.\n\nSince a binary operation on S is a function from S × S to S.\n\n∴ Total number of binary operations on S is equal to the number of functions from S × S to S.\n\nAlso, cardinality of domain (i.e. S × S) is n2 and cardinality of co-domain (i.e. S) is n.\n\nSo, total number of functions from S × S to S are",
null,
"∴ Total number of binary operations on set S having n elements is",
null,
".\n\nNow, you are saying that on any set A (i.e. S is given to be A here) having 4 elements (i.e. value of n is 4)\n\nUsing the formula derived above, The number of binary operations on\n\nSet A =",
null,
"=",
null,
"• 41\nWhat are you looking for?"
]
| [
null,
"https://s3mn.mnimgs.com/img/shared/discuss_editlive/3021597/2012_06_13_15_47_51/mathmlequation8402978672626898654.png",
null,
"https://s3mn.mnimgs.com/img/shared/discuss_editlive/3021597/2012_06_13_15_47_51/mathmlequation8402978672626898654.png",
null,
"https://s3mn.mnimgs.com/img/shared/discuss_editlive/3021597/2012_06_13_15_47_51/mathmlequation2349482958611467009.png",
null,
"https://s3mn.mnimgs.com/img/shared/discuss_editlive/3021597/2012_06_13_15_47_51/mathmlequation3249821803862957466.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9355226,"math_prob":0.99809337,"size":703,"snap":"2019-43-2019-47","text_gpt3_token_len":192,"char_repetition_ratio":0.15593705,"word_repetition_ratio":0.12666667,"special_character_ratio":0.27596018,"punctuation_ratio":0.11904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994592,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T13:51:22Z\",\"WARC-Record-ID\":\"<urn:uuid:1b1ed2ee-0419-4ed9-be54-c08d4f23b8e9>\",\"Content-Length\":\"29457\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ed3c024-7e3f-4f53-92e2-7331a591b9ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6ad7d69-686c-481b-aefe-1240d3b3b775>\",\"WARC-IP-Address\":\"165.254.45.201\",\"WARC-Target-URI\":\"https://www.meritnation.com/ask-answer/question/how-to-calculate-the-number-of-binary-operations-on-any-set/relations-and-functions/2466697\",\"WARC-Payload-Digest\":\"sha1:QVCRRIJCFP2PLH5ONNUTGFK5ZLEZH63D\",\"WARC-Block-Digest\":\"sha1:GIQRKZEYKEPGNBBLWW3S2BKHI4R7YJI2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986693979.65_warc_CC-MAIN-20191019114429-20191019141929-00002.warc.gz\"}"} |
https://mathoverflow.net/questions/198039/is-there-a-formula-for-the-total-chern-class-of-the-tangent-space-of-a-projectiv | [
"# Is there a formula for the total Chern Class of the tangent space of a projectivized vector bundle?\n\nLet $V\\rightarrow M$ be a complex vector bundle (of rank $k$) over a complex manifold $M$ (you can assume $M$ is compact if that helps, but it may not be relevant to my question). Let $\\pi:\\mathbb{P}V \\rightarrow M$ be the projectivization of $V$.\n\n$\\textbf{Question}:$ Is there a formula for $c(T\\mathbb{P}V)$, the total Chern class of the Tangent space of $\\mathbb{P}V$?\n\nMy naive guess would be that it should be $\\pi^*(c(TM))(1+c_1(\\gamma^*))^{k+1}$, where $\\gamma \\rightarrow \\mathbb{P}V$ is the tautological line bundle over $\\mathbb{P}V$. I think my guess is correct if $M$ was just a point, or more generally if $V$ was a trivial bundle. But I do not know if this is correct in general.\n\nThe specific case for which I need an answer is when $M:= \\mathbb{P}^1 \\times \\mathbb{P}^1$ and $V:= \\mathcal{O}(d_1) \\oplus \\mathcal{O}(d_2)$.\n\n$\\textbf{Added Later}:$ It has been pointed out my guess is wrong in general. The correct answer is $$\\pi^*(c(TM))c(\\pi^*V \\otimes \\gamma^*).$$\n\nNo, your formula is not correct. You have to take into account the Chern classes of $V$. The relative tangent bundle $T_{\\mathbb{P}V/M}$ is given by the so-called Euler exact sequence $$0\\rightarrow \\mathscr{O}_{\\mathbb{P}V}\\rightarrow \\pi ^*V\\otimes \\gamma^* \\rightarrow T_{\\mathbb{P}V/M}\\rightarrow 0\\ ,$$ while $$0\\rightarrow T_{\\mathbb{P}V/M}\\rightarrow T_{\\mathbb{P}V}\\rightarrow \\pi ^*T_M\\rightarrow 0\\ .$$Putting things together we find $c(T_{\\mathbb{P}V})=c(\\pi ^*V\\otimes \\gamma^* )\\,\\pi ^*c(T_M)$.\n\nThen use the standard formula for $c(\\pi ^*V\\otimes \\gamma^* )$.\n\nFor any smooth fiber bundle\n\n$$F\\hookrightarrow P \\stackrel{\\pi}{\\to} M$$\n\nwe have a short exact sequence of vector bundles over $P$\n\n$$0\\to VTP\\to TP \\to \\pi^* TM\\to 0,$$\n\nwhere $VTP$ denotes the vertical tangent bundle defined as the kernel of the differential of $\\pi$. If the bundle is holomorphic then the above is a short exact sequence of complex vector bundles and we deduce\n\n$$c(TP)= c(VTP)\\cdot \\pi^* c(TM).$$\n\nThe classical Euler exact sequence argument shows that when $P=\\mathbb{P}(V)$ that $\\newcommand{\\bC}{\\mathbb{C}}$\n\n$$\\gamma^*\\otimes \\pi^*V \\cong \\underline{\\bC}\\oplus VTP,$$\n\nwhere $\\underline{\\bC}$ denotes the trivial line bundle. Hence\n\n$$c(TP)= c(\\gamma^*\\otimes \\pi^*V)\\cdot \\pi^* c(TP).$$\n\nIn Section I.3 of Fulton-Lang Riemann-Roch algebra you can find an explicit formula for $c_k(L\\otimes E)$, $L$ line bundle and $E$ vector bundle of rank $m$. More precisely\n\n$$c_k(L\\otimes E)=\\sum_{j=1}^k \\binom{m-j}{k-j} c_j(E)c_1(L)^{k-j}.$$ Note. The original answer had an error that I have now corrected. (Hat tip to abx)."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82665956,"math_prob":0.9998031,"size":981,"snap":"2020-24-2020-29","text_gpt3_token_len":322,"char_repetition_ratio":0.114636645,"word_repetition_ratio":0.0,"special_character_ratio":0.31396535,"punctuation_ratio":0.08695652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000083,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T13:24:10Z\",\"WARC-Record-ID\":\"<urn:uuid:7bdae083-d717-4393-b674-4ffc98cfbdfa>\",\"Content-Length\":\"131670\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68410815-6946-4e55-b648-881f9f6b9abe>\",\"WARC-Concurrent-To\":\"<urn:uuid:046967c8-9303-45ef-bd2a-e0c52f7fe777>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/198039/is-there-a-formula-for-the-total-chern-class-of-the-tangent-space-of-a-projectiv\",\"WARC-Payload-Digest\":\"sha1:YKLV3XZNRNWQPZQJG53PB6AD6GSGIGHU\",\"WARC-Block-Digest\":\"sha1:GAZEMCNH4KVGSCX3VQIRJ343OPPVDQ24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655878753.12_warc_CC-MAIN-20200702111512-20200702141512-00464.warc.gz\"}"} |
https://www.vedantu.com/maths/tan-theta-formula | [
"# Tan Theta Formula\n\n## Tan Theta\n\nThe tangent is defined in right triangle trigonometry as the ratio of the opposite side to the adjacent side (It is applicable for acute angles only because it's only defined this way for right triangles). To find values of the tangent function at different angles when evaluating the tangent function, we first define the reference angle created by the terminal side and the x-axis. Then we calculate the tangent of this reference angle and determine whether it is positive or negative based on which quadrant the terminal side is in. In the first and third quadrants, the tangent is positive. In the second and fourth quadrants, the tangent is negative. The slope of the terminal side is also equal to the tangent.\n\nLet us discuss an introduction to Trigonometry in detail before looking at the formula. Trigonometry is a branch of mathematics concerned with the application of specific functions of angles to calculations. In trigonometry, there are six functions of an angle that are widely used. Sine (sin), cosine (cos), tangent (tan), cotangent (cot), secant (sec), and cosecant (csc) are their names and abbreviations. In relation to a right triangle, these six trigonometric functions. The sine of A, or sin A, is defined as the ratio of the side opposite to A and the side opposite to the right angle (the hypotenuse) in a triangle. The other trigonometric functions are defined similarly. These functions are properties of the angle that are independent of the triangle's size, and measured values for several angles were tabulated before computers made trigonometry tables outdated. In geometric figures, trigonometric functions are used to calculate unknown angles and distances from known or measured angles. Trigonometry has a wide range of applications, from specific fields such as oceanography, where it is used to measure the height of tides in oceans, to the backyard of our home, where it can be used to roof a building, make the roof inclined in the case of single independent bungalows, and calculate the height of the roof etc. Here, we will discuss the tan theta formula in detail.\n\n### How to Find the Tangent?\n\nYou must first locate the hypotenuse to find the tangent. The hypotenuse is typically the right triangle's longest side. The next task is to decide the angle. There are only two angles to choose from. You cannot choose the right angle. After you've chosen an angle, you will mark the sides. The side opposite to this angle will be the opposite side and the side next to the angle is the adjacent side. After labelling the sides, you can take the required ratio. Let’s discuss ratios, what is tan theta and it’s practical applications?\n\n### What is Tan Theta?\n\nThe length of the opposite side to the length of the adjacent side of a right-angled triangle is known as the tangent function or tangent ratio of the angle between the hypotenuse and the base.\n\nAs discussed, the tangent function is one of the three most common trigonometric functions, along with sine and cosine. The tangent of an angle in a right triangle is equal to the length of the opposite side (O) divided by the length of the adjacent side (A). It is written simply as 'tan' in a formula.\n\n⇒ tan x = O/A\n\ntan(x) is the symbol for the tangent function which is also called the tan x formula. It is one of the six trigonometric functions that are commonly used. Sine and cosine are most often associated with the tangent. 
In trigonometry, the tangent function is a periodic function that is very useful.\n\nThe tan formula is as follows:\n\nWhat is tan theta in terms of sine and cos?\n\n⇒ tan x = sin x/cos x\n\nor, tan theta = sin theta/cos theta (here, theta is an angle)\n\nThe sine of an angle is the length of the opposite side divided by the length of the hypotenuse, while the cosine of an angle is the ratio of the adjacent side to the hypotenuse.\n\nHence, sin x = Opposite Side/Hypotenuse Side\n\ncos x = Adjacent Side/Hypotenuse Side\n\nTherefore, (tan formula) tan x = Opposite Side/Adjacent Side\n\n### Finding the Tangent of the Triangle\n\nAngles A and B are the two angles we will deal with in this triangle. To find the tangent, we must first identify the hypotenuse. The right angle is easy to spot, and the hypotenuse is the side directly across from it, so the side that measures 5 is the hypotenuse. Now that we have the hypotenuse, let's choose an angle to work with; we'll choose angle B. With B as our angle, the opposite side is the side that measures 3, and the adjacent side is the one that measures 4, because it is the only side next to angle B that is not the hypotenuse.\n\nThis means that the tangent of angle B is the ratio of the opposite side over the adjacent side, which we can write as 3/4, or 0.75. Similarly, if we choose angle A, the two sides swap roles and the tangent is 4/3, or about 1.33.\n\n### Tangent to Find the Missing Side\n\nIn some problems, we have to find a missing side of the triangle. This has real-life applications, for example when construction companies are building on hills. In these problems, an angle is given to us. To solve them, we first locate the missing side; in the triangle given above, the missing side is the one adjacent to the given angle. We then write an equation from the definition of the tangent and use algebra to solve for the missing variable.\n\nBy multiplying both sides by x and then dividing both sides by tan 66, we isolate the variable, and using a calculator to evaluate tan 66 gives the answer 2.22.\n\n### Arctan - Inverse Tangent Function\n\nEvery trigonometric function has an inverse; for example, tan has arctan, which works in the reverse direction. These inverse functions have the same name as the originals, but with the word 'arc' added to the start. So arctan is the inverse of tan: if we know the tangent of an angle and want to know the angle itself, we use the inverse function.\n\n### Large & Negative Angles\n\nThe two variable angles in a right triangle are always less than 90 degrees. However, we can find the tangent of any angle, no matter how large, as well as the tangent of negative angles. We can also graph the tangent function.\n\n### Using Tangents to Calculate the Height of a Building or a Mountain\n\nYou can easily find the height of a building if you know the distance from which you observe it and the angle of elevation. Similarly, you can find another side of the triangle if you know the value of one side and the angle of depression from the top of the house. All you need to know is one side and one angle of the triangle.\n\n### Conclusion\n\nIn fields like astronomy, mapmaking, surveying, and artillery range finding, trigonometry emerged from the need to compute angles and distances. Plane trigonometry deals with problems involving angles and distances in a single plane. Spherical trigonometry considers related problems in more than one plane of three-dimensional space."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92787665,"math_prob":0.9794985,"size":8762,"snap":"2021-43-2021-49","text_gpt3_token_len":1943,"char_repetition_ratio":0.17743777,"word_repetition_ratio":0.04737184,"special_character_ratio":0.20851403,"punctuation_ratio":0.10438293,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99912256,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T22:56:10Z\",\"WARC-Record-ID\":\"<urn:uuid:4fa50ff2-cbb9-441c-b7c1-36f865034d47>\",\"Content-Length\":\"90869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab1d3d8d-bc40-48ee-b33e-02814786ddc7>\",\"WARC-Concurrent-To\":\"<urn:uuid:58b54166-d5b2-4be1-85a2-38b445274037>\",\"WARC-IP-Address\":\"13.32.208.96\",\"WARC-Target-URI\":\"https://www.vedantu.com/maths/tan-theta-formula\",\"WARC-Payload-Digest\":\"sha1:Y44DAWEJMSIPQVODFO4QEAMOUKGILPON\",\"WARC-Block-Digest\":\"sha1:S2IOM7QP5MF3JAR4YWELISJHXUAWJ23H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585215.14_warc_CC-MAIN-20211018221501-20211019011501-00087.warc.gz\"}"} |
https://code.orgmode.org/bzg/org-mode/commit/8ecc966292f322ec6d0d0fb29e1087a55d22975f | [
"### Implement \"delay\" cookies for scheduled items.\n\n```* org-agenda.el (org-agenda-skip-scheduled-delay-if-deadline):\nNew option. The structure of the possible values is copied\n(org-agenda-get-scheduled): Honor the two new option,\n`org-scheduled-delay-days' and\nscheduled entry has a delay cookie like \"-2d\" (similar to the\nneeded.\n\n* org.el (org-deadline-warning-days): Small docstring fix.\n(org-scheduled-delay-days): New option (see\n(org-get-wdays): Use the new option.\n\nThanks to Andrew M. Nuxoll and Michael Brand for this idea.\n\nYou can now use a \"delay cookie\" in scheduled items. For example,\n\n* TODO Sleep\nSCHEDULED: <2013-02-06 mer. -3d>\n\nwill not be shown on 06/02 but on 09/02, three days later.\n\nThe value of the cookie overrides any value of `org-scheduled-delay-days',\nunless `org-scheduled-delay-days' is negative (same logic than for\n\nAlso check org-agenda-skip-scheduled-delay-if-deadline, which does for",
null,
"Bastien Guerry 6 years ago\nparent\ncommit\n8ecc966292\n2 changed files with 89 additions and 24 deletions\n1. 51 8\nlisp/org-agenda.el\n2. 38 16\nlisp/org.el\n\n#### + 51 - 8 lisp/org-agenda.el View File\n\n ``@@ -843,6 +843,21 @@ because you will take care of it on the day when scheduled.\"`` `` (const :tag \"Remove prewarning if entry is scheduled\" t)`` `` (integer :tag \"Restart prewarning N days before deadline\")))`` `` `` ``+(defcustom org-agenda-skip-scheduled-delay-if-deadline nil`` ``+ \"Non-nil means skip scheduled delay when entry also has a deadline.`` ``+This variable may be set to nil, t, the symbol `post-deadline',`` ``+or a number which will then give the number of days after the actual`` ``+scheduled date when the delay should expire. The symbol `post-deadline'`` ``+eliminates the schedule delay when the date is posterior to the deadline.\"`` ``+ :group 'org-agenda-skip`` ``+ :group 'org-agenda-daily/weekly`` ``+ :version \"24.3\"`` ``+ :type '(choice`` ``+ (const :tag \"Always honor delay\" nil)`` ``+ (const :tag \"Ignore delay if posterior to the deadline\" post-deadline)`` ``+ (const :tag \"Ignore delay if entry has a deadline\" t)`` ``+ (integer :tag \"Honor delay up until N days after the scheduled date\")))`` ``+`` `` (defcustom org-agenda-skip-additional-timestamps-same-entry nil`` `` \"When nil, multiple same-day timestamps in entry make multiple agenda lines.`` `` When non-nil, after the search for timestamps has matched once in an`` ``@@ -5331,7 +5346,13 @@ the documentation of `org-diary'.\"`` `` (setq results (append results rtn))))))))`` `` results))))`` `` `` ``+(defsubst org-em (x y list)`` ``+ \"Is X or Y a member of LIST?\"`` ``+ (or (memq x list) (memq y list)))`` ``+`` `` (defvar org-heading-keyword-regexp-format) ; defined in org.el`` ``+(defvar org-agenda-sorting-strategy-selected nil)`` ``+`` `` (defun org-agenda-get-todos ()`` `` \"Return the TODO information for agenda display.\"`` `` (let* ((props (list 'face nil`` ``@@ -6143,7 +6164,8 @@ FRACTION is what fraction of the head-warning time has passed.\"`` `` deadline-results))`` `` d2 diff pos pos1 category category-pos level tags donep`` `` ee txt head pastschedp todo-state face timestr s habitp show-all`` ``- did-habit-check-p warntime inherited-tags ts-date)`` ``+ did-habit-check-p warntime inherited-tags ts-date suppress-delay`` ``+ ddays)`` `` (goto-char (point-min))`` `` (while (re-search-forward regexp nil t)`` `` (catch :skip`` ``@@ -6162,12 +6184,38 @@ FRACTION is what fraction of the head-warning time has passed.\"`` `` warntime (get-text-property (point) 'org-appt-warntime))`` `` (setq pastschedp (and todayp (< diff 0)))`` `` (setq did-habit-check-p nil)`` ``+ (setq suppress-delay`` ``+ (let ((ds (and org-agenda-skip-scheduled-delay-if-deadline`` ``+ (let ((item (buffer-substring (point-at-bol) (point-at-eol))))`` ``+ (save-match-data`` ``+ (and (string-match`` ``+ org-deadline-time-regexp item)`` ``+ (match-string 1 item)))))))`` ``+ (cond`` ``+ ((not ds) nil)`` ``+ ;; The current item has a deadline date (in ds), so`` ``+ ;; evaluate its delay time.`` ``+ ((integerp org-agenda-skip-scheduled-delay-if-deadline)`` ``+ ;; Use global delay time.`` ``+ (- org-agenda-skip-scheduled-delay-if-deadline))`` ``+ ((eq org-agenda-skip-scheduled-delay-if-deadline`` ``+ 'post-deadline)`` ``+ ;; Set delay to no later than deadline.`` ``+ (min (- d2 (org-time-string-to-absolute`` ``+ ds d1 'past show-all (current-buffer) pos))`` ``+ org-scheduled-delay-days))`` ``+ (t 0))))`` ``+ (setq ddays (if 
suppress-delay`` ``+ (let ((org-scheduled-delay-days suppress-delay))`` ``+ (org-get-wdays s t t))`` ``+ (org-get-wdays s t)))`` `` ;; When to show a scheduled item in the calendar:`` `` ;; If it is on or past the date.`` ``- (when (or (and (< diff 0)`` ``+ (when (or (and (> ddays 0) (= diff (- ddays)))`` ``+ (and (zerop ddays) (= diff 0))`` ``+ (and (< diff 0)`` `` (< (abs diff) org-scheduled-past-days)`` `` (and todayp (not org-agenda-only-exact-dates)))`` ``- (= diff 0)`` `` ;; org-is-habit-p uses org-entry-get, which is expansive`` `` ;; so we go extra mile to only call it once`` `` (and todayp`` ``@@ -6578,7 +6626,6 @@ The modified list may contain inherited tags, and tags matched by`` `` s))`` `` `` `` (defvar org-agenda-sorting-strategy) ;; because the def is in a let form`` ``-(defvar org-agenda-sorting-strategy-selected nil)`` `` `` `` (defun org-agenda-add-time-grid-maybe (list ndays todayp)`` `` \"Add a time-grid for agenda items which need it.`` ``@@ -6893,10 +6940,6 @@ without respect of their type.\"`` `` (cond ((and ha (not hb)) -1)`` `` ((and (not ha) hb) +1))))`` `` `` ``-(defsubst org-em (x y list)`` ``- \"Is X or Y a member of LIST?\"`` ``- (or (memq x list) (memq y list)))`` ``-`` `` (defun org-entries-lessp (a b)`` `` \"Predicate for sorting agenda entries.\"`` `` ;; The following variables will be used when the form is evaluated.``\n\n#### + 38 - 16 lisp/org.el View File\n\n ``@@ -2864,7 +2864,7 @@ is used.\"`` `` (string :tag \"Format string\")))))`` `` `` `` (defcustom org-deadline-warning-days 14`` ``- \"No. of days before expiration during which a deadline becomes active.`` ``+ \"Number of days before expiration during which a deadline becomes active.`` `` This variable governs the display in sparse trees and in the agenda.`` `` When 0 or negative, it means use this number (the absolute value of it)`` `` even if a deadline has a different individual lead time specified.`` ``@@ -2874,6 +2874,20 @@ Custom commands can set this variable in the options section.\"`` `` :group 'org-agenda-daily/weekly`` `` :type 'integer)`` `` `` ``+(defcustom org-scheduled-delay-days 0`` ``+ \"Number of days before a scheduled item becomes active.`` ``+This variable governs the display in sparse trees and in the agenda.`` ``+The default value (i.e. 0) means: don't delay scheduled item.`` ``+When negative, it means use this number (the absolute value of it)`` ``+even if a scheduled item has a different individual delay time`` ``+specified.`` ``+`` ``+Custom commands can set this variable in the options section.\"`` ``+ :group 'org-time`` ``+ :group 'org-agenda-daily/weekly`` ``+ :version \"24.3\"`` ``+ :type 'integer)`` ``+`` `` (defcustom org-read-date-prefer-future t`` `` \"Non-nil means assume future for incomplete date input from user.`` `` This affects the following situations:`` ``@@ -16216,21 +16230,29 @@ If SECONDS is non-nil, return the difference in seconds.\"`` `` (and (< (org-time-stamp-to-now timestamp-string) ndays)`` `` (not (org-entry-is-done-p))))`` `` `` ``-(defun org-get-wdays (ts)`` ``- \"Get the deadline lead time appropriate for timestring TS.\"`` ``- (cond`` ``- ((<= org-deadline-warning-days 0)`` ``- ;; 0 or negative, enforce this value no matter what`` ``- (- org-deadline-warning-days))`` ``- ((string-match \"-\\\\([0-9]+\\\\)\\\\([hdwmy]\\\\)\\\\(\\\\'\\\\|>\\\\| \\\\)\" ts)`` ``- ;; lead time is specified.`` ``- (floor (* (string-to-number (match-string 1 ts))`` ``- (cdr (assoc (match-string 2 ts)`` ``- '((\"d\" . 1) (\"w\" . 7)`` ``- (\"m\" . 30.4) (\"y\" . 
365.25)`` ``- (\"h\" . 0.041667)))))))`` ``- ;; go for the default.`` ``- (t org-deadline-warning-days)))`` ``+(defun org-get-wdays (ts &optional delay zero-delay)`` ``+ \"Get the deadline lead time appropriate for timestring TS.`` ``+When DELAY is non-nil, get the delay time for scheduled items`` ``+instead of the deadline lead time. When ZERO-DELAY is non-nil`` ``+and `org-scheduled-delay-days' is 0, enforce 0 as the delay,`` ``+don't try to find the delay cookie in the scheduled timestamp.\"`` ``+ (let ((tv (if delay org-scheduled-delay-days`` ``+ org-deadline-warning-days)))`` ``+ (cond`` ``+ ((or (and delay (< tv 0))`` ``+ (and delay zero-delay (<= tv 0))`` ``+ (and (not delay) (<= tv 0)))`` ``+ ;; Enforce this value no matter what`` ``+ (- tv))`` ``+ ((string-match \"-\\\\([0-9]+\\\\)\\\\([hdwmy]\\\\)\\\\(\\\\'\\\\|>\\\\| \\\\)\" ts)`` ``+ ;; lead time is specified.`` ``+ (floor (* (string-to-number (match-string 1 ts))`` ``+ (cdr (assoc (match-string 2 ts)`` ``+ '((\"d\" . 1) (\"w\" . 7)`` ``+ (\"m\" . 30.4) (\"y\" . 365.25)`` ``+ (\"h\" . 0.041667)))))))`` ``+ ;; go for the default.`` ``+ (t tv))))`` `` `` `` (defun org-calendar-select-mouse (ev)`` `` \"Return to `org-read-date' with the date currently selected.``"
]
| [
null,
"https://secure.gravatar.com/avatar/beca86d6416e66d0b9d5b25699995b98",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6265436,"math_prob":0.81846863,"size":4838,"snap":"2019-51-2020-05","text_gpt3_token_len":1510,"char_repetition_ratio":0.21617708,"word_repetition_ratio":0.13440861,"special_character_ratio":0.3978917,"punctuation_ratio":0.095607236,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9781571,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T10:23:56Z\",\"WARC-Record-ID\":\"<urn:uuid:d92356d2-8c10-4584-9470-d4de06a88232>\",\"Content-Length\":\"83575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28fe6f0d-555b-410b-a084-aa15002456a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1f904c9-2e1d-4fc4-8ca7-e2da37815dc0>\",\"WARC-IP-Address\":\"45.77.206.30\",\"WARC-Target-URI\":\"https://code.orgmode.org/bzg/org-mode/commit/8ecc966292f322ec6d0d0fb29e1087a55d22975f\",\"WARC-Payload-Digest\":\"sha1:YFJNIEO6TALC37FYAMS7BAE6HOVZGPFF\",\"WARC-Block-Digest\":\"sha1:Q5DBPXVUH7OIFW4STREX2KU2UHLQ6WGQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540497022.38_warc_CC-MAIN-20191207082632-20191207110632-00326.warc.gz\"}"} |
https://physics.stackexchange.com/questions/78121/textbook-problem-free-energy-expression | [
"Textbook Problem: Free energy expression\n\nIn Klotz, Introduction to Chemical Thermodynamics, Ex. 8.2 requires me to derive $$dG = V \\left( \\frac{\\partial p}{\\partial V} \\right)_V dV + \\left[ V \\left( \\frac{\\partial p}{\\partial T} \\right)_V - S \\right]dT,$$ where $G$ is the Gibbs free energy, $S$ is the entropy and all the other variables have their ordinary common-sense meaning.\n\nI'd appreciate feedback on my solution, which I find just very simple, maybe too simple to be right.\n\nFirst of all, when recognizing that $G$ is a function of $V$ and $T$, we can write the total differential as\n\n$$dG = \\left( \\frac{\\partial G}{\\partial V} \\right)_T dV + \\left( \\frac{\\partial G}{\\partial T} \\right)_V dT$$\n\nIn Eq. 8.19, we learn that\n\n$$dG = V dp - S dT$$\n\nand so, when multiplying both sides by $1/\\partial V$ at constant temperature, we find\n\n$$\\left( \\frac{\\partial G}{\\partial V} \\right)_T = V \\left( \\frac{\\partial p}{\\partial V} \\right)_T - S \\left( \\frac{\\partial T}{\\partial V} \\right)_T$$\n\nwhere $(\\partial T /\\partial V)_T$ is zero, because we're at constant $T$. A similar procedure yields the bracket expression, this time instead we just multiply both sides with $1/\\partial T$ and using constant $V$, we arrive at\n\n$$\\left( \\frac{\\partial G}{\\partial T} \\right)_V = V \\left( \\frac{\\partial p}{\\partial T} \\right)_V - S \\left( \\frac{\\partial T}{\\partial T} \\right)_V = V \\left( \\frac{\\partial p}{\\partial T} \\right)_V - S$$\n\nand from comparison with the total differential expression, one immediately recognizes the above very first expression.\n\nIs this a valid solution/Ansatz?\n\n• Comment 1: Thanks for the edit. Comment 2: Can we add equation numbers? Sep 22 '13 at 18:24\n\nSeems OK to me. Although I would just write $dp$ in $dG=Vdp−SdT$ in terms of $dV$ and $dT$.\nEDIT (09/22/2013): In more detail: substitute $dp=(\\frac{\\partial p}{\\partial V})_T dV+(\\frac{\\partial p}{\\partial T})_V dT$ into $dG=Vdp-SdT$.\n• @TMOTTM: We start with what you have to proof an rewrite it once, just by setting the brackets differently: $dG = V \\left( \\frac{\\partial p}{\\partial V} \\right)_V dV + \\left[ V \\left( \\frac{\\partial p}{\\partial T} \\right)_V - S \\right]dT = V\\ \\left[ \\left( \\frac{\\partial p}{\\partial V} \\right)_V dV + \\left( \\frac{\\partial p}{\\partial T} \\right)_V \\ dT \\right] - S \\ dT$. Username akhmeteli points out that the expression in the bracket is $dp$, and then you're done. Oct 22 '13 at 20:48"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7782423,"math_prob":0.9999417,"size":1548,"snap":"2022-05-2022-21","text_gpt3_token_len":464,"char_repetition_ratio":0.2623057,"word_repetition_ratio":0.114285715,"special_character_ratio":0.3113695,"punctuation_ratio":0.082474224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000058,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T09:16:34Z\",\"WARC-Record-ID\":\"<urn:uuid:01047681-60cd-4013-a642-b318bbb2a6f9>\",\"Content-Length\":\"137139\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b944760-f07f-482d-a0f7-2f35a551e532>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a2d0bdb-c5d2-4abc-a4dd-177336402d1e>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/78121/textbook-problem-free-energy-expression\",\"WARC-Payload-Digest\":\"sha1:P3UY6N5AXS6H5Y5NPBDVOTNRQWOINRZB\",\"WARC-Block-Digest\":\"sha1:3TGLVDKJ4UV2PEKURIVQF7ZKXBHDVYRI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305242.48_warc_CC-MAIN-20220127072916-20220127102916-00487.warc.gz\"}"} |
https://www.hpmuseum.org/forum/thread-4503-post-40483.html | [
"HP-41 Programs\n08-09-2015, 07:16 PM\nPost: #1\n Tim Walker",
null,
"Junior Member Posts: 2 Joined: Aug 2015\nHP-41 Programs\nHi: I' am looking for a program to calculate the volume of laminar flow of water in a pipe using Manning's formula. Inputs would be: Slope of the pipe, diameter of the pipe, Manning's roughness factor, and measured depth of flow. Barcode of the program would be great, or mag cards (I could provide the cards if I can find some), or even the program listing if that is all you have. Email me direct at [email protected] Thanks, Tim\n08-09-2015, 07:44 PM\nPost: #2\n Thomas Klemm",
null,
"Senior Member Posts: 1,804 Joined: Dec 2013\nRE: HP-41 Programs\nBut as far as I remember the slope of the pipe wasn't involved. Maybe it's still useful for you.\nSome of the attachments in my posts contain the barcode of the programs. Thus it should be easy for you to load them into your calculator.\n\nKind regards\nThomas\n08-11-2015, 07:32 PM\nPost: #3",
null,
"SlideRule",
null,
"Senior Member Posts: 1,325 Joined: Dec 2013\nRE: HP-41 Programs\nCircular X-Section, Partially Full, Gravity Flow, SOLVER approach\n\nProblem\nA 4-foot-diameter finished concrete pipe is laid at a slope of 0.2%. The water depth is 3 feet. Calculate the flow rate and flow velocity using Manning's equation.\n\nSolution\n\nStep 1. Calculate the flow rate (ft3/sec).\n- Q for Partially Full Circular X-Section\nQ=C÷N((0.5×SIN(ACOS((H−D÷2)÷(D÷2))×2)+π(360−ACOS((H−D÷2)÷(D÷2))×2)÷360)\n×(D÷2)^2)^(5÷3)÷(π×D(360−ACOS((H−D÷2)÷(D÷2))×2)÷360)^(2÷3)×S^0.5\n\nWhere:\nQ = flow rate (ft3/sec, m3/s)\nC = 1.49 for English units, 1 for Metric units\nN = Manning roughness coefficient\nH = depth of water (ft, m)\nD = diameter of pipe (ft, m)\nS = slope of energy line (decimal)\n\nStep 2. Compute the velocity of flow.\n- V for Partially Full Circular X-Section\nV=C÷N((0.5×SIN(ACOS((H−D÷2)÷(D÷2))×2)+π(360−ACOS((H−D÷2)÷(D÷2))\n×2)÷360)×(D÷2)^2)÷(π×D(360−ACOS((H−D÷2)÷(D÷2))×2)÷360)^(2÷3)×S^0.5\n\nWhere:\nV = velocity of flow (ft/sec, m/s)\nC = 1.49 for English units, 1 for Metric units\nN = Manning roughness coefficient\nH = depth of water (ft, m)\nD = diameter of pipe (ft, m)\nS = slope of energy line (decimal)\n\nsee attached PDF for Equations & Illustrations.\n[attachment=2421]\nBEST!\nSlideRule\n08-18-2015, 03:55 PM\nPost: #4\n Tim Walker",
null,
"Junior Member Posts: 2 Joined: Aug 2015\nRE: HP-41 Programs\nTo Sliderule:\n\nThanks again!\n\nTim Walker\n « Next Oldest | Next Newest »\n\nUser(s) browsing this thread: 1 Guest(s)"
]
| [
null,
"https://www.hpmuseum.org/forum/images/buddy_offline.gif",
null,
"https://www.hpmuseum.org/forum/images/buddy_offline.gif",
null,
"https://www.hpmuseum.org/forum/uploads/avatars/avatar_187.gif",
null,
"https://www.hpmuseum.org/forum/images/buddy_offline.gif",
null,
"https://www.hpmuseum.org/forum/images/buddy_offline.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.81168735,"math_prob":0.95277196,"size":2308,"snap":"2022-40-2023-06","text_gpt3_token_len":831,"char_repetition_ratio":0.106770836,"word_repetition_ratio":0.18598382,"special_character_ratio":0.3544194,"punctuation_ratio":0.122137405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9890049,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T01:31:29Z\",\"WARC-Record-ID\":\"<urn:uuid:61f1f9d3-799e-4eb3-a4d9-f88b2b07294a>\",\"Content-Length\":\"25000\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:111584b2-d11f-441c-8776-366456345bad>\",\"WARC-Concurrent-To\":\"<urn:uuid:797d12d3-31a1-48a6-979f-7eb5bebf7f88>\",\"WARC-IP-Address\":\"209.197.117.170\",\"WARC-Target-URI\":\"https://www.hpmuseum.org/forum/thread-4503-post-40483.html\",\"WARC-Payload-Digest\":\"sha1:K2QQT653FGQF5FLXNXMYB6GYEV445DQ3\",\"WARC-Block-Digest\":\"sha1:E6D6GMJGKELHSWIY6SH6UPDM6ZMGKHPH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494852.95_warc_CC-MAIN-20230127001911-20230127031911-00023.warc.gz\"}"} |
https://vcs.vera-visions.com/eukara/hl-pak0-gen/src/branch/master/ccase.sh | [
"You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.\n\n#### 39 lines 486 B Raw Permalink Blame History\n\n ```#!/bin/sh ``` ``` ``` ```usage () ``` ```{ ``` ``` echo Usage: `basename \\$0` [-r ] file... >&2 ``` ``` exit 2 ``` ```} ``` ``` ``` ```if [ \\$# -lt 1 ] ``` ```then ``` ``` usage ``` ```fi ``` ``` ``` ```if [ \"\\$1\" = \"-r\" ] ``` ```then ``` ``` recursive=1 ``` ``` shift ``` ``` if [ \\$# -lt 1 ] ``` ``` then ``` ``` usage ``` ``` fi ``` ```else ``` ``` recursive=0 ``` ```fi ``` ``` ``` ```for i in \"\\$@\" ``` ```do ``` ``` new=`echo \\$i | tr \"[:upper:]\" \"[:lower:]\"` ``` ``` if [ \"\\$new\" != \"\\$i\" ] ``` ``` then ``` ``` echo \\$i \"->\" \\$new >&2 ``` ``` mv \"\\$i\" \"\\$new\" ``` ``` fi ``` ``` if [ \\$recursive = 1 -a -d \"\\$new\" ] ``` ``` then ``` ``` \\$0 -r \"\\$new\"/* ``` ``` fi ``` ```done ```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.59507895,"math_prob":0.56583846,"size":635,"snap":"2023-40-2023-50","text_gpt3_token_len":249,"char_repetition_ratio":0.15689382,"word_repetition_ratio":0.12738854,"special_character_ratio":0.52283466,"punctuation_ratio":0.10169491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98873794,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T21:18:57Z\",\"WARC-Record-ID\":\"<urn:uuid:64815090-4e2b-4a98-abc4-500c94aaae6d>\",\"Content-Length\":\"39035\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ceabea8-5487-4630-bba6-2bcaa10adb4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f5c37fe8-532b-44a6-ade1-3e90f5c719ef>\",\"WARC-IP-Address\":\"97.90.117.47\",\"WARC-Target-URI\":\"https://vcs.vera-visions.com/eukara/hl-pak0-gen/src/branch/master/ccase.sh\",\"WARC-Payload-Digest\":\"sha1:RW55WKNFHWSN577EAZELZ2DCZOPLQU4I\",\"WARC-Block-Digest\":\"sha1:XIHRLYHKMDB7OHSGYHOX66IIBW3I4PB3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.53_warc_CC-MAIN-20231203193127-20231203223127-00722.warc.gz\"}"} |
http://zdyboke.com/qspevdutipx-34-760-1.iunm | [
"• 产品中心\n\n• 舒比拓\n• 欧志姆\n• 纳弗拉\n• 皓蓝\n• 芜琼花\n• 哈皮卡\n• 皓齿清川西\n• 纳齿健\n• 优益洁\n• 乐益齿\n• 惠百施\n• 玖耀(中石油专属)\n\n• 猫小菲\n• 恩芝\n• 阿莎娜\n• 樱恋\n• 尼奈\n• 珂尼娜\n• 优香水滴\n\n• 清妍萃\n• 蝶印\n• 莱清菲\n• 卡蓓诺\n• 伊植贝\n• 迈森\n• 零可琳\n• 斐绿蔓\n• 津纯美\n• 百沐乐\n• 艾丝塔\n• 乐诗黎\n• 乐途\n• 赫丝町\n\n• 虹丝克润\n• 克林汉\n• 虹克林畔\n• 臻图\n• 悠蓓静\n• 葆色\n• 雅力新\n\n• 普玛兰得\n• 五色墨\n• 裴丽\n• 萩原\n• 艾兰\n• 棠印\n\n• 芜琼花\n• 希官羽\n• 魅儿\n• 葆色\n• 芳幸\n• 虹克林畔\n• 克林汉\n• 卫丫\n• 臻图\n• 诺因(中石油专属)\n\n• 翡皙\n• 千佰果\n• 芜琼花\n• 品颂\n• 露善\n• 卜赛维\n• 麦虎\n• 茉斐琳(中石油专属)\n• 安沐舒\n• 安沐舒\n\n• 迈森\n• 瑷微丹\n• 露善\n• 蝶印\n• 赫丝町\n• 艾佩丽可\n• 藤美姬\n• 植密社\n\n• 怡馥利\n• 纽碧缇\n• 桃沢子\n• 芒乐\n\n• 葆帝薇\n• 德露宝\n• 多顺\n• ### 家居日杂\n\n• 诺因(中石油专属)\n• 简沐\n• 新闻中心\n• 集团新闻\n• 产品新闻\n• 媒体报道\n•",
null,
"• 企业文化\n• 集团文化\n• 集团活动\n•",
null,
"• 和麦秀\n• 线上销售\n• 加入我们\n• 校园招聘\n• 社会招聘\n• 薪酬福利\n•",
null,
"• 联系我们\n• 联系我们\n•",
null,
"• ### 纳齿健牌 钛合托玛琳去垢牙刷\n\n•",
null,
"• #### 纳齿健牌 钛合托玛琳去垢牙刷\n\n纳齿健 钛合托玛琳去垢牙刷\n\n采用双层细软纤丝细毛,可以更好的清洁牙间隙和牙垢。添加电气石和二氧化钛成分,增加纤细弹性。有效析出牙齿沉淀色素,长期使用,使牙齿沉淀牙垢,逐步脱落。并且在刷头达到防止细菌滋生的作用,保证牙刷的洁净卫生,达到牙齿洁净美白的最佳状态。刷柄如宝石般通透晶莹,让你在刷牙时有高品质的生活体验。"
]
| [
null,
"http://www.hemaiheda.com/uploads/image/20151209/1449637815.jpg",
null,
"http://www.hemaiheda.com/uploads/image/20151209/1449642402.jpg",
null,
"http://www.hemaiheda.com/uploads/image/20170410/1491808356.jpg",
null,
"http://www.hemaiheda.com/uploads/image/20151209/1449639744.jpg",
null,
"http://www.hemaiheda.com/uploads/image/20151228/1451292049.png",
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.93665886,"math_prob":0.41606408,"size":358,"snap":"2020-34-2020-40","text_gpt3_token_len":548,"char_repetition_ratio":0.14124294,"word_repetition_ratio":0.25,"special_character_ratio":0.10335196,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9734184,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T21:15:53Z\",\"WARC-Record-ID\":\"<urn:uuid:1533af0a-434b-4bbc-878e-be50fbc98548>\",\"Content-Length\":\"39735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09cb0105-2635-46b8-a102-d004134c89f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a218e7e-0a4e-48a6-b9f0-19a077e6373e>\",\"WARC-IP-Address\":\"175.29.246.119\",\"WARC-Target-URI\":\"http://zdyboke.com/qspevdutipx-34-760-1.iunm\",\"WARC-Payload-Digest\":\"sha1:WJS3IFBNP6YCJHAEDFHCPDUVXTLNE5YK\",\"WARC-Block-Digest\":\"sha1:A7O6EZXU23X3GHKHHXBMTU5A4PN6HFJ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439741154.98_warc_CC-MAIN-20200815184756-20200815214756-00233.warc.gz\"}"} |
https://fawalltweakunes.gq/avo-course-notes-part-4-three-term-avo-inversion.php | [
"### AVO Course Notes, Part 4. Three-term AVO Inversion\n\nShuey R. Williams M. Etris, Nick J. Crabtree and Jan Dewar Errors and Omissions A large volume of data is being converted to make this online archive.\n\n• PEH:Reservoir Geophysics!\n• AVO Modeling in Seismic Processing and Interpretation Part 1. Fundamentals?\n• Vita: Life in a Zone of Social Abandonment.\n• Spin 2004: 16th International Spin Physics Symposium; Workshop On Polerized Electron Sources and Polarimeters.\n• Zoeppritz-based AVO inversion using an improved Markov chain Monte Carlo method | SpringerLink.\n• Machine Learning Methodology;\n• Technical article: Gardner's relations and AVO inversion.\n\nIf you notice any problems with an article examples: incorrect or missing figures, issue with rendering of formulas etc. The CSEG does not endorse or warrant the information printed. Article References Print. Examples of Azimuthal AVO The literature abounds with numerous excellent examples and observations of AVAZ some of which are shown here in order to better understand the expectation and potential in pursuing this type of analysis Lynn et al.\n\nFigure 1. Figure 2. AVAZ variation: parallel is strong flat gradient and perpendicular is weak -ve gradient. Figure 3. Figure 4. Figure 5. Figure 6. Figure 7a. Figure 7b. Note that unlike figure 6, the relative stack response from these azimuthally anisotropic gradients would be stronger than the isotropic equivalent. Figure 8a. Figure 8b. Table 1. Figure 9a. Figure Figure 11a. Figure 11b. Figure 11c. Zoom of figure 11a for 3 term vs. Figure 11d. Figure 12a. Figure 12b. Figure 13a.\n\n## AVO Modeling in Seismic Processing and Interpretation Part 1. Fundamentals | CSEG RECORDER\n\nHTI vertical fracture model. Figure 13b. VTI horizontal layering model. Pre-drill MuRho inversion from 3D at proposed Well A location showing highest rigidity Colorado B zone hot colours that is most likely to support natural or induced fractures. In our first example,the methodis tested on a simple synthetic. This example was usedinitially becauseit truly represents a \"blocky\" impedance and therefor. In this casewe haveuseda smoothedversion of the sonic velocities to provide the constraint.\n\nA visual comparisonwoulU indicate that the extracteU velocity profile corresponds very well to the input. A moredetailed comparisonof the two figures showsthat the original and extracted logs do not matchperfectly. It is doubtful that a perfect match could ever be obtai neU. At the' top of the figure we see a sonic log with 'its reflectivity sequencebelow.\n\nIn this example,we have assumedthat the density is constant, but this is not a necessary restriction. The reflectivity wascbnvolvedwith a zero-phasewavelet,bandlimitedfrom10 to 60 Hz, andthe final syntheticis shownat the bottomof the figure. The results of the maximum-likelihood inversion method are sbown in Figure 6. In this calculation, the waveletwasassumed known. Notethe blocky nature of the estimatedvelocity profile compared with the actual sonic log profile. Again, the input and output logs do not matchperfectly. The fact that the two do not perfectly matchis due to slight errors in the reflectivity sizes whichare amplified by the integration process,and is partially the effect of the constaintused.\n\nTheconstraintshownin Figure 6. In practice, this information could be derived from stacking velocities or from nearby well control. This blocky impedance canbe contrastedwith the more traditional narrow-band. Finally, Figure 6. 
In summary, maximum-likelihoodinversion is a procedurewhich extracts a broad-band estimate of the seismic reflectivity and, by the introduction of 1inear constraints, al lows us to invert to an acoustic impedancesection which retains the major geological features of boreholelog data.\n\nAnother method of- recursive, single trace inversion which uses a \"sparse-spike\" assumption is the L1 normmethod, developed primarilyby Dr. DougOldenburgof UBC. This method is also often referred to as the linear programming method,and this can lead to confusion.\n\nActually, the two namesrefer to separateaspectsof the method. Themathematical modelusedin the construction of the algorithm is the minimizationof the L1 norm. However,the methodusedto solve the problem is linear programming. The basic theory of this methodis found in a paper by Oldenburg, et el The authors point out that if a high-resolution aleconvolution is performedon the seismictrace, the resulting estimateof the reflectivity can be thought of as an averagedversion of the original reflectivity, as shownat the top of Figure6.\n\nNow, the layered earth model equates to a \"blocky\" impedancefunction, which in turn equates to a \"sparse-spiKe\" reflectivity function. The above constraint will thus restrict our inverted result to a \"sparse\" structure so that extremely fine structure, such as very small reflection coefficients, will not be fully inverted. The other key difference in the linear programmingmethod is that the L1 norm is minimized rather than the L2 norm. The L1 norm is defined as the sum of the absolute values of the seismic trace.\n\nThe two norms are shownbelow, applied to the trace x: x1 : xi and x2: xi i i:1 The fact that the L1 norm favours a \"sparse\" structure is shown in the following simple example. Taken from the notes to Dr.\n\nOldenburg's CSEG convention course' \"Inverse theory with application to aleconvolution and seismograminversion\". Hence, minimizing the L1 norm would reveal that g is a \"preferred\" seismic trace based on it's sparseness. Oldenburget al. That is, the reliable frequencyband is honored whileat the same timea sparsereflectivity is created.\n\n• A Day and a Night and a Day: A Novel.\n• Practical applications of P-wave AVO for unconventional gas Resource Plays!\n• Babys First Book of Seriously Fucked-up Shit.\n\nThe results of their. The data consist of 49 traces with a sample rate of 4 msecand a Hz bandwidth. The figure showsthe linear programming reflectivity and impedanceestimates below the input seismic section. It should be pointed out that a three trace spatial smootherhas been applied to the final results in both cases. Finally, let us consider a dataset fromAlberta which has been processeU through the LP inversion method. The input seismic is shownin Figure 6. The constraints useU here were from well log data.\n\nIn the final inversion notice that the impedance has been superimposed on the final reflectivity estimate using a grey level scale. The sequence! BaseUon the these data handouts, do the following interpretation exerc i se: [ Tie the synthetic to the seismicline at SP Hint- use reverse polari ty syntheti c. Use a blocked off version of the sonic log. As the time separation between reflection coefficients becomessmaller, the interference between overlapping wavelets becomesmore severe.\n\nIndeed, in Figure 6. In fact, the effect is more of a differentiation of the wavelet, which alters the amplitude spectrumas wel1 as the phase spectrum. 
In this section we will look closer at the effect of wavelets on thin beds and how. The first comprehensivel'ook at thin bed effects was done by Widess In this paper he used a model which has becomethe standard for discussing thin beds, the wedgemodel. That is, consider a high velocity laye6 encasedin a low velocity layer or vice versa and allow the thickness of the layer to pinch out to zero.",
null,
"Next create the reflectivity responsefrom the impedance,and convolvewith a wavelet. The thickness of the layer is given in terms of two-waytime through the layer and is then related to the dominantperiod of the wavelet. The usual wavelet used is a Ricker becauseof the simpl i city of its shape. Figure 7.\n\n### Lithology and fluid prediction from amplitude versus offset (AVO) seismic data\n\nNote that what is refertea to as a wavelengthin his plot i s actually twice the dominantperiod. A few important points can be noted from Figure 7. First, the wavelets start interfering with eackotherat a thicknessjust below two dominant periods,but remain Clistinguishable down to about one period."
]
| [
null,
"https://www.researchgate.net/profile/Paul_Veeken/publication/269490374/figure/fig1/AS:668993115402248@1536511796666/a-The-AVO-response-on-a-refl-ection-in-a-CDP-gather-The-amplitude-changes-clearly-with.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8556537,"math_prob":0.8420299,"size":8828,"snap":"2020-24-2020-29","text_gpt3_token_len":1988,"char_repetition_ratio":0.118880324,"word_repetition_ratio":0.026105873,"special_character_ratio":0.19324875,"punctuation_ratio":0.117981076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9624548,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-12T15:53:05Z\",\"WARC-Record-ID\":\"<urn:uuid:74a1a736-7c9c-4ae2-ac58-12699e245e5d>\",\"Content-Length\":\"21153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cc62a09-0b6b-4258-bc35-e72027240cfe>\",\"WARC-Concurrent-To\":\"<urn:uuid:74d8c2b4-1c0f-4746-aff5-e12b5e344566>\",\"WARC-IP-Address\":\"104.27.168.38\",\"WARC-Target-URI\":\"https://fawalltweakunes.gq/avo-course-notes-part-4-three-term-avo-inversion.php\",\"WARC-Payload-Digest\":\"sha1:JY6G7NJOWRVEQDN752QGYXNJ2U26N6CJ\",\"WARC-Block-Digest\":\"sha1:UEAIKU62X2NIRGZYY44SY5Z7T2MK2RVF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657138752.92_warc_CC-MAIN-20200712144738-20200712174738-00197.warc.gz\"}"} |
https://www.colorhexa.com/cf0053 | [
"# #cf0053 Color Information\n\nIn a RGB color space, hex #cf0053 is composed of 81.2% red, 0% green and 32.5% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 100% magenta, 59.9% yellow and 18.8% black. It has a hue angle of 335.9 degrees, a saturation of 100% and a lightness of 40.6%. #cf0053 color hex could be obtained by blending #ff00a6 with #9f0000. Closest websafe color is: #cc0066.\n\n• R 81\n• G 0\n• B 33\nRGB color chart\n• C 0\n• M 100\n• Y 60\n• K 19\nCMYK color chart\n\n#cf0053 color description : Strong pink.\n\n# #cf0053 Color Conversion\n\nThe hexadecimal color #cf0053 has RGB values of R:207, G:0, B:83 and CMYK values of C:0, M:1, Y:0.6, K:0.19. Its decimal value is 13566035.\n\nHex triplet RGB Decimal cf0053 `#cf0053` 207, 0, 83 `rgb(207,0,83)` 81.2, 0, 32.5 `rgb(81.2%,0%,32.5%)` 0, 100, 60, 19 335.9°, 100, 40.6 `hsl(335.9,100%,40.6%)` 335.9°, 100, 81.2 cc0066 `#cc0066`\nCIE-LAB 44.08, 70.912, 15.106 27.295, 13.893, 9.428 0.539, 0.274, 13.893 44.08, 72.503, 12.025 44.08, 123.633, 3.06 37.274, 65.482, 11.095 11001111, 00000000, 01010011\n\n# Color Schemes with #cf0053\n\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #00cf7c\n``#00cf7c` `rgb(0,207,124)``\nComplementary Color\n• #cf00bb\n``#cf00bb` `rgb(207,0,187)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #cf1500\n``#cf1500` `rgb(207,21,0)``\nAnalogous Color\n• #00bbcf\n``#00bbcf` `rgb(0,187,207)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #00cf15\n``#00cf15` `rgb(0,207,21)``\nSplit Complementary Color\n• #0053cf\n``#0053cf` `rgb(0,83,207)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #53cf00\n``#53cf00` `rgb(83,207,0)``\n• #7c00cf\n``#7c00cf` `rgb(124,0,207)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #53cf00\n``#53cf00` `rgb(83,207,0)``\n• #00cf7c\n``#00cf7c` `rgb(0,207,124)``\n• #830034\n``#830034` `rgb(131,0,52)``\n• #9c003f\n``#9c003f` `rgb(156,0,63)``\n• #b60049\n``#b60049` `rgb(182,0,73)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #e9005d\n``#e9005d` `rgb(233,0,93)``\n• #ff0368\n``#ff0368` `rgb(255,3,104)``\n• #ff1d77\n``#ff1d77` `rgb(255,29,119)``\nMonochromatic Color\n\n# Alternatives to #cf0053\n\nBelow, you can see some colors close to #cf0053. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #cf0087\n``#cf0087` `rgb(207,0,135)``\n• #cf0076\n``#cf0076` `rgb(207,0,118)``\n• #cf0064\n``#cf0064` `rgb(207,0,100)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #cf0042\n``#cf0042` `rgb(207,0,66)``\n• #cf0030\n``#cf0030` `rgb(207,0,48)``\n• #cf001f\n``#cf001f` `rgb(207,0,31)``\nSimilar Colors\n\n# #cf0053 Preview\n\nThis text has a font color of #cf0053.\n\n``<span style=\"color:#cf0053;\">Text here</span>``\n#cf0053 background color\n\nThis paragraph has a background color of #cf0053.\n\n``<p style=\"background-color:#cf0053;\">Content here</p>``\n#cf0053 border color\n\nThis element has a border color of #cf0053.\n\n``<div style=\"border:1px solid #cf0053;\">Content here</div>``\nCSS codes\n``.text {color:#cf0053;}``\n``.background {background-color:#cf0053;}``\n``.border {border:1px solid #cf0053;}``\n\n# Shades and Tints of #cf0053\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #0b0004 is the darkest color, while #fff6fa is the lightest one.\n\n• #0b0004\n``#0b0004` `rgb(11,0,4)``\n• #1e000c\n``#1e000c` `rgb(30,0,12)``\n• #320014\n``#320014` `rgb(50,0,20)``\n• #46001c\n``#46001c` `rgb(70,0,28)``\n• #590024\n``#590024` `rgb(89,0,36)``\n• #6d002c\n``#6d002c` `rgb(109,0,44)``\n• #810034\n``#810034` `rgb(129,0,52)``\n• #94003b\n``#94003b` `rgb(148,0,59)``\n• #a80043\n``#a80043` `rgb(168,0,67)``\n• #bb004b\n``#bb004b` `rgb(187,0,75)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\n• #e3005b\n``#e3005b` `rgb(227,0,91)``\n• #f60063\n``#f60063` `rgb(246,0,99)``\n• #ff0b6d\n``#ff0b6d` `rgb(255,11,109)``\n• #ff1e78\n``#ff1e78` `rgb(255,30,120)``\n• #ff3284\n``#ff3284` `rgb(255,50,132)``\n• #ff4690\n``#ff4690` `rgb(255,70,144)``\n• #ff599c\n``#ff599c` `rgb(255,89,156)``\n• #ff6da7\n``#ff6da7` `rgb(255,109,167)``\n• #ff81b3\n``#ff81b3` `rgb(255,129,179)``\n• #ff94bf\n``#ff94bf` `rgb(255,148,191)``\n• #ffa8cb\n``#ffa8cb` `rgb(255,168,203)``\n• #ffbbd6\n``#ffbbd6` `rgb(255,187,214)``\n• #ffcfe2\n``#ffcfe2` `rgb(255,207,226)``\n• #ffe3ee\n``#ffe3ee` `rgb(255,227,238)``\n• #fff6fa\n``#fff6fa` `rgb(255,246,250)``\nTint Color Variation\n\n# Tones of #cf0053\n\nA tone is produced by adding gray to any pure hue. In this case, #6f6066 is the less saturated color, while #cf0053 is the most saturated one.\n\n• #6f6066\n``#6f6066` `rgb(111,96,102)``\n• #775864\n``#775864` `rgb(119,88,100)``\n• #7f5063\n``#7f5063` `rgb(127,80,99)``\n• #874861\n``#874861` `rgb(135,72,97)``\n• #8f4060\n``#8f4060` `rgb(143,64,96)``\n• #97385e\n``#97385e` `rgb(151,56,94)``\n• #9f305c\n``#9f305c` `rgb(159,48,92)``\n• #a7285b\n``#a7285b` `rgb(167,40,91)``\n• #af2059\n``#af2059` `rgb(175,32,89)``\n• #b71858\n``#b71858` `rgb(183,24,88)``\n• #bf1056\n``#bf1056` `rgb(191,16,86)``\n• #c70855\n``#c70855` `rgb(199,8,85)``\n• #cf0053\n``#cf0053` `rgb(207,0,83)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #cf0053 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.53172874,"math_prob":0.8585422,"size":3655,"snap":"2020-45-2020-50","text_gpt3_token_len":1594,"char_repetition_ratio":0.14023556,"word_repetition_ratio":0.011111111,"special_character_ratio":0.55376196,"punctuation_ratio":0.23276836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98400325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T08:55:42Z\",\"WARC-Record-ID\":\"<urn:uuid:1c0f60a9-c6af-483f-b7d4-b680b97eed60>\",\"Content-Length\":\"36214\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9513d38-a645-466c-9399-7077f8652e60>\",\"WARC-Concurrent-To\":\"<urn:uuid:3dbdd26c-e65f-44ab-b4bd-368bdc52d370>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/cf0053\",\"WARC-Payload-Digest\":\"sha1:F7GVJWLY6CNPQLIIP3F7OD5R6ASHLDR4\",\"WARC-Block-Digest\":\"sha1:RI3UMZMA2TC5TZHTJI4SPL6FHGPOIJAT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107897022.61_warc_CC-MAIN-20201028073614-20201028103614-00155.warc.gz\"}"} |
https://electronics.stackexchange.com/questions/479479/how-do-i-find-specific-transfer-functions-from-a-plant-of-a-mimo-system | [
"# How do I find specific transfer functions from a plant of a MIMO system?\n\nSuppose I have a physical system, for example a mass-spring-damper system, which has been written in state space. Now, suppose that the matrices describing the system are $$\\A,B,C,D\\$$.\n\nMoreover, let' s say it is a MIMO system, and I have $$\\6\\$$ inputs, which are $$\\u_{1},u_{2},d_{1},d_{2},n_{1},n_{2}\\$$, where the first two are the outputs of the controller, the second pair are the disturbances and the last ones are the noises.\n\nAns there are $$\\4\\$$ outputs, which are the displacement of a mass, defined as $$\\z_{1},z_{2}\\$$ each of which has two components.\n\nSo, I define the plant as:\n\nG = ss(A,B,C,D)\n\n\nnow, I have my plant defined. Suppose now I want to find here the transfer functions of the systems, such as the sensitivity function, the complementary sensitivity function and the control sensitivity function, how could I do?\n\nI know that I will have smething like:\n\n$$\\\\begin{bmatrix} z_{11}\\\\ z_{12}\\\\ z_{21}\\\\ z_{2,2} \\end{bmatrix} = \\begin{bmatrix} & & & \\\\ & & & \\\\ & & & \\\\ & & & \\\\ & & & \\\\ & & & \\end{bmatrix} \\cdot \\begin{bmatrix} d_1\\\\ d_2\\\\ u_1\\\\ u_2\\\\ n_1\\\\ n_2 \\end{bmatrix}\\$$\n\nand to know the transfer function I need to look at the appropriate entry in the transfer matrix, but how do I know that the order of the inputs and of the outputs is this?\n\nSo, what I mean is that I could also have\n\n$$\\\\begin{bmatrix} z_{21}\\\\ z_{22}\\\\ z_{11}\\\\ z_{12} \\end{bmatrix} = \\begin{bmatrix} & & & \\\\ & & & \\\\ & & & \\\\ & & & \\\\ & & & \\\\ & & & \\end{bmatrix} \\cdot \\begin{bmatrix} u_1\\\\ u_2\\\\ d_1\\\\ d_2\\\\ n_1\\\\ n_2 \\end{bmatrix}\\$$\n\nor other combinations, so if I do $$\\G(1,1)\\$$, in the first case I obtain a transfer function, and in the second case I obtain a different transfer function.\n\nSo, how can I do?\n\nFor any linear time-invariant system the transfer function will be\n\n$$G(s) = YU^{-1} = C (sI-A)^{-1}B + D,$$\n\nand the blank matrix in you question will be $$\\G(s) \\in \\mathcal{R}^{4 \\times 6} \\$$, with the top element of the output matrix $$\\ z\\$$\n\n$$z_{1} = ([C_{1i}] (sI-A)^{-1}B + D)U,$$\n\nwhere $$\\ [C_{1i}] \\$$ is the row vector with the elements in the first row of $$\\C\\$$. Do notice that $$\\z_1(u_1,u_2,\\dots,n_2) \\$$.\n\nIf you permute the inputs in $$\\U\\$$ it is the same as applying an invertible matrix $$\\ P\\$$ to have a $$\\ \\hat{U} = PU\\$$. So, as long as $$\\ P\\$$ is just a permutation (a single 1 per line and no columns has more than a single 1), if $$\\G(1,1)\\$$ gives you a TF you will be able to find that same TF somewhere in $$\\ \\hat{G} \\$$.\n\nMoreover, if you have\n\n$$\\begin{bmatrix} d_1\\\\ d_2\\\\ u_1\\\\ u_2\\\\ n_1\\\\ n_2 \\end{bmatrix} \\rightarrow G,$$\n\n$$\\begin{bmatrix} u_1\\\\ u_2\\\\ d_1\\\\ d_2\\\\ n_1\\\\ n_2 \\end{bmatrix} \\rightarrow \\hat{G},$$\n\nThe permutation is\n\n$$P = \\begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 1 & 0 & 0 \\\\ 1 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 \\end{bmatrix}$$\n\n$$G(1,1) = \\hat{G}(1,3).$$\n• Thanks for answering, so if I have understood, if I select $G(1,1)$, doesn' t matter if I do a permutation of the inputs or of the outputs, I will alway get the same result, so always the same transfer function for $G(1,1)$. But how do I know which transfer function is in position $G(1,1)$? 
For example I know that the transfer function from the disturbance to the output is the sensitivity function, how do I understand if it is in $G(1,1)$? Thank you again.\n• \"if $G(1,1)$ gives you a TF you will be able to find that same TF somewhere in $\\hat{G}$.\" but in many situations $G(1,1) \\neq \\hat{ G}(1,1)$.\n• Not sure what you mean, but if you want to find how $u_4 \\rightarrow y_1$ look at $z_{1,u_4} = z_{y_1,u_4} = ([C_{1,i}] (sI-A)^{-1}[B_{j,4}] + [D_{j,4}])u_4,$"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.860732,"math_prob":0.99992335,"size":1698,"snap":"2022-40-2023-06","text_gpt3_token_len":524,"char_repetition_ratio":0.17591499,"word_repetition_ratio":0.16722408,"special_character_ratio":0.36277974,"punctuation_ratio":0.13294798,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998764,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-09T02:38:12Z\",\"WARC-Record-ID\":\"<urn:uuid:839b3e74-08e9-47c6-96a0-27934e8153af>\",\"Content-Length\":\"152677\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30d4e80d-d834-44bb-870e-dba6c621b467>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e104eee-32a8-4b5c-a916-4185fe32a7c9>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/479479/how-do-i-find-specific-transfer-functions-from-a-plant-of-a-mimo-system\",\"WARC-Payload-Digest\":\"sha1:YYB3MLXSBN6EHAVBNO26YVJUYLUU25RW\",\"WARC-Block-Digest\":\"sha1:PKJ3OLEMZPHCSQXAWLSHMLPNGJHQ45HF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764501066.53_warc_CC-MAIN-20230209014102-20230209044102-00788.warc.gz\"}"} |
https://physics.stackexchange.com/questions/350196/show-that-there-are-compressive-forces-in-a-static-fluid-not-by-arguing-that-the | [
"# Show that there are compressive forces in a static fluid not by arguing that there is no shear force\n\nAll textbooks I have read say that there is no shear force in a static fluid because a fluid will flow continuously under the influence of a force parallel to its surface. I understand well this part. However, no textbook really shows why there can be compressive forces in a fluid. The fact that there is no shear force in a static fluid does not support the argument that there are compressive forces. I could as well say there is no compressive force in a static fluid (of course I know this is not the case).\n\nIn fact, I did try to find an example to support the argument that there are compressive forces in a static fluid but I encountered this repeating issue: a compressive force on one plane can be a shear force on another plane if that plane is oriented at right angle to the original plane.\n\nFurthermore, I couldn't find a virtual experiment in my head in which I could exert a compressive force in a fluid. If you think about it, we cannot really \"push\" a liquid.\n\nMy questions are:\n\n1. How do we prove/show that there are compressive forces in a static fluid.\n\n2. How do we prove/show that a fluid element does not move continuously under the influence of compressive forces.\n\n• What is the question?\n– JMac\nAug 4, 2017 at 1:30\n\nIn a static fluid compressive force per unit area is called pressure. First off let me clear a misunderstanding that you have viz. \"...a compressive force on one plane can be a shear force on another plane if that plane is oriented at right angle to the original plane\". This is simply incorrect. Stress is not a vector but a tensor, and pressure is simply the isotropic part of that tensor. Now you must be aware that a tensor is a linear map from vectors to vectors, which is to say that a tensor takes vector as an input and outputs another vector. Now stress tensor $\\tau$, by definition, takes area vector as its input, and outputs force vector (per unit area) acting on that area. Therefore to find force on any oriented plane located at $\\mathbf{x}$ and whose unit normal is $\\mathbf{n}$, you must take the stress tensor at that point $\\tau_\\mathbf{x}$ and feed $\\mathbf{n}$ into it to get the force-per-unit-area vector $\\mathbf{f}=\\tau_\\mathbf{x}(\\mathbf{n})$. The force vector on a plane with a different orientation, say $\\mathbf{n}'$, located at the same point, is $\\mathbf{f}'=\\tau_\\mathbf{x}(\\mathbf{n}')$, and in general $\\mathbf{f}'\\neq \\mathbf{f}$. In static fluid, the force vector (per unit area) is always normal to any plane, and therefore if you take two orthogonal planes the corresponding force vectors will also be orthogonal. That is why compressive force on one plane does not become shear force on another plane; the forces on the two planes are to be computed separately and there is no obvious relation between them.\n\nNow coming to your primary question: We can show that compressive stresses must exist in a static fluid by descending to the molecular level and using the definition of stress as momentum flux per unit area across an oriented plane due to molecules which are perpetually in motion. But I think you are seeking an explanation at the continuum level.\n\nIn a restricted sense, you \"... could as well say there is no compressive force in a static fluid...\". 
The reason is that usually (though not always) pressure differences are all that matter, and therefore you may consider any value of pressure to be your zero reference (just like in the case of potential energy). Pressure measured with respect to some such (arbitrary) zero reference is called \"gauge pressure\". But I think what you have in mind is \"absolute pressure\", in which the vacuum is taken to be your reference for measurement of pressure in any other system.\n\nWhenever external forces act on a body they will cause stress inside the body (there could be mean motion/rotation as well, but that does not concern us). In an open container of fluid, even neglecting the contribution due to gravity, there is the atmosphere pushing down on the fluid and container walls that are pushing from the sides. What if we go into deep space where there is no gravity and just consider a free blob of liquid not inside any container? There is still the surface tension force to reckon with. If the fluid blob becomes too big (e.g. stars) there are forces due to self-gravitation.\n\nSo to summarize:\n\n1. How do we prove/show that there are compressive forces in a static fluid?\n\nWhen a body is acted upon by external forces it sets up stresses inside the body. If those stresses cannot be shear (because by definition, shear stresses induce motion in a fluid), then they must be compressive stresses.\n\n2. How do we prove/show that a fluid element does not move continuously under the influence of compressive forces?\n\nBecause the presence of compressive stress does not imply the presence of shear stress.\n\n• I am still not completely satisfied with the answers though. Could you provide a direct observation leading to the conclusion that a fluid element will not move continuously under compressive forces? Aug 4, 2017 at 18:23\n• @Geophysics You may un-accept the answer if you are not satisfied with it. Maybe you will get other answers. I am not sure what you mean by direct observation. How would you observe such a thing? We define fluids to be those that will deform continuously under shear stress. Can you think of a situation where compressive forces alone can set a fluid into continuous motion?\n– Deep\nAug 5, 2017 at 3:33\n• I tried but could not think of an example, and I think that is my problem: an indirect answer like yours, though it makes complete sense, is hard for me to accept. The same goes for textbooks. All of the ones I have read only provide an example in which a shear stress is applied and the fluid moves, and conclude that a fluid does not exert shear forces. However, none of them provide an example for normal stresses. Their conclusion about normal stresses is indirect, just like yours. I just find it strange that nobody could provide a direct observation for that property of a fluid. Aug 5, 2017 at 5:40"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.932636,"math_prob":0.9193176,"size":3568,"snap":"2023-40-2023-50","text_gpt3_token_len":765,"char_repetition_ratio":0.1386083,"word_repetition_ratio":0.013355592,"special_character_ratio":0.21580717,"punctuation_ratio":0.08802309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.989294,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T23:50:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b3414169-50a4-4e34-af3b-859391572e4e>\",\"Content-Length\":\"169134\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c16c2bec-3ed7-4e5b-848d-b3e76aee6d06>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7329cf2-8751-4500-b840-0bffae7c3ef1>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/350196/show-that-there-are-compressive-forces-in-a-static-fluid-not-by-arguing-that-the\",\"WARC-Payload-Digest\":\"sha1:LJLUH2F5GK4MXNXQAWLGE3DVSGC7ZM5H\",\"WARC-Block-Digest\":\"sha1:O4SM4U42W3FU7AU4KR74K5B6UTWVVDGP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102697.89_warc_CC-MAIN-20231210221943-20231211011943-00878.warc.gz\"}"} |
http://mathonline.wikidot.com/the-group-action-of-conjugation-of-a-subgroup-on-a-group | [
"The Group Action of Conjugation of a Subgroup on a Group\n\n# The Group Action of Conjugation of a Subgroup on a Group\n\nLet $G$ be a group and let $H$ be a subgroup of $G$. We can define a group action of the subgroup $H$ on the set $G$ as follows. For each $h \\in H$ and for each $g \\in G$ let $(h, g) \\to hgh^{-1} \\in G$. We will now check that this is indeed a group action of the group $H$ on the set $G$.\n\nFor all $h_1, h_2 \\in H$ and for all $g \\in G$ we have that:\n\n(1)\n\\begin{align} \\quad h_1(h_2g) = h_1(h_2gh_2^{-1}) = h_1h_2gh_2^{-1}h_1^{-1} = (h_1h_2)g \\end{align}\n\nAnd for all $g \\in G$ we have that (where $e \\in H \\subseteq G$ is the identity):\n\n(2)\n\\begin{align} \\quad ge = geg^{-1} = gg^{-1} = e \\end{align}\n\nThus $(h, g) \\to hgh^{-1}$ is indeed a group action of the group $H$ on the set $G$."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.715195,"math_prob":0.9999883,"size":817,"snap":"2020-45-2020-50","text_gpt3_token_len":308,"char_repetition_ratio":0.1500615,"word_repetition_ratio":0.24528302,"special_character_ratio":0.38678092,"punctuation_ratio":0.05464481,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999937,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T02:26:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cbde2902-8fe7-4dfd-a535-4531d9bbb394>\",\"Content-Length\":\"15058\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:498a222d-e073-452f-8e22-383c9c2aed4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:57fe53c7-2d92-42cd-b5b3-2da7eeeaf727>\",\"WARC-IP-Address\":\"107.20.139.176\",\"WARC-Target-URI\":\"http://mathonline.wikidot.com/the-group-action-of-conjugation-of-a-subgroup-on-a-group\",\"WARC-Payload-Digest\":\"sha1:SNB7AMSEE5VQEHW2ANTBAWW3ZAVOTSYG\",\"WARC-Block-Digest\":\"sha1:RGP2A3NDTCOBPYRS7JM24PYM6OEYEWWW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107885126.36_warc_CC-MAIN-20201025012538-20201025042538-00401.warc.gz\"}"} |
http://basesloadedumpiring.com/nfvk5yb/237019-properties-of-real-numbers-worksheet-pdf | [
"Worksheet On Real Numbers For Class 10 Pdf. And c are real numbers. Neucha Love Ya Like A Sister Take any two whole numbers and add them. Basic Number Properties The ideas behind the basic properties of real numbers are rather simple. merely reinforcement of previously learned properties of sets and real numbers. Close. 2. Calculations Using Significant Figures Worksheet A... Fha Streamline Worksheet Without Appraisal 2019, Abundance Of Isotopes Chem Worksheet 4 3 Answer Key, Chemical Bonding Worksheet Fill In The Blanks, Production Possibilities Frontier Worksheet Answers. The numbers increase from left to right and the point labeled 0 is the the point on a number line that corresponds to a real number is the of the number. Algebra Properties Let a, b, and c be real numbers, variables, or algebraic expressions. ... Find the Missing Numbers. Real numbers are simply the combination of rational and irrational numbers, in the number system. Al 2 o 3 22. Naming Ionic And Covalent Com... Polyatomic ions are periodic trends table worksheet periodic trends table worksheet by gemma warner savio staff 10 months ago 6 minutes 18 ... 21 posts related to writing formulas for ionic compounds worksheet 2. It cannot be both. Identify and apply the properties of real numbers closure commutative associative distributive identity inverse 1 which property is illustrated by the equation ax ay a x y. Addition. Holt Algebra 2 1-2 Properties of Real Numbers For all real numbers a and b, WORDS Distributive Property When you multiply a sum by a number, the result is the same whether you add and then multiply or whether you multiply each term by the number and add the products. Order Of Operations With Real Numbers Worksheet. Distributive property of multiplication worksheet - I. Distributive property of multiplication worksheet - II. example: 9 + 6 + 1 = 16 1 + 6 + 9 = 16 Associative Property of Addition You can group addends different ways, and the sum will not change. . Quadratic equations word problems worksheet. The Real Number System - Displaying top 8 worksheets found for this concept.. We define the real number system to be a set R together with an ordered pair of functions from R X R into R that satisfy the seven properties listed in this and the succeeding two sections of this chapter. px, Please allow access to the microphone This is a set of 4 properties of real numbers worksheets.Worksheet 1: Commutative, Associative and Distributive only. This lesson on Properties of Real Numbers is one that gets covered at the beginning of every Algebra course. True means that the statement is true for all real numbers. Commutative, Associatice and Distributive only.Worksheet 3: Matching. A.N.1: Identifying Properties: Identify and apply the properties of real numbers (closure, commutative, associative, distributive, identity, inverse) 1 Which property is illustrated by the equation ax+ay =a(x+y)? Commutative property of addition. U.S. National Standards. Which sentence is an example of the distributive property? Solving Systems Of Equations Algebraically Workshe... Converse Of The Pythagorean Theorem Worksheet. . Question 5 Which property of addition does the following expresion illustrate? Basic Number Properties Commutative Property a. On the other hand, imaginary numbers are the un-real numbers and cannot be represented on the number line. 4 + 5 = 9 (whole number) 8 + 4 = 12 (whole number) 90 + 0 = 90 (whole number) It is clear from the above examples that sum of any two whole numbers results in whole number. 
With Identity ©W P2p0 s1S2 g 5Keu6t 2aG ESBoPfltew VaermeP uL TL vCC. Special Elite Gurmukhi Just Me Again Down Here c. Some irrational numbers are integers. Focus on understanding solving equations as a process of reasoning and explaining the reasoning. Zero is the additive inverse of itself. Gloria Hallelujah Numbers: Free worksheets, handouts, esl printable exercises pdf and resources. ZIP (176.79 KB) This r squared creation is the complementary worksheet for our 1.6 PowerPoint, which students will complete as homework. Crafty Girls t Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 1 Name_____ Sets of Real Numbers … TRUE means that the statement is true for all real numbers. Main content: Properties of Real Numbers Other contents: Commutative, Associative Add to my workbooks (1) Download file pdf Embed in my website or blog Add to Google Classroom Add to Microsoft Teams Share through Whatsapp Hello Math Teachers! The printable properties worksheets for 3rd grade and 4th grade kids include commutative and associative properties of addition and multiplication. Observe the sum carefully. VT323 Black Ops One Integers and absolute value worksheets. ID: 1132291 Language: English School subject: Math Grade/level: 7/8 Age: 10-14 Main content: Properties Other contents: Add to my workbooks (4) Download file pdf Embed in … Commutative property of multiplication. When two numbers are multiplied together, the product is the same regardless of the order in which the numbers are multiplied. Sets of numbers in the real number system reals a real number is either a rational number or an irrational number. Addition Properties Commutative Property of Addition You can add numbers in any order. Unkempt Russo One 60 Definite And Indefinite Articles Spanish Worksheet... Qualitative Vs Quantitative Worksheet Answers. Worksheet 1 properties of real numbers instructions. Creepster Yanone Kaffeesatz 2000+ Worksheets available here and free to be downloaded! In general, all the arithmetic operations can be performed on these numbers and they can be represented in the number line, also. Hello Math Teachers! Holt Algebra 1.6 Properties of Real Numbers Worksheet (DOC & PDF) by . Baloo Paaji Addition. Look at the top of your web browser. Properties of Real Numbers When analyzing data or solving problems with real numbers, it can be helpful to understand the properties of real numbers. is a real number. •Ex: x = 5, then y = x + 6 is the same as y = 5 + 6. 2 4 7 0 11 3 rationals a rational number is any number that can be put in the form p q where p and q are integers and 0q. Real Numbers Worksheets and Quizzes Real Numbers Properties of Real Numbers What are Real Numbers? 13 1-4 Online Activities - Properties of Real Numbers. Properties of real numbers worksheet pdf. . Chewy Decimal place value worksheets. Jolly Lodger Fill in the missing numbers and find what property is used. Properties Of Atoms And The Periodic Table Workshe... Naming Other Organic Compounds Worksheet Answers, Fragments And Run On Sentences Worksheet Pdf, Qualitative Vs Quantitative Observations Worksheet, The Fall Of The House Of Usher Worksheet Answers, Genetics Practice Problems Worksheet Answers Pdf, Printable Three Branches Of Government Worksheet Pdf, Newton S Laws Of Motion Review Worksheet Answers, 7 2 Cell Structure Worksheet Answers Biology, Chemistry Temperature Conversion Worksheet. 
The sets of rational and irrational numbers together make up the set of real numbers.As we saw with integers, the real numbers can be divided into three subsets: negative real numbers, zero, and positive real numbers. Check my answers In the number system, real numbers are the combination of irrational and rational numbers. If a number is rational, then it must be a whole number. 1) associative 2) additive identity Keystone Review { Properties of Real Numbers Name: Date: 1. This quiz and worksheet will gauge your understanding of the properties of real numbers. Properties of Real Numbers Worksheets. 12 5 1 8 3 4 62713 irrationals an irrational number is a nonrepeating nonterminating decimal. Skills Worksheet Active Reading Section 1 Scientif... Six Types Of Chemical Reactions Worksheet, Simple And Compound Interest Worksheet Answers. Name_____ _____ Date_____ Properties of Real Numbers – Practice A Match each expression with one of the properties shown. Covers the following skills: Compare real numbers; locate real numbers on a number line. 16 11. Zero Property C. Commutative property D. Identity property Question 6 Which property of addition is used in the following? Numbers. If you ever need some worksheets to improve your children\\'s skills, download them from here. These imaginary numbers are typically used to describe complex numbers. 11 Fredericka the Great View Homework Help - Algebra 1 Honors Worksheet _1_Properties of Real Numbers _ Order of Operations_.pdf from ENGINEERIN 101 at Young Men's Preparatory Academy. ID: 1142029 Language: English School subject: Math Grade/level: Algebra1 Age: 12+ Main content: Properties of real numbers Other contents: Add to my workbooks (3) Download file pdf Embed in my website or blog Add to Google Classroom Cut and paste or write equations in their correct position in the table.Worksheet 2: Matching. FALSE means that there is at least one set of real numbers that makes it false. Shadows Into Light Two 70 Dancing Script Freckle Face What is the difference between commutative and associative property? The product of 1 a, a6=0, and its reciprocal is A. Additive Identity The sum of any number and is equal to the number. 12 Identify and apply the properties of real numbers closure commutative associative distributive identity inverse 1 which property is illustrated by the equation ax ay a x y. Satisfy Bangers Properties Real Numbers Addition and Multiplication . Real Numbers Worksheets: Operations with Real Numbers Worksheets Quizzes: Integers and Real Numbers Quiz Classifying Numbers Real Numbers and Integers Quiz Identifying Real and Imaginary Numbers Quiz Math Quizzes Integes. For any number , the sum of and is . 24 a. 1-4 Exit Quiz - Properties of Real Numbers. If you see a message asking for permission to access the microphone, please allow. Coming Soon There are specific instructions for each partner to not just \"grade\" each other's work and give the actual What do you want to do? 28 You can easily perform all arithmetic operations on these numbers and can represent them on the number line. (You add the part in parenthesis first.) Escolar Addends are grouped with parenthesis. Integers worksheet – Printable PDFs. Hello Math Teachers! Here is your free content for this lesson! a + 0 = a 6 + 0 = 6. a × 1 = a 6 × 1 = 6 Email my answers to my teacher, Font: Henny Penny This is an activity where two students work independently to add and subtract real numbers, then collaborate to check their answers. $1.95. 
Annie Use Your Telescope Focus on understanding solving equations as a process of reasoning and explaining the reasoning. d. b. Homework. Addition properties worksheets include special focus in each property of addition. If false, explain why. Kranky 12. THE REAL NUMBER SYSTEM 5 1.THE FIELD PROPERTIES. 1 C. 1 a2 D. 0 13. When two numbers are multiplied together the product is the same regardless of the order in which the numbers are multiplied. Includes key in PDF format.I use this works 1-4 Guide Notes SE - Properties of Real Numbers. Ubuntu We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, … Orbitron 9 Hello Math Teachers! Focus on understanding solving equations as a process of reasoning and explaining the reasoning. A. ab = ba B. a(bc) = (ab)c C. a(b+c) = ab+ac D. a1 = a 2. In Algebra 2 these are of the up-most importance because these properties are not only essential pieces to knowing what to do IN a problem, but they are also a lot of times listed in the Directions of the problem. Real numbers are simply the combination of rational and irrational numbers, in the number system. What is the multiplicative inverse of 1 5? Given any number n, we know that n is either rational or irrational. Classifying Real Numbers Worksheet. 8th Grade Math Worksheets and Answer key, Study Guides. 35 problems, 2-sided worksheet, on identifying Properties of Real Numbers using a fun puzzle and stating reasons for an Algebraic Proof. Terminating And Repeating Decimals Worksheet 8th G... Dna Mutations Practice Worksheet Answer Key Pdf, Secret Of Photo 51 Video Worksheet Answer Key, Thomas Paine Common Sense Worksheet Answers, 4th Grade Photosynthesis Diagram Worksheet. PDF printable integers math worksheets for children in: Pre-K, Kindergarten, 1 st grade, 2 nd grade, 3 rd grade, 4 th grade, 5 th grade, 6 th grade and 7 th grade. Lobster Every year a few more properties are added to the list to master. False means that there is at least one set of real numbers that makes it false. Ribeye Marrow Explore some of them for free! Real Numbers are closed (the result is also a real number) under addition and multiplication: Closure example. A sodium chlorine. Bubblegum Sans 18 1-4 Assignment - Properties of Real Numbers. Algebra 1 Honors Practice Identifying Properties of Real Numbers Identify the property shown. 3 5 8 or 5 3 8 b. Math Worksheet Properties Of Real Numbers. When one group believes it is competent check their work using random questions. 40 + = B. Multiplicative Identity The product of any number and is equal to the number… The LibreTexts libraries are Powered by MindTouch ® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. Kindergarten shapes worksheets on identifying simple shapes keywords. Architects Daughter Comic Neue Properties of real numbers worksheet pdf. When two numbers are added, the sum is the same regardless of the order in which the numbers are added. 22 Fredoka One English vocabulary resources online The properties of real numbers are. IDENTITY PROPERTIES A. Decide whether each statement is TRUE or FALSE. 
Classifying Numbers In The Real Number System Graphic Organizer And Activity Real Number System Real Numbers Math School, Properties Of Real Numbers Handy Reference Poster Gra Real Numbers Solving Equations Algebra, Properties Worksheets Properties Of Mathematics Worksheets Mathematics Worksheets Math Addition Worksheets Math Worksheets, Real Numbers Math Number Sense Real Numbers Number Worksheets, Properties Of Real Numbers Sort Task Cards Real Numbers Task Cards Evaluating Expressions, Properties Of Integers With Worked Solutions Examples Videos Math Properties Associative Property Commutative Property, Properties Worksheets Free Math Worksheets Teaching Math Math Properties, Real Number Properties Real Number System Real Numbers Number System Math, Properties Of Real Numbers Foldable For Interactive Notebooks Math Properties Real Numbers Algebra Interactive Notebooks, Identifying Number Sets Worksheets Scientific Notation Word Problems Number Worksheets Algebra, Properties Worksheets Finding Identity Property Of Multiplication Worksheet Properties Of Multiplication Distributive Property Multiplication Worksheets, Properties Of Real Numbers Algebraic Expressions Distance Learning Algebraic Expressions Real Numbers Real Number System, Following Directions Real Numbers Math Vocabulary Real Numbers Activity Real Numbers Real Number System, Properties Worksheets Properties Of Mathematics Worksheets Mathematics Worksheets Math Properties Properties Of Multiplication, Classifying Real Numbers Coloring Activity Real Numbers Color Activities Common Core Math Middle School, Properties Of Real Numbers Matching Activity Real Numbers Real Numbers Activity Creative Math, Determining Distributive Property Worksheet Distributive Property Worksheets Number Worksheets. Associative Property Of Addition Math For First Graders Math Properties Associative Property. To link to this page, copy the following code to your site: More … Properties of Real Numbers - Word Docs & PowerPoints 1-1 Assignment - Properties of Real Numbers 1-1 Bellwork - Properties of real numbers 1-1 Exit Quiz - Properties of Real Numbers 1-1 Guided Notes SE - Properties of Real Numbers 1-1 Guided Notes TE - Properties of Real […] Covered By Your Grace Properties of Real Numbers identity property of addition_Adding 0 to a number leaves it unchanged identity property of multiplication_Multiplying a number by 1 leaves it unchanged multiplication property of 0_Multiplying a number by 0 gives 0 additive Inverse & definition of opposites_Adding a number to its opposite gives 0 o Every number has an opposite Closure property. Closure property. Includes key in PDF format.I use this works To make the task of counting easier, addition came about. Properties of the Real Numbers The following are the properties of addition and multiplication if x, y, and z are real numbers: Addition Multiplication Commutative x y=y x x⋅y=y⋅x Associative x y z=x y z x⋅y ⋅z=x⋅ y⋅z Identity x 0=x x⋅1=x Inverse There is a unique number −x such that x −x =0 If x≠0, there is a unique number … The pdf exercises best suit students of grade 1 through grade 7. The Real Number System - Displaying top 8 worksheets found for this concept.. They come in many forms, most commonly associated with children's school work assignments, tax forms, and accounting or other business environments. Decide whether each statement is true or false. Which property of real numbers is illustrated by the equation p 3+ p 3 = 0? 
We have thousands of printable worksheets such as Properties Of Real Numbers Worksheet With Answers Pdf/page/2 that … 20 Pacifico TRUE means that the statement is true for all real numbers. . 1-4 Guide Notes SE - Properties of Real Numbers. Ordering Real Numbers Worksheet 8th Grade Pdf. Atomic Structure Worksheet Teaching Chemistry Chemistry Worksheets Chemistry ... Answer key for chemistry matter 1. 4 + 5 = 9 (whole number) 8 + 4 = 12 (whole number) 90 + 0 = 90 (whole number) It is clear from the above examples that sum of any two whole numbers results in whole number. Additive inverse and identity worksheets included. 1-4 Exit Quiz - Properties of Real Numbers. Aldrich Grand Hotel Find the multiplicative inverse of each number. FALSE means that there is at least one set of real numbers that makes it false. 1-4 Bell Work - Properties of Real Numbers. 8 Real numbers. Writing electron configuration worksheet answer key. Some of the worksheets for this concept are Sets of numbers in the real number system, Components of the real number system, 6th number grade system, Sets of real numbers date period, Real numbers precalculus, Real numbers, Real numbers and number operations, Introduction to 1 real numbers and algebraic expressions. Holt Algebra 2 1-2 Properties of Real Numbers For all real numbers a and b, WORDS Associative Property The sum or product of three or more real numbers is the same regardless of the way the numbers are grouped. Determine which properties of real numbers that is applied in each statement in exercise 13 30. a×b is real 6 × 2 = 12 is real . Rancho Reenie Beanie Take any two whole numbers and add them. Pinyon Script Identifying equivalent algebraic expressions: Worksheet 8.1 Name ……………………………… Date ……………………………… Score Some of the worksheets displayed are practice 8 1 give the iupac name of each of the following bcpldtpbc note 201306 acc j naming organic c... Label them with their charge. Identify the square root of a perfect square to 400 or, if it is not a perfect square root, locate it as an irrational number between two consecutive positive integers. Schoolbell Amatic SC Remember that the real numbers are made up of all the rational and irrational numbers. . Closure Property of Multiplication The product of two real numbers is a real number. 35 problems, 2-sided worksheet, on identifying Properties of Real Numbers using a fun puzzle and stating reasons for an Algebraic Proof. A worksheet, in the word's original meaning, is a sheet of paper on which one performs work. Free Preschool Worksheets Color By Number Numbers 1 10 Pre Writing Worksheets Line T In 2020 Free Preschool Printables Shapes Worksheets Tracing Worksheets Preschool Shape is not fixed in space and recognise shapes in different orientations e understand the difference … t Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 1 Name_____ Sets of Real Numbers … •If a = b, then b may be substituted for a in any expression containing a. 10 ©W P2p0 s1S2 g 5Keu6t 2aG ESBoPfltew VaermeP uL TL vCC. 80 1 B. For example, the reciprocal of 5 is$\\frac{1}{5}\\$, and the oppostie number of 5 is -5. Which equation illustrates the multiplicative inverse property? Adding zero leaves the real number unchanged, likewise for multiplying by 1: Identity example. At the same time, the imaginary numbers are the un-real numbers, which cannot be expressed in the number line and is commonly used to represent a complex number. 
Focus on understanding solving equations as a process of reasoning and explaining the reasoning. This worksheet is practice for the students on how to use the Commutative, Associative, and Distributive Property. e 3 KAUl MlN erJi Hg 0hPt5sc Gr ae 2s Deirfv NeEd z.7 w qMua5d2e w Jw ViGtqhO qI3nvf ti hnziYt3eh FA 2l ug BeTb Wr0ag F1I. Fontdiner Swanky You may even think of it as “common sense” math because no complex analysis is really required. ID: 1273662 Language: English School subject: Math Grade/level: 7-9 Age: 11-15 Main content: Properties of Real Numbers Other contents: Properties of Real Numbers Add to my workbooks (1) Download file pdf Embed in my website or blog Add to Google Classroom 35 problems, 2-sided worksheet, on identifying Properties of Real Numbers using a fun puzzle and stating reasons for an Algebraic Proof. Real Numbers. Cherry Cream Soda 35 problems, 2-sided worksheet, on identifying Properties of Real Numbers using a fun puzzle and stating reasons for an Algebraic Proof. PROPERTIES OF REAL NUMBERS Let , , and be any real numbers 1. Mountains of Christmas 14 Addition. . Luckiest Guy Lobster Two a. Atomic structure with answer. 1 associative 2 commutative 3 distributive 4 identity 2 the statement 2 0 2 is an example of the use of which property of real. Real numbers can be pictured as points on a line called areal number line. The set of counting numbers was formed. Oswald Explore some of them for free! (3 + 9) + 8 = 3 + (9 + 8) b.14 • 1 = 14 SOLUTION a.Associative property of addition b.Identity property of multiplication. ID: 1066822 Language: English School subject: Math Grade/level: 7-12 Age: 12-18 Main content: Real Numbers Other contents: Add to my workbooks (2) Download file pdf Embed in my website or blog Add to Google Classroom You could orally count along with your child or perhaps your students, you could possibly introduce numbers and … All integers are rational. ...accomplish this. 16 Lesson Proper: Recall how the set of real numbers was formed and how the operations are performed. . Ionic Bonds Worksheets Answer Key Chemistry Worksheets... C nh 2n 1. Other Properties. What Are the Properties of Real Numbers? Open Sans SWBAT: identify and apply the commutative, associative, and distributive properties to simplify expressions 4 Algebra Regents Questions 1) The statement is an example of the use of which property of real numbers? Find the multiplicative inverse of each number. In general, all the arithmetic operations can be performed on these numbers and they can be represented in the number line, also. Decide whether each statement is TRUE or FALSE. Here we have discussed the critical properties of real numbers which help solve algebraic problems. Substitution Property of Equality •If numbers are equal, then substituting one in for the another does not change the equality of the equation. Section P.2 Properties of Real Numbers 17 Properties of Real Numbers Let a, b, and c represent real numbers. Kalam Arial a+b is real 2 + 3 = 5 is real. Estimating percent worksheets. Includes key in PDF format.I use this works The or additive inverse, of any number a is ºa. Exo 2 . The quiz will also assess your comprehension of concepts like classification and complex equations. Addition Properties Worksheets . Writing and evaluating expressions worksheet Rock Salt When two numbers are added the sum is the same regardless of the order in which the numbers are added. Shape tracing worksheet trace the shapes. Size: Basic number properties commutative property a. 
Real numbers 2) Put a check mark for each set that the number is a part of: Whole Numbers Integers Rational Numbers Irrational Numbers Real Numbers -7 ¾ 2 5 0.398 3) True or false? NUMBERS Worksheet 1 Properties of Real Numbers Instructions: Assume that a;b; and c are real numbers. 3 + 5 = 8 or 5 + 3 = 8 b. Multiplication. Indie Flower Limiting Reactant Worksheet Stoichiometry 6 Answer... Scientific Method Review Worksheet Fill In The Blank. Thus, is called the additive identity. to any real number, the sum is the number itself. r squared creation . For a real number, it reverses its sign: the opposite to a positive number is negative, and the opposite to a negative number is positive. Partner A has Partner B's worksheet answers, and vice versa. What is the general formula for a noncyclic alkane c h. Organic Chemistry Nomenclature Worksheet Week News Softwares Includes ... properties of real numbers worksheet with answers pdf, Classification Of Matter Worksheet Chemistry Answer Key, Writing Electron Configuration Worksheet Answer Key, Elements Compounds And Mixtures Worksheet Grade 8 Answer Key, Ionic Compound Formula Writing Worksheet Answers, Formulas And Nomenclature Binary Ionic Compounds Worksheet Answers."
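Several of the properties named in these worksheet descriptions (closure, commutative, associative, distributive, identity) can be checked numerically. The snippet below is a minimal illustrative sketch in Java; the class name and the sample values 4, 5 and 2 are arbitrary choices, not taken from any of the worksheets above.

```
// Quick numeric check of basic properties of real numbers
public class RealNumberProperties {
    public static void main(String[] args) {
        double a = 4, b = 5, c = 2;

        System.out.println("Closure:      a + b = " + (a + b));                // 9.0, again a real number
        System.out.println("Commutative:  " + (a + b == b + a));               // true
        System.out.println("Associative:  " + ((a + b) + c == a + (b + c)));   // true
        System.out.println("Distributive: " + (a * (b + c) == a * b + a * c)); // true
        System.out.println("Identity:     " + (a + 0 == a && a * 1 == a));     // true
    }
}
```

With these integer-valued doubles the comparisons are exact; with arbitrary floating-point values, rounding can make the associative and distributive checks fail even though the properties hold for real numbers.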
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83425283,"math_prob":0.94044787,"size":25688,"snap":"2023-40-2023-50","text_gpt3_token_len":5591,"char_repetition_ratio":0.23294657,"word_repetition_ratio":0.16390291,"special_character_ratio":0.2120056,"punctuation_ratio":0.11531878,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.98634404,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T12:53:18Z\",\"WARC-Record-ID\":\"<urn:uuid:7ca9d19f-2d11-4f8b-aebd-3460a151c08f>\",\"Content-Length\":\"58662\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57651c78-deb1-45b0-9d4e-182b6fac1d61>\",\"WARC-Concurrent-To\":\"<urn:uuid:5cdafcd6-ee03-4e91-8591-c4e7e4f9943c>\",\"WARC-IP-Address\":\"160.153.0.24\",\"WARC-Target-URI\":\"http://basesloadedumpiring.com/nfvk5yb/237019-properties-of-real-numbers-worksheet-pdf\",\"WARC-Payload-Digest\":\"sha1:SWZTZW7BRQ2MGMXY5XH3HSMROY7PUWZX\",\"WARC-Block-Digest\":\"sha1:O47NB32DMIIVAGDYHI2K6K23M65DPFTB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511369.62_warc_CC-MAIN-20231004120203-20231004150203-00293.warc.gz\"}"} |
https://www.lation.org/revelation/archives/date/2008/05 | [
"May 27, 2008\nHave you ever been so been so?\n\nSir Stephen Treadwell once said “it’s easy to come up with a good idea, but it’s harder not to.”\n\nMay 25, 2008\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\n1+1+1+2+2+2+2+2+1+1+1\n\nMay 23, 2008\n\n### Everybody needs a teacher. People like to talk about it, happens all the time. Just stop and think there’s a reason for it all. Something out of nothing, the only time is now. Real is now, the rest is later. Everyday there’s a way to make things better. Now not never we must work together. All we need is you and we. Caring is the key. Open the door and step into reality. Make consciousness a state of bliss and happiness our greatest gift. Make reason a season that begins without ends. Our time is money and survival our profit. Live the life of our dreams with all of our means.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"May 22, 2008",
null,
"May 16, 2008",
null,
"Vicks Plant, the plant of a raver.",
null,
"",
null,
"",
null,
"May 13, 2008\nbackwards can be used to obvious effect.",
null,
"May 11, 2008\n(nooone)\n\nI heard that The Big Bads inspired Eminem to do many record office skits. Detroit. Unh."
]
| [
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/thedeadhippies.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/mel005.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/mel006.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/mel007.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/mel008.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/mel009.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/03.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/vicks.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/friends.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/living.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/future.jpg",
null,
"http://www.lation.org/revelation/wp-content/uploads/2008/05/sticky.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7382099,"math_prob":0.9984078,"size":1208,"snap":"2021-31-2021-39","text_gpt3_token_len":563,"char_repetition_ratio":0.32475084,"word_repetition_ratio":0.06818182,"special_character_ratio":0.45943707,"punctuation_ratio":0.07027027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99674284,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-06T02:09:28Z\",\"WARC-Record-ID\":\"<urn:uuid:a54efe75-3a67-4c32-bd5e-368db0ff2576>\",\"Content-Length\":\"31387\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:699f1f18-6d3b-43b4-863c-a612ff5f545a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5380552-c365-4027-b593-c2deb8203d09>\",\"WARC-IP-Address\":\"64.90.51.189\",\"WARC-Target-URI\":\"https://www.lation.org/revelation/archives/date/2008/05\",\"WARC-Payload-Digest\":\"sha1:QHW3G6RMEKOOSNZCLMMP4OOIVZMGHIHT\",\"WARC-Block-Digest\":\"sha1:5B2LXCFH7M2EQTRUGLNA2XBV5O6BFU3N\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152112.54_warc_CC-MAIN-20210806020121-20210806050121-00048.warc.gz\"}"} |
https://help.semmle.com/qldoc/javascript/semmle/javascript/Expr.qll/module.Expr.html | [
"# Module Expr\n\nProvides classes for working with expressions.\n\n## Import path\n\n`import semmle.javascript.Expr`\n\n## Imports\n\n javascript Provides classes for working with JavaScript programs, as well as JSON, YAML and HTML.\n\n## Classes\n\n AddExpr An addition or string-concatenation expression. ArithmeticExpr A binary arithmetic expression using `+`, `-`, `/`, `%` or `**`. ArrayComprehensionExpr An array comprehension expression. ArrayExpr An array literal. ArrowFunctionExpr An arrow function expression. AssignAddExpr A compound add-assign expression. AssignAndExpr A compound bitwise-‘and’-assign expression. AssignDivExpr A compound divide-assign expression. AssignExpExpr A compound exponentiate-assign expression. AssignExpr A simple assignment expression. AssignLShiftExpr A compound left-shift-assign expression. AssignModExpr A compound modulo-assign expression. AssignMulExpr A compound multiply-assign expression. AssignOrExpr A compound bitwise-‘or’-assign expression. AssignRShiftExpr A compound right-shift-assign expression. AssignSubExpr A compound subtract-assign expression. AssignURShiftExpr A compound unsigned-right-shift-assign expression. AssignXOrExpr A compound exclusive-‘or’-assign expression. Assignment An assignment expression, either compound or simple. AwaitExpr An `await` expression. BigIntLiteral A BigInt literal. BinaryExpr An expression with a binary operator. BitAndExpr A bitwise ‘and’ expression. BitNotExpr A bitwise negation expression. BitOrExpr A bitwise ‘or’ expression. BitwiseBinaryExpr A bitwise binary expression, that is, either a bitwise ‘and’, a bitwise ‘or’, or an exclusive ‘or’ expression. BitwiseExpr A bitwise expression using `&`, `|`, `^`, `~`, `<<`, `>>`, or `>>>`. BooleanLiteral A Boolean literal, that is, either `true` or `false`. CallExpr A function call expression. Comparison A comparison expression, that is, either an equality test (`==`, `!=`, `===`, `!==`) or a relational expression (`<`, `<=`, `>=`, `>`). CompoundAssignExpr A compound assign expression. ComprehensionBlock A comprehension block in a comprehension expression. ComprehensionExpr A comprehension expression, that is, either an array comprehension expression or a generator expression. ConditionalExpr A conditional expression. DecExpr A (pre or post) decrement expression. Decoratable A program element to which decorators can be applied, that is, a class, a property or a member definition. Decorator A decorator applied to a class, property or member definition. DeleteExpr A `delete` expression. DivExpr A division expression. DotExpr A dot expression. DynamicImportExpr A dynamic import expression. EqExpr An equality test using `==`. EqualityTest An equality test using `==`, `!=`, `===` or `!==`. ExpExpr An exponentiation expression. Expr An expression. ExprOrType A program element that is either an expression or a type annotation. ForInComprehensionBlock A `for`-`in` comprehension block in a comprehension expression. ForOfComprehensionBlock A `for`-`of` comprehension block in a comprehension expression. FunctionBindExpr A function-bind expression. FunctionExpr A (non-arrow) function expression. FunctionSentExpr A `function.sent` expression. GEExpr A greater-than-or-equal expression. GTExpr A greater-than expression. GeneratorExpr A generator expression. Identifier An identifier. ImmediatelyInvokedFunctionExpr An immediately invoked function expression (IIFE). ImportMetaExpr An `import.meta` expression. InExpr An `in` expression. IncExpr A (pre or post) increment expression. 
IndexExpr An index expression (also known as computed property access). InstanceofExpr An `instanceof` expression. InvokeExpr An invocation expression, that is, either a function call or a `new` expression. LEExpr A less-than-or-equal expression. LShiftExpr A left-shift expression using `<<`. LTExpr A less-than expression. Label A statement or property label, that is, an identifier that does not refer to a variable. LegacyLetExpr An old-style `let` expression of the form `let(vardecls) expr`. Literal A literal. LogAndExpr A logical ‘and’ expression. LogNotExpr A logical negation expression. LogOrExpr A logical ‘or’ expression. LogicalBinaryExpr A short-circuiting logical binary expression, that is, a logical ‘or’ expression, a logical ‘and’ expression, or a nullish-coalescing expression. LogicalExpr A logical expression using `&&`, `||`, or `!`. MethodCallExpr A method call expression. ModExpr A modulo expression. MulExpr A multiplication expression. NEqExpr An inequality test using `!=`. NegExpr An arithmetic negation expression (also known as unary minus). NewExpr A `new` expression. NonStrictEqualityTest A non-strict equality test using `!=` or `==`. NullLiteral A `null` literal. NullishCoalescingExpr A nullish coalescing ‘??’ expression. NumberLiteral A numeric literal. ObjectExpr An object literal, containing zero or more property definitions. OptionalChainRoot INTERNAL: This class should not be used by queries. OptionalUse A call or member access that evaluates to `undefined` if its base operand evaluates to `undefined` or `null`. ParExpr A parenthesized expression. PlusExpr A unary plus expression. PostDecExpr A postfix decrement expression. PostIncExpr A postfix increment expression. PreDecExpr A prefix decrement expression. PreIncExpr A prefix increment expression. PropAccess A property access, that is, either a dot expression of the form `e.f` or an index expression of the form `e[p]`. Property A property definition in an object literal, which may be either a value property, a property getter, or a property setter. PropertyAccessor A property getter or setter in an object literal. PropertyGetter A property getter in an object literal. PropertySetter A property setter in an object literal. RShiftExpr A right-shift expression using `>>`. RegExpLiteral A regular expression literal. RelationalComparison A relational comparison using `<`, `<=`, `>=`, or `>`. SeqExpr A sequence expression (also known as comma expression). ShiftExpr A shift expression. SpreadElement A spread element. SpreadProperty A spread property in an object literal. StrictEqExpr A strict equality test using `===`. StrictEqualityTest A strict equality test using `!==` or `===`. StrictNEqExpr A strict inequality test using `!==`. StringLiteral A string literal, either single-quoted or double-quoted. SubExpr A subtraction expression. ThisExpr A `this` expression. TypeofExpr A `typeof` expression. URShiftExpr An unsigned right-shift expression using `>>>`. UnaryExpr An expression with a unary operator. UpdateExpr An update expression, that is, an increment or decrement expression. ValueProperty A value property definition in an object literal. VoidExpr A `void` expression. XOrExpr An exclusive ‘or’ expression. YieldExpr A `yield` expression."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.51563156,"math_prob":0.9476068,"size":6032,"snap":"2020-24-2020-29","text_gpt3_token_len":1332,"char_repetition_ratio":0.27422032,"word_repetition_ratio":0.016107382,"special_character_ratio":0.1780504,"punctuation_ratio":0.13691129,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9582884,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-15T01:40:37Z\",\"WARC-Record-ID\":\"<urn:uuid:7bca1c0c-bedb-43f6-b08c-d94497f7033f>\",\"Content-Length\":\"30651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4dcdacfd-890c-4b7a-948d-a3b91a1668f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:48fd19d3-8d94-4449-9554-da97becf7f5e>\",\"WARC-IP-Address\":\"104.26.8.225\",\"WARC-Target-URI\":\"https://help.semmle.com/qldoc/javascript/semmle/javascript/Expr.qll/module.Expr.html\",\"WARC-Payload-Digest\":\"sha1:F2IVYNGGAGJGNQ73TSGRPQNZFN5D7CC2\",\"WARC-Block-Digest\":\"sha1:6X5CTIKDMZSHUDQSVXS5SNIITQIVW4XF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657154789.95_warc_CC-MAIN-20200715003838-20200715033838-00487.warc.gz\"}"} |
https://www.worldofitech.com/java-programming-concurrenthashmap/ | [
"# Java ConcurrentHashMap",
null,
"Contents\n\n## Java ConcurrentHashMap\n\nIn this tutorial, we will learn about the Java ConcurrentHashMap class and its tasks with the help of examples.\n\nThe ConcurrentHashMap class of the Java collections framework gives a thread-safe map. That is, various strings can get to the map immediately without influencing the consistency of sections on a map.\n\nIt implements the ConcurrentMap interface.\n\n## Make a ConcurrentHashMap\n\nTo make a simultaneous hashmap, we should import the java.util.concurrent.ConcurrentHashMap package first. When we import the package, here is the way we can make simultaneous hashmaps in Java.\n\n``````// ConcurrentHashMap with capacity 8 and load factor 0.6\nConcurrentHashMap<Key, Value> numbers = new ConcurrentHashMap<>(8, 0.6f);``````\n\nIn the above code, we have created a concurrent hashmap named numbers.\n\nHere,\n\n• Key – a unique identifier used to associate each element (value) in a map\n• Value – elements associated by keys in a map\n\nNotice the part new ConcurrentHashMap<>(8, 0.6). Here, the first parameter is capacity and the second parameter is loadFactor\n\n• capacity – The capacity of this map is 8. Meaning, it can store 8 entries.\n• loadFactor – The load factor of this map is 0.6. This means, whenever our hash table is filled by 60%, the entries are moved to a new hash table of double the size of the original hash table.\n\nIt’s possible to create a concurrent hashmap without defining its capacity and load factor. For example,\n\n``````// ConcurrentHashMap with default capacity and load factor\nConcurrentHashMap<Key, Value> numbers1 = new ConcurrentHashMap<>();``````\n\nBy default,\n\n• the capacity of the map will be 16\n• the load factor will be 0.75\n\n## Creating ConcurrentHashMap from Other Maps\n\nHere is how we can create a concurrent hashmap containing all the elements of other maps.\n\n``````import java.util.concurrent.ConcurrentHashMap;\nimport java.util.HashMap;\n\nclass Main {\npublic static void main(String[] args) {\n\n// Creating a hashmap of even numbers\nHashMap<String, Integer> evenNumbers = new HashMap<>();\nevenNumbers.put(\"Two\", 2);\nevenNumbers.put(\"Four\", 4);\nSystem.out.println(\"HashMap: \" + evenNumbers);\n\n// Creating a concurrent hashmap from other map\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>(evenNumbers);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n}\n}``````\n\nOutput\n\n``````HashMap: {Four=4, Two=2}\nConcurrentHashMap: {Four=4, Two=2, Three=3}``````\n\n## Methods of ConcurrentHashMap\n\nThe ConcurrentHashMap class provides methods that allow us to perform various operations on the map.\n\n## Insert Elements to ConcurrentHashMap\n\n• put() – inserts the specified key/value mapping to the map\n• putAll() – inserts all the entries from specified map to this map\n• putIfAbsent() – inserts the specified key/value mapping to the map if the specified key is not present in the map]\n\nFor example,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\n// Creating ConcurrentHashMap of even numbers\nConcurrentHashMap<String, Integer> evenNumbers = new ConcurrentHashMap<>();\n\n// Using put()\nevenNumbers.put(\"Two\", 2);\nevenNumbers.put(\"Four\", 4);\n\n// Using putIfAbsent()\nevenNumbers.putIfAbsent(\"Six\", 6);\nSystem.out.println(\"ConcurrentHashMap of even numbers: \" + evenNumbers);\n\n//Creating ConcurrentHashMap of numbers\nConcurrentHashMap<String, Integer> numbers = new 
ConcurrentHashMap<>();\nnumbers.put(\"One\", 1);\n\n// Using putAll()\nnumbers.putAll(evenNumbers);\nSystem.out.println(\"ConcurrentHashMap of numbers: \" + numbers);\n}\n}``````\n\nOutput\n\n``````ConcurrentHashMap of even numbers: {Six=6, Four=4, Two=2}\nConcurrentHashMap of numbers: {Six=6, One=1, Four=-4, Two=2}\n``````\n\n## Access ConcurrentHashMap Elements\n\n### 1. Using entrySet(), keySet() and values()\n\n• entrySet() – returns a set of all the key/value mapping of the map\n• keySet() – returns a set of all the keys of the map\n• values() – returns a set of all the values of the map\n\nFor example,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>();\n\nnumbers.put(\"One\", 1);\nnumbers.put(\"Two\", 2);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n\n// Using entrySet()\nSystem.out.println(\"Key/Value mappings: \" + numbers.entrySet());\n\n// Using keySet()\nSystem.out.println(\"Keys: \" + numbers.keySet());\n\n// Using values()\nSystem.out.println(\"Values: \" + numbers.values());\n}\n}``````\n\nOutput\n\n``````ConcurrentHashMap: {One=1, Two=2, Three=3}\nKey/Value mappings: [One=1, Two=2, Three=3]\nKeys: [One, Two, Three]\nValues: [1, 2, 3]``````\n\n#### 2. Using get() and getOrDefault()\n\n• get() – Returns the worth related to the predetermined key. Returns null if the key isn’t found.\n• getOrDefault() – Returns the worth related to the predetermined key. Returns the predefined default value if the key isn’t found.\n\nFor instance,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\n\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>();\nnumbers.put(\"One\", 1);\nnumbers.put(\"Two\", 2);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n\n// Using get()\nint value1 = numbers.get(\"Three\");\nSystem.out.println(\"Using get(): \" + value1);\n\n// Using getOrDefault()\nint value2 = numbers.getOrDefault(\"Five\", 5);\nSystem.out.println(\"Using getOrDefault(): \" + value2);\n}\n}``````\n\nOutput\n\n``````ConcurrentHashMap: {One=1, Two=2, Three=3}\nUsing get(): 3\nUsing getOrDefault(): 5``````\n\n## Eliminate ConcurrentHashMap Elements\n\n• remove(key) – returns and eliminates the passage related to the predetermined key from the map\n• remove(key, value) – eliminates the passage from the map just if the predefined key planned to the predetermined worth and return a boolean value\n\nFor instance,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\n\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>();\nnumbers.put(\"One\", 1);\nnumbers.put(\"Two\", 2);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n\n// remove method with single parameter\nint value = numbers.remove(\"Two\");\nSystem.out.println(\"Removed value: \" + value);\n\n// remove method with two parameters\nboolean result = numbers.remove(\"Three\", 3);\nSystem.out.println(\"Is the entry {Three=3} removed? \" + result);\n\nSystem.out.println(\"Updated ConcurrentHashMap: \" + numbers);\n}\n}\n``````\n\nOutput\n\n``````ConcurrentHashMap: {One=1, Two=2, Three=3}\nRemoved value: 2\nIs the entry {Three=3} removed? 
True\nUpdated ConcurrentHashMap: {One=1}``````\n\n## Bulk ConcurrentHashMap Operations\n\nThe ConcurrentHashMap class provides different bulk operations that can be applied safely to concurrent maps.\n\n### 1. forEach() Method\n\nThe forEach() method iterates over our entries and executes the specified function.\n\nIt includes two parameters.\n\n• parallelismThreshold – It specifies that after how many elements operations in a map are executed in parallel.\n• transformer – This will transform the data before the data is passed to the specified function.\n\nFor example,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\n\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>();\nnumbers.put(\"One\", 1);\nnumbers.put(\"Two\", 2);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n\n// forEach() without transformer function\nnumbers.forEach(4, (k, v) -> System.out.println(\"key: \" + k + \" value: \" + v));\n\n// forEach() with transformer function\nSystem.out.print(\"Values are \");\nnumbers.forEach(4, (k, v) -> v, (v) -> System.out.print(v + \", \"));\n}\n}``````\n\nOutput\n\n``````ConcurrentHashMap: {One = 1, Two = 2, Three = 3}\nkey: One value: 1\nkey: Two value: 2\nkey: Three value: 3\nValues are 1, 2, 3,``````\n\nIn the above program, we have used equal limit 4. This implies if the map contains 4 sections, the activity will be executed in parallel.\n\n#### Variation of forEach() Method\n\n• forEachEntry() – executes the specified function for each entry\n• forEachKey() – executes the specified function for each key\n• forEachValue() – executes the specified function for each value\n\n## 2. search() Method\n\nThe search() technique look through the map dependent on the predefined capacity and returns the coordinated passage.\n\nHere, the predetermined capacity figures out what section is to be looked.\n\nIt likewise incorporates a discretionary boundary parallelThreshold. The equal limit determines that after the number of components in the guide the activity is executed in equal.\n\nFor instance,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\n\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>();\nnumbers.put(\"One\", 1);\nnumbers.put(\"Two\", 2);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n\n// Using search()\nString key = numbers.search(4, (k, v) -> {return v == 3 ? k: null;});\nSystem.out.println(\"Searched value: \" + key);\n\n}\n}``````\n\nOutput\n\n``````ConcurrentHashMap: {One=1, Two=2, Three=3}\nSearched value: Three``````\n##### Variants of search() Method\n• searchEntries() – search function is applied to key/value mappings\n• searchKeys() – search function is only applied to the keys\n• searchValues() – search function is only applied to the values\n\n## 3. Reduce() Method\n\nThe reduce() technique aggregates (gather together) every section in a guide. 
This can be utilized when we need all the sections to play out a typical assignment, such as adding all the values of a map.\n\nIt incorporates two parameters.\n\n• parallelismThreshold – It indicates that after the number of components, activities in a guide are executed in parallel.\n• transformer – This will change the data before the data is passed to the predetermined capacity.\n\nFor instance,\n\n``````import java.util.concurrent.ConcurrentHashMap;\n\nclass Main {\npublic static void main(String[] args) {\n\nConcurrentHashMap<String, Integer> numbers = new ConcurrentHashMap<>();\nnumbers.put(\"One\", 1);\nnumbers.put(\"Two\", 2);\nnumbers.put(\"Three\", 3);\nSystem.out.println(\"ConcurrentHashMap: \" + numbers);\n\n// Using search()\nint sum = numbers.reduce(4, (k, v) -> v, (v1, v2) -> v1 + v2);\nSystem.out.println(\"Sum of all values: \" + sum);\n\n}\n}``````\n\nOutput\n\n``````ConcurrentHashMap: {One=1, Two=2, Three=3}\nSum of all values: 6``````\n\nIn the above program, notice the statement\n\n``numbers.reduce(4, (k, v) -> v, (v1, v2) -> v1+v2);``\n\nHere,\n\n• 4 is a parallel threshold\n• (k, v) -> v is a transformer function. It transfers the key/value mappings into values only.\n• (v1, v2) -> v1+v2 is a reducer function. It gathers together all the values and adds all values.\n\nVariants of reduce() Method\n\n• reduceEntries() – returns the result of gathering all the entries using the specified reducer function\n• reduceKeys() – returns the result of gathering all the keys using the specified reducer function\n• reduceValues() – returns the result of gathering all the values using the specified reducer function\n\n## ConcurrentHashMap vs HashMap\n\nHere are some of the differences between ConcurrentHashMap and HashMap,\n\n• ConcurrentHashMap is a thread-safe collection. That is, multiple threads can access and modify it at the same time.\n• ConcurrentHashMap provides methods for bulk operations like forEach(), search() and reduce().\n\n## Why ConcurrentHashMap?\n\n• The ConcurrentHashMap class allows numerous strings to get to its entrances simultaneously.\n• By default, the simultaneous hashmap is separated into 16 segments. This is the motivation behind why 16 strings are permitted to simultaneously change the guide simultaneously. Nonetheless, quite a few strings can get to the guide at a time.\n• The putIfAbsent() strategy won’t abrogate the passage in the guide if the predefined key as of now exists.\n• It gives its own synchronization.\n\nThanks for reading! We hope you found this tutorial helpful and we would love to hear your feedback in the Comments section below. And show us what you’ve learned by sharing your photos and creative projects with us.",
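The thread safety described above can be exercised directly. The following is a minimal, self-contained sketch; the class name ConcurrentAccessDemo, the thread count and the iteration count are illustrative choices, not part of the original tutorial. Several threads update the same ConcurrentHashMap through compute(), which applies the remapping function atomically per key, so the final counts come out consistent without any external locking.

```
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ConcurrentAccessDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // 4 tasks, each incrementing the same two keys 1000 times
        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    // compute() applies the remapping function atomically for its key
                    counts.compute("even", (k, v) -> v == null ? 1 : v + 1);
                    counts.compute("odd", (k, v) -> v == null ? 1 : v + 1);
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // Both counters should end at 4000 (4 tasks x 1000 increments each)
        System.out.println("Final counts: " + counts);
    }
}
```

The atomic compute() (or merge()) call is what keeps the read-modify-write step safe; calling counts.put(k, counts.get(k) + 1) from several threads would not be reliable even on a ConcurrentHashMap, because the separate get/put pair is not atomic.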
null,
"### Java ConcurrentMap Interface",
null,
"### Java Set Interface",
null,
"",
null,
""
]
| [
null,
"https://i0.wp.com/www.worldofitech.com/wp-content/uploads/2020/11/Java-ConcurrentHashMap.png",
null,
"https://i0.wp.com/www.worldofitech.com/wp-content/uploads/2020/11/Java-ConcurrentMap-Interface_1.png",
null,
"https://i0.wp.com/www.worldofitech.com/wp-content/uploads/2020/11/Java-Set-Interface.png",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20970%20250'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6057657,"math_prob":0.94249874,"size":11913,"snap":"2023-40-2023-50","text_gpt3_token_len":2725,"char_repetition_ratio":0.20555882,"word_repetition_ratio":0.18712395,"special_character_ratio":0.24989507,"punctuation_ratio":0.19813085,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98584527,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,3,null,3,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T16:45:24Z\",\"WARC-Record-ID\":\"<urn:uuid:a041288c-714a-4fd8-b77f-7c0888b963e1>\",\"Content-Length\":\"333549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8bb57edc-db95-416a-9404-3c9a12aff1b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ee6dac4-4b7a-4fc2-be53-8fb3620d2371>\",\"WARC-IP-Address\":\"172.67.199.237\",\"WARC-Target-URI\":\"https://www.worldofitech.com/java-programming-concurrenthashmap/\",\"WARC-Payload-Digest\":\"sha1:IIXFVBZKDF4UEGQYH52KOJRP6XDVCV6L\",\"WARC-Block-Digest\":\"sha1:ACMECCJA7U56WGOJMRDHF2KQGHNZ7S2V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511386.54_warc_CC-MAIN-20231004152134-20231004182134-00104.warc.gz\"}"} |
https://www.semanticscholar.org/paper/Dirac's-Cosmology-and-Mach's-Principle-Dicke/a03df322ace791a5fd598c5accc4ce01821defaa | [
"# Dirac's Cosmology and Mach's Principle\n\n```@article{Dicke1961DiracsCA,\ntitle={Dirac's Cosmology and Mach's Principle},\nauthor={Robert H. Dicke},\njournal={Nature},\nyear={1961},\nvolume={192},\npages={440-441}\n}```\n• R. Dicke\n• Published 1 November 1961\n• Physics\n• Nature\nTHE dimensionless gravitational coupling constant with mp the mass of some elementary particle, for definiteness taken as the proton, is such a small number that its significance has long been questioned. Thus Eddington1 considered that all the dimensionless physical constants, including this one, could be evaluated as simple mathematical expressions. Dirac2 considered that such an odd number must be related to other numbers of similar size, characterizing the structure of the universe. However…\n238 Citations\nThe proton half life and the Dirac hypothesis\nMany notable physicists have been fascinated by the ubiquity of large dimensionless numbers formed from the physical parameters controlling the large scale structure of the Universe1–10. For example,\nConstants and cosmology: The nature and origin of fundamental constants in astrophysics and particle physics\nWe ask about the nature and origin of the fundamental constants of astrophysics and particle physics, notably the speed of light c, the gravitational constant G, Planck's constant h, and the\nStoney Scale and Large Number Coincidences\nThe Stoney scale, its characteristics and theoretical tendencies are argued to be consistent with Einstein’s theory of gravitational ether and with the Stochastic Electrodynamic theory of\nCOSMOLOGICAL COINCIDENCES IN THE EXPANDING UNIVERSE\nIn the study of dimensionless combinations of fundamental physical constants and cosmological quantities, it was found that some of them reach enormous values. In addition, the order of magnitude of\nThe Numbers Universe: An Outline of the Dirac/Eddington Numbers as Scaling Factors\nThe large number coincidences that fascinated theorists such as Eddington and Dirac are shown here to be a specific example of a general set of scaling factors defining universes in which fundamental\nThe Fine-Structure Constant: From Eddington’s Time to Our Own\nOf all the fundamental constants none has drawn more interest, seemed more intriguing, and excited more speculation than the fine-structure constant α which is defined as the square of the\nCosmogony and the magnitude of the dimensionless gravitational coupling constant\nSIMPLE arguments involving gravitational fragmentation indicate that for the Universe to contain galaxies and stable nuclear-burning stars, the dimensionless gravitational coupling constant αg(=\nExact cosmological solution with particle creation in JBD theory\n• Physics\n• 1978\nExact solutions are sought by taking the generated particles of spin 1/2 (according to the creation rate of Schäfer and Dehnen ) as matter sources of the Cosmological equations of JBD theory.\nVarying Constants, Gravitation and Cosmology\n• J. Uzan\n• Physics\nLiving reviews in relativity\n• 2011\nThe relations between the constants, the tests of the local position invariance and of the universality of free fall are detailed, and the unification mechanisms and the relation between the variation of different constants are described."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87758136,"math_prob":0.89723563,"size":4007,"snap":"2022-05-2022-21","text_gpt3_token_len":869,"char_repetition_ratio":0.1383962,"word_repetition_ratio":0.003407155,"special_character_ratio":0.19790366,"punctuation_ratio":0.06539075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96224743,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T02:42:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b7ac7df1-f593-47b4-b671-dea5b4e73e11>\",\"Content-Length\":\"247345\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7916961-7325-48ef-a011-b3ec1cffb528>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc32e94b-76ca-433f-bc79-50d8b42b35c7>\",\"WARC-IP-Address\":\"13.32.208.76\",\"WARC-Target-URI\":\"https://www.semanticscholar.org/paper/Dirac's-Cosmology-and-Mach's-Principle-Dicke/a03df322ace791a5fd598c5accc4ce01821defaa\",\"WARC-Payload-Digest\":\"sha1:WKLH35RV55ARN4YDWMWD4GVSPJPNM7RJ\",\"WARC-Block-Digest\":\"sha1:E7NP4D6OSDYKHKUACALG3VILA3D3PMPY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662534773.36_warc_CC-MAIN-20220521014358-20220521044358-00062.warc.gz\"}"} |
https://www.bartleby.com/solution-answer/chapter-17-problem-21ps-chemistry-and-chemical-reactivity-9th-edition/9781133949640/determine-the-volume-in-ml-of-100-m-naoh-that-must-be-added-to-250-ml-of-050-m-ch3co2h-to/fe9fd6d1-a2cd-11e8-9bb5-0ece094302b6 | [
"",
null,
"",
null,
"",
null,
"# Determine the volume (in mL) of 1.00 M NaOH that must be added to 250 mL of 0.50 M CH 3 CO 2 H to produce a buffer with a pH of 4.50.",
null,
"### Chemistry & Chemical Reactivity\n\n9th Edition\nJohn C. Kotz + 3 others\nPublisher: Cengage Learning\nISBN: 9781133949640\n\n#### Solutions\n\nChapter\nSection",
null,
"### Chemistry & Chemical Reactivity\n\n9th Edition\nJohn C. Kotz + 3 others\nPublisher: Cengage Learning\nISBN: 9781133949640\nChapter 17, Problem 21PS\nTextbook Problem\n9 views\n\n## Determine the volume (in mL) of 1.00 M NaOH that must be added to 250 mL of 0.50 M CH3CO2H to produce a buffer with a pH of 4.50.\n\nInterpretation Introduction\n\nInterpretation:\n\nThe volume of 1.00M, NaOH has to be calculated when it is mixed with 0.50M,CH3COOH to get the buffer solution of pH value equals to 4.5.\n\nConcept introduction:\n\nTitration is a quantitative method to determine the quantity of an acid or base in a solution. This method is used to determine the concentration an acid in the solution by titrating it against a base. There are four types of acid-base titrations.\n\n(1) Strong acid-Strong base, in this type of titration a strong acid is titrated against a strong base for example, HCl is titrated against NaOH.\n\n(2) Strong acid-Weak base, in this type of titration a strong acid is titrated against a weak base for example, HCl is titrated against NH4OH.\n\n(3) Weak acid-Strong base, in this type of titration a weak acid is titrated against a strong base for example, CH3COOH is titrated against NaOH.\n\n(4) Weak acid-Weak base, in this type of titration a weak acid is titrated against a weak base for example, CH3COOH is titrated against NH4OH.\n\nFor weak acid-strong base titration the pH value can be calculated at various points before and after equivalence point. The equilibrium established during the titration of CH3COOH with NaOH. The equilibrium can be represented as,\n\nCH3COOH(aq)+NaOH(aq)H2O(l)+CH3COONa(aq)\n\nCalculation of pH at various points is done as follows,\n\n(1) The pH value before the titration can be calculated by using the Ka and its relation with H3O+ ion concentration.\n\nKa=[H3O+](eq)[A](eq)[HA](eq) (1)\n\n(2) The pH calculation just before the equivalence point,\n\nAs the addition of NaOH is done there will be formation of buffer solution CH3COOH/CH3COO. The pH calculation for buffer solution is done by using Henderson-Hesselbalch equation.\n\npH=pKa+log[conjugatebase][acid] (2)\n\nAt the midpoint of the titration, when concentration of acid and its conjugate base are equal pH value at midpoint will be given as;\n\npH=pKa+log[conjugatebase][acid]\n\nSubstitute, [conjugatebase]for[acid].\n\npH=pKa+log[conjugatebase][conjugatebase]=pKa+log(1)=pKa+0=pKa\n\nTherefore, pH value at midpoint is equal to pKa.\n\n(3) The pH calculation the equivalence point.\n\nAt equivalence point all the acid will be neutralized, and there will be only OH ion and CH3COO. The OH will be produced due to the hydrolysis of acetate ion at equivalence point. The hydrolysis equilibrium is represented as,\n\nCH3COO(aq)+H2O(l)OH(aq)+CH3COOH(aq)\n\nBy using the value of Kb for the acetate ion, concentration of OH can be calculated. Thus the value of pH is greater than 7 at equivalence point for the weak acid- strong base titrations.\n\nThe relation between Ka and Kb for weak acid and its conjugate base is given as,\n\nKw=(Ka)(Kb) (3)\n\n(4) The pH calculation after the equivalence point.\n\nAfter the equivalence point there will be excess of OH ion in the solution and there will be very less amount of CH3COO ion. 
The amount of CH3COO produce can be neglected with respect to excess amount of OH.\n\nConcentration of OH after equivalence point will be calculated by using the expression,\n\nconcentration=numberofmolestotalvolumeofsolution (4)\n\n### Explanation of Solution\n\nThe volume of volume of NaOH used is calculated below.\n\nGiven:\n\nRefer to table 16.2 in the textbook for the value of Ka.\n\nThe value of Ka for acetic acid is 1.8×105.\n\nThe pKa value is calculated as follows;\n\npKa=log(Ka)\n\nSubstitute, 1.8×105 for Ka.\n\npKa=log(1.8×105)=4.74\n\nTherefore, pKa value is 4.74.\n\nThe initial concentration of CH3COOH is 0.50molL1.\n\nThe initial concentration of NaOH is 1.00molL1.\n\nThe volume of CH3COOH is 250mL.\n\nConversion of 250mL into L.\n\n(250mL)(1L1000mL)=0.250L\n\nLet the volume of NaOH added xmL.\n\nConversion of xmL into L.\n\n(xmL)(1L1000mL)=0.00xL\n\nThe total volume after the reaction is calculated as,\n\ntotalvolume=volumeofCH3COOH(L) + volume of NaOH(L)\n\ntotalvolume = 0.250(L)+0.00x(L)=(0.250+0.00x)L\n\nTherefore, total volume after reaction is (0.250+0.00x)L.\n\nThe calculation of moles is done by using the expression,\n\nNumberof moles=concentration(molL1)volume(L)\n\nThe ICE table (1) for the reaction between NaOH and CH3COOH is given below,\n\nEquationCH3COOH(aq)+NaOH(aq)H2O(l)+CH3COONa(aq)Initial(mol)0.125(0.00x)0Change(mol)0.00x0.00x0.00xAfterreaction(mol)(0.1250.00x)00.00x\n\nFrom ICE table (1),\n\nAfter the reaction there are only two species present, which are CH3COOH and its conjugate base CH3COONa. There is a formation of buffer takes place.\n\nNumber of moles of acetic acid left after reaction is (0.1250.00x).\n\nNumber of moles of acetate ion produced after the reaction is 0.00xmol.\n\nApproximation, the value of 0.00x is very small on comparison to 0.125 . Therefore, 0.00x can be neglected with respect to 0.125\n\nSo, the Number of moles of acetic acid left after reaction are 0.125mol.\n\nConcentration calculation is done by using the expression,\n\nconcentration = Numberof molestotal volume(molL1)\n\nSubstitute, 0\n\n### Still sussing out bartleby?\n\nCheck out a sample textbook solution.\n\nSee a sample solution\n\n#### The Solution to Your Study Problems\n\nBartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!\n\nGet Started",
null,
""
]
| [
null,
"https://www.bartleby.com/static/search-icon-white.svg",
null,
"https://www.bartleby.com/static/close-grey.svg",
null,
"https://www.bartleby.com/static/solution-list.svg",
null,
"https://www.bartleby.com/isbn_cover_images/9781133949640/9781133949640_largeCoverImage.gif",
null,
"https://www.bartleby.com/isbn_cover_images/9781133949640/9781133949640_largeCoverImage.gif",
null,
"https://www.bartleby.com/static/logo-full-footer.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7737354,"math_prob":0.992472,"size":3925,"snap":"2020-10-2020-16","text_gpt3_token_len":894,"char_repetition_ratio":0.16194849,"word_repetition_ratio":0.15878378,"special_character_ratio":0.18929936,"punctuation_ratio":0.08446456,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974225,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-29T01:00:55Z\",\"WARC-Record-ID\":\"<urn:uuid:8576f66d-2782-4866-928d-cc3811efecde>\",\"Content-Length\":\"757112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42c1a47e-e7ff-4b49-bc6f-74ce050ba9be>\",\"WARC-Concurrent-To\":\"<urn:uuid:59af1067-d5d4-430d-875a-2426eb00fcb4>\",\"WARC-IP-Address\":\"99.84.102.2\",\"WARC-Target-URI\":\"https://www.bartleby.com/solution-answer/chapter-17-problem-21ps-chemistry-and-chemical-reactivity-9th-edition/9781133949640/determine-the-volume-in-ml-of-100-m-naoh-that-must-be-added-to-250-ml-of-050-m-ch3co2h-to/fe9fd6d1-a2cd-11e8-9bb5-0ece094302b6\",\"WARC-Payload-Digest\":\"sha1:LYARQ3IV5NBHPDP6657RCJCV6H7DE5WK\",\"WARC-Block-Digest\":\"sha1:RVZ32Z3264BH5Q3MLWQFSO7SWUO4KHKB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370493121.36_warc_CC-MAIN-20200328225036-20200329015036-00175.warc.gz\"}"} |
https://www.teachoo.com/4345/1251/Ex-6.2--5---Find-intervals-f(x)--2x3---3x2---36x---7/category/Find-intervals-of-increasing-decreasing/ | [
"Find intervals of increasing/decreasing\n\nChapter 6 Class 12 Application of Derivatives\nConcept wise",
null,
"",
null,
"",
null,
"Introducing your new favourite teacher - Teachoo Black, at only ₹83 per month\n\n### Transcript\n\nEx 6.2, 5 Find the intervals in which the function f given by f (𝑥) = 2𝑥3 – 3𝑥2 – 36𝑥 + 7 is (a) strictly increasing (b) strictly decreasingf(𝑥) = 2𝑥3 – 3𝑥2 – 36𝑥 + 7 Calculating f’(𝒙) f’(𝑥) = 6𝑥2 – 6𝑥 – 36 + 0 f’(𝑥) = 6 (𝑥2 – 𝑥 – 6 ) f’(𝑥) = 6(𝑥^2 – 3𝑥 + 2𝑥 – 6) f’(𝑥) = 6(𝑥(𝑥 − 3) + 2 (𝑥 − 3)) f’(𝒙) = 6(𝒙 – 3) (𝒙 + 2) Putting f’(x) = 0 6(𝑥+2)(𝑥 –3)=0 (𝑥+2)(𝑥 –3)=0 So, x = −2 and x = 3 Plotting points on number line Hence, f is strictly increasing in (−∞ ,−𝟐) & (𝟑 ,∞) f is strictly decreasing in (−𝟐, 𝟑)",
null,
""
]
| [
null,
"https://d1avenlh0i1xmr.cloudfront.net/7b0851fa-9e91-4bde-b457-db833863f058/slide9.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/9d3147fe-a75a-44ef-873a-57990bdff02d/slide10.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/8b74fd8a-64bd-4cde-bef2-a5ea9800cd93/slide11.jpg",
null,
"https://www.teachoo.com/static/misc/Davneet_Singh.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7260078,"math_prob":0.99996793,"size":891,"snap":"2022-27-2022-33","text_gpt3_token_len":463,"char_repetition_ratio":0.14205186,"word_repetition_ratio":0.086021505,"special_character_ratio":0.4500561,"punctuation_ratio":0.093596056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99932253,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,5,null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T00:42:15Z\",\"WARC-Record-ID\":\"<urn:uuid:b8462b0d-957b-4b34-a55c-411fdd95c712>\",\"Content-Length\":\"155148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11fba8b6-7df4-4ab0-96fa-f4b057cfbca5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8b7c108-bacd-4c66-9e37-7af6280753fb>\",\"WARC-IP-Address\":\"35.175.60.16\",\"WARC-Target-URI\":\"https://www.teachoo.com/4345/1251/Ex-6.2--5---Find-intervals-f(x)--2x3---3x2---36x---7/category/Find-intervals-of-increasing-decreasing/\",\"WARC-Payload-Digest\":\"sha1:QFWFFOX6UMC62MM7WLBTHO3RDZAIMI27\",\"WARC-Block-Digest\":\"sha1:MSIU66N32EM7WCV3KWMHM7WRNYWH7BTC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104683020.92_warc_CC-MAIN-20220707002618-20220707032618-00141.warc.gz\"}"} |
http://echochamber.me/viewtopic.php?f=11&t=42218 | [
"## ruby, Feynman diagrams, lambdas\n\nA place to discuss the implementation and style of computer programs.\n\nModerators: phlip, Moderators General, Prelates\n\nsdedeo\nPosts: 36\nJoined: Tue Apr 08, 2008 10:52 pm UTC\nContact:\n\n### ruby, Feynman diagrams, lambdas\n\nI am re-writing some code of mine, that computes Feynman(-like) diagrams on a graph. (It is a linked cluster expansion, if you are curious, which is sort of like a Feynman diagram but where the background system has no coupling between points, and the points themselves are discrete -- the underlying system is a crystal, say. Questions on physics quite welcome!)\n\nI learned ruby last year, and I am curious to see if I can actually do the things ruby is meant to help us do. I have working code that does what I describe below, but it is very slow, and I would love to learn from some ruby (or \"FP\") gurus how to take my relationship with execution time \"to the next level.\"\n\nIn the end, it seems to be a question of how to \"pre-compile\" a lambda?\n\nHere is the essential problem. You have a large graph, V, which you can represent as a (symmetric) matrix, V[i,j], telling you which points are connected to each other. It might be, for example, a 10^3 lattice. You also have a (much smaller) \"diagram\", call it D, which you can also represent as a (symmetric) matrix, D[a,b]. A diagram might be, for example, a three-vertex loop:\n\nCode: Select all\n\n`[ 0 1 1 1 0 1 1 1 0 ]`\n\nTo \"apply\" the diagram D to the graph V means to sum up a product of V[i,j] elements in a particular fashion. In the case of the loop above, you sum:\n\nCode: Select all\n\n`V[i,j]*V[j,k]*V[k,i]`\n\nover all values of i, j and k. In other words, D gives the patterns of the indicies in the sum.\n\nI thought a long time about how to do this in the general case. Here was my solution.\n\nFirst, we need a good way to sum over an arbitrary number of indicies (the dimension of the diagram D); you want to be able to pass a block to the center of all the loops. I built the following (m_inner is the dimension of the graph V):\n\nCode: Select all\n\n` def all_sum(n_left,m_inner,block,running=[]) # sums from 0 to m_inner-1, n_left times (pass n, at the top level, # where n is the number of verticies in the diagram D) # (returns an *object* -- a lambda -- that can be executed by call) # at the center of the loop, passes the call structure through (i.e., # on the 3rd iteration of the first loop, 2nd iteration of the second, the block # is passed [2,1] ) if n_left == 1 then # (if there's only one sum to do, do it) lambda { counter = 0 m_inner.times { |i| counter += block.call(running+[i]) } counter } else # (do the lower sum) lambda { counter = 0 m_inner.times { |i| counter += all_sum(n_left-1,m_inner,block,running+[i]).call } counter } end end`\n\nThen, of course, you need to define the block at the center. That's just the product defined by the diagram D. Here's the block I pass:\n\nCode: Select all\n\n` inner_summand = lambda { |v_index| running_prod=1.0 (@n-1).times { |i| (i+1).upto(@n-1) { |j| running_prod *= v.net[v_index[i],v_index[j]]**(@g[i,j]) } } running_prod }`\n\nWhere @n is the dimension of the diagram D (e.g., in the three-loop above, it's 3), @g is the matrix of the diagram D, and v.net is the matrix associated with the graph V.\n\nIf all that makes sense, here is the issue. This is extremely slow. Is there an obvious way to speed it up? 
It seems like my inner block is being executed over and over again, but there should be a way to speed it up?\n\nsdedeo\nPosts: 36\nJoined: Tue Apr 08, 2008 10:52 pm UTC\nContact:\n\n### instance_eval\n\nI poked around a little more, trying to see how to do the thing I thought I knew how to do in LISP -- just make the program write a program already! So you can pass a string to instance_eval, and it allows your code to write code.\n\nThis speeds things up until it is as fast as if you had written the code itself. A time test of the first thing I tried (in the post above), then of using instance-eval to write the inner_summand and passing that as a block to some clever lambdas, then of just having instance-eval write the whole thing, then of just cutting and pasting the particular function.\n\nThis is for a 5x5x5 cube (with sort of a torus topology), with a three-vertex loop.\n\nstandard apply: 34 seconds\nwith instance-eval function of inner_summand: 21 seconds\nwith instance-eval function of loops and inner_sumand: 13 seconds\njust typing out the function for the particular graph in question: 13 seconds\n\nThe lesson is: if you want speed, try to do an instance-eval instead of being clever with lambda.\n\nCode: Select all\n\n` def apply_faster(v) # does the index summations corresponding to the graph in question, over the J_ij matrix v. # define the summations themselves; we'll have 25 spare indicies lying around to use dummy_index=\"abcdefghijklmnopqrstuwxyz\" front_sum = \"counter = 0\\n\" @n.times { |i| front_sum += \"#{v.n}.times { |#{dummy_index[i,1]}| \\n\" } front_sum += \"counter += \" vertex_product = \"\" (@n-1).times { |i| (i+1).upto(@n-1) { |j| vertex_product += \"v.net[#{dummy_index[i,1]},#{dummy_index[j,1]}]\" if @g[i,j] > 1 then vertex_product += \"**#{@g[i,j]}*\" else vertex_product += \"*\" end } } vertex_product.chop! front_sum += \"#{vertex_product}\\n\" @n.times { |i| front_sum += \"}\\n\" } front_sum += \"counter\\n\" instance_eval %{ def all_sum_fast(v) #{front_sum} end } print \"#{front_sum}\\n\" all_sum_fast(v) end`\n\nsdedeo\nPosts: 36\nJoined: Tue Apr 08, 2008 10:52 pm UTC\nContact:\n\n### Re: ruby, Feynman diagrams, lambdas\n\nAnd, the final code twerk -- the most significant of all. I ran the profiler, and noticed a lot of Kernel#kind_of? calls, as well as some calls to GSL. I converted all of the objects in the function to NArray objects (from GSL ones.) Still, many calls that seemed to reference the GSL libraries. Somehow the duck typing of ruby was worried I would pass GSL objects in.\n\nI re-wrote the rest of the code to avoid GSL references, and now the entire computation runs in 3 seconds.\n\nSo, a final question would be: is it possible to \"unload\" a library for a while, or at least to make ruby pretend it does not exist? I would like to use GSL later, but it creates overhead even when the functionality of GSL is not needed.\n\nBeldraen\nPosts: 1\nJoined: Thu Jul 30, 2009 2:10 pm UTC\n\n### Re: ruby, Feynman diagrams, lambdas\n\nThe issue you are finding is due to initially writing to the wrong domain of the solution. Traditional coding is about maintainable code--stuff that is readable; however, you're stating specifically that the domain you want is speed. Those are two different techniques.\n\nRuby is an interpretive, immutable, garbage collected language. 
All calls, methods, arrays will have a performance hit.\n• The point of lambdas is flexibility of tying code dynamically at the cost of all the lookups necessary to find and execute.\n• In mutliple dimension arrays, the variable and subscripts have to be interpreted and resolved. Notably, these are dynamic constructs so internally they are linked lists of linked lists. They are not a flat memory allocation with a simple offset lookup.\n• The repeated use of temporary obects (especially strings) flood the garbage collector. Strings are immutable. If you really want to create one long big string from a bunch of short strings, throw the shorts ones into an array and join them at them end. Array.join is optimized to do this by creating one big string long enough for it all and copies into place.\n\nSo, what is found here is that coding for a specific domain can require specific knowledge of the development environment's design. If you want real speed, in this case you can really take advantage of Ruby's ability to build code. Your final algorythm is deterministic. You know exactly every input variable and the final matrix has a specify definition, so use meta code to def a method in a class that does exactly what you want. The code would generate something like this to be evalutated:\n\nCode: Select all\n\n`Class MyFastMatrix def Do_Matrix_D_3_by_3_V_10_by_10(vec_d, vec_v) # Loop through and grab all references into local variables so we never have to look them up again d_0_by_0 = vec_d[0,0] d_0_by_1 = vec_d[0,1] . . . v_9_by_9 = vec_v[9,9] # Output specific code for each result dynamically into result local variables r_0_by_0 = d_0_by_0 + d_0_by_1 ... blah, blah, blah . . . # Assemble result into array and return it return [ [r_0_by_0], ...end`\n\nAll distant lookups are done once, nothing goes into garbage handling until its over, each calculation relies on the simplest lookup, there are no loops, and the routine, once created, can be used over since it's now actually in the class definition. This is trading coding space for speed.\n\nthoughtfully\nPosts: 2253\nJoined: Thu Nov 01, 2007 12:25 am UTC\nLocation: Minneapolis, MN\nContact:\n\n### Re: ruby, Feynman diagrams, lambdas\n\nThis is exactly the kind of thing Numerical Python is intended to help you with. It offloads the math to C, and is very fast. Read the bit in the manual about ufuncs.\n\nSorry it isn't Ruby",
null,
"",
null,
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.\n-- Antoine de Saint-Exupery\n\nsdedeo\nPosts: 36\nJoined: Tue Apr 08, 2008 10:52 pm UTC\nContact:\n\n### Re: ruby, Feynman diagrams, lambdas\n\nFor those who are interested, the paper that relies (in part) on some of these computations is now out.\n\nFor those with University subscriptions, it can be found at http://rsif.royalsocietypublishing.org/content/early/2012/02/28/rsif.2011.0840.abstract, while the arXiv (open access) version is at http://arxiv.org/abs/1109.2648.\n\nWe had a lot of fun working on this, and it was interesting to try to use ruby to accomplish a task that had, many years ago, been done in restricted cases by hand; an example is http://www.springerlink.com/content/n664krj72h632354/. It took perhaps two weeks to debug the code, and what was perhaps most amazing was to find, at the end, that the group in 1963 had, indeed, done the calculations without error.\n\nSagekilla\nPosts: 382\nJoined: Fri Aug 21, 2009 1:02 am UTC\nLocation: Long Island, NY\n\n### Re: ruby, Feynman diagrams, lambdas\n\nFor the problem you described above, summing V[i, j] V[j, k] V[k, i],\nthis is actually equivalent to perform Tr(V * V * V)\n\nWhere * is the normal matrix-matrix product. On Mathematica, this is\nsomething like 2300 times faster to perform. It may be faster in numerical\nlibraries like NumPy as well.\n\nJust a thought.\nhttp://en.wikipedia.org/wiki/DSV_Alvin#Sinking wrote:Researchers found a cheese sandwich which exhibited no visible signs of decomposition, and was in fact eaten.\n\nsdedeo\nPosts: 36\nJoined: Tue Apr 08, 2008 10:52 pm UTC\nContact:\n\n### Re: ruby, Feynman diagrams, lambdas\n\nIn that case, yes. In other cases, the summations become more complicated -- e.g., a two-loop graph might be\n\nV[i,j] V[j,k] V[k,p] V[p, i] V[k,i]\n\nI seem to remember that graphs like these, with multiple loops, could not be transformed into \"ordinary\" matrix multiplication problems (but would be curious to hear if you or others had interesting suggestions -- it would indeed allow one to piggy-back on the standard, more parallelizable algorithms.)\n\nSagekilla\nPosts: 382\nJoined: Fri Aug 21, 2009 1:02 am UTC\nLocation: Long Island, NY\n\n### Re: ruby, Feynman diagrams, lambdas\n\nDo you have any useful symmetries in your matrix? Sometimes those help.\nIt may also help to store the transposed form of your matrix.\n\nFor the sum:\n\nV[i, j] V[j, k] V[k, p] V[p, i] V[k, i]\n\nInstead of doing it (perhaps) as:\n\nCode: Select all\n\n`for (i ... ) for (j ...) for (k ...) for (p ...) sum <- V[i, j] V[j, k] V[k, p] V[p, i] V[k, i]`\n\nYou can do:\n\nCode: Select all\n\n`for (i ... ) for (j ...) for (k ...) for (p ...) sum <- V[i, j] V[j, k] V[k, p] Vt[i, p] Vt[i, k]`\n\nWhere Vt refers to the transpose of V.\n\nMore generally, it would be useful to use the transposed arrays where you're going with\nthe stride of the array rather than against it. No matter how you slice it, it's a N^k problem\nfor a k-product summation. But if you can utilize the ordering of the matrix to your advantage\nthen you'll get the benefit of spatial locality.\nhttp://en.wikipedia.org/wiki/DSV_Alvin#Sinking wrote:Researchers found a cheese sandwich which exhibited no visible signs of decomposition, and was in fact eaten."
]
| [
null,
"http://echochamber.me/images/smilies/icon_sad.gif",
null,
"http://thinkingplanet.net/~cfuller/img/sig.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9219185,"math_prob":0.7889968,"size":2840,"snap":"2019-26-2019-30","text_gpt3_token_len":801,"char_repetition_ratio":0.114950635,"word_repetition_ratio":0.041198503,"special_character_ratio":0.2919014,"punctuation_ratio":0.14132105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98063016,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T10:40:14Z\",\"WARC-Record-ID\":\"<urn:uuid:51f5e331-bdd8-4a55-9d90-e1d873c2930b>\",\"Content-Length\":\"56835\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b010310e-cf87-466e-9e82-60fc109174a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:204b2de8-347a-4c2b-b9ca-806072859697>\",\"WARC-IP-Address\":\"104.196.146.194\",\"WARC-Target-URI\":\"http://echochamber.me/viewtopic.php?f=11&t=42218\",\"WARC-Payload-Digest\":\"sha1:S2DZQ2AS5HUDE26DTGQYRFFPDH4RXZVJ\",\"WARC-Block-Digest\":\"sha1:JUVCZLQTRQ2ZMM7JLIZJLBNWZJCDKODW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526506.44_warc_CC-MAIN-20190720091347-20190720113347-00127.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/103462/semantic-security-equivalent-to-real-random-semantic-security | [
"# Semantic Security equivalent to Real/Random Semantic Security\n\nI'm reading Boneh and Shoup's book \"A Graduate Course in Applied Cryptography.\" Im doing one of the questions at the end of the stream ciphers chapter. I'm not sure how to do this problem:\n\nLet $$\\mathcal{E} = (E,D)$$ be a cipher defined over $$\\mathcal{K}, \\mathcal{M}, \\mathcal{C}$$. Assume that one can efficiently generate messages from the message space $$\\mathcal{M}$$ at random. We define an attack game between an adversary $$\\mathcal{A}$$ and a challenger as follows. The adversary selects a message $$m \\in \\mathcal{M}$$ and sends $$m$$ to the challenger. The challenger computes $$b \\leftarrow \\{0,1\\}, k \\leftarrow \\mathcal{K}, m_0 \\leftarrow m, m_1 \\xleftarrow{\\\\\\} \\mathcal{M}, c \\leftarrow E(k,m_b)$$ and sends the cipher text $$c$$ to $$\\mathcal{A}$$ who then computes and outputs a bit $$\\hat{b}$$. Define $$\\mathcal{A}$$'s advantage to be $$|Pr[\\hat{b} = b] - 1/2]|$$ and we say $$\\mathcal{E}$$ is real/random secure if this advantage is negligible for all efficient adversaries.\n\nMy attempt:\n\nFor one direction, let $$\\mathcal{E}$$ be semantically secure and let $$B$$ be a real/random adversary. We construct a semantic security adversary $$\\mathcal{A}$$ with $$B$$ as a subroutine.\n\n$$\\mathcal{A}$$ selects $$m_0, m_1 \\in \\mathcal{M}$$ as per the semantic security game. The challenger responds, sending $$\\mathcal{A}$$ $$E(k, m_b)$$ with $$b$$ chosen at random to be $$0$$ or $$1$$. Now $$\\mathcal{A}$$ plays the role of challenger to $$B$$, and sends inputs $$m_0$$ and $$c$$ to $$B$$. (Intuitively $$\\mathcal{A}$$ is asking $$B$$ if $$c$$ is an encryption of $$m_0$$ or some other random message). With these inputs, $$B$$ outputs a bit $$\\hat{b}$$. $$\\mathcal{A}$$ outputs what $$B$$ outputs.\n\nI don't know how to relate $$\\mathcal{A}$$'s semantic security advantage to $$B$$'s real/random semantic security advantage. And I am not sure how to construct a real/random adversary using a semantic security adversary for the other direction.\n\nThank you for any help!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.75775105,"math_prob":0.9999964,"size":1907,"snap":"2023-40-2023-50","text_gpt3_token_len":552,"char_repetition_ratio":0.16868103,"word_repetition_ratio":0.0,"special_character_ratio":0.28736234,"punctuation_ratio":0.09863014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000069,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T18:10:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6098aa60-89da-4493-a484-1b5104737c28>\",\"Content-Length\":\"155055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f09ae352-ca85-422e-98ee-137cd9a1fdf4>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a9f7f47-f0d5-4f24-95a2-f9aeaaa5e939>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/103462/semantic-security-equivalent-to-real-random-semantic-security\",\"WARC-Payload-Digest\":\"sha1:KKDLJ3WMS4ZQYDUHVPO26CGVMD32QAMU\",\"WARC-Block-Digest\":\"sha1:PVESJARJZZS2OD6FNJVJJC5HGOWCPKNI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100135.11_warc_CC-MAIN-20231129173017-20231129203017-00892.warc.gz\"}"} |
https://dsp.stackexchange.com/questions/48088/rc-circuit-frequency-response | [
"# RC circuit frequency response\n\nI am developing a project where I must analyze an incoming signal that was acquired from a microcontroller. The objective is to obtain the main frequency of the incoming signal.\n\nAt first, I’m analyzing an RC step response signal. So, basically, I want to obtain the frequency response, from the characteristic charging curve of an RC circuit.\n\nMy mentor told me that I could achieve this with the FFT. He told me that if I applied it twice at the incoming signal, I would obtain the frequency response curve. I think I did, but my only issue is the frequency axis. They’re completely messed up. Even when I apply the method to obtain the frequencies, it goes bad. (I know it's wrong because my cut-off frequency is around 11 Hz)\n\n1. Why is it that applying the FFT twice to the signal give the frequency response?\n\n2. Does anyone know how I can obtain the correct frequency axis?\n\nTaking the FFT twice is similar to a method called \"cepstrum\". It finds the spacing between harmonics. You need to measure an oscillating signal to apply this. I would go back to your mentor for more clarification.\n\nHere is a solution for accurately measuring your charging rate in your diagram: least square fitting to inverse exponential function\n\nCed\n\nFollowup:\n\nI was very impressed by these articles.\n\nFrom RC Charging Circuit in the section labeled: \"RC Time Constant, Tau\"\n\n$$V_c = V_s ( 1 - e^{-t/(RC)} )$$\n\nSo when $t = R C$ then $\\frac{V_c}{ V_s} \\approx .63$\n\nSince you already know $V_s$, the technique I referenced earlier is not necessary. Looking at your graph $R C \\approx .07$.\n\nFrom Passive Low Pass Filter in the section labeled: \"Cut-off Frequency and Phase Shift\"\n\n$$f_c = \\frac{ 1 }{ 2 * \\pi * R C }$$\n\nYour cutoff frequency can then be found: $f_c \\approx 2.27$\n\nFrom the section labeled: \"RC Low Pass Filter Circuit\"\n\n$$X_c = \\frac{1}{ 2 \\pi f C }$$\n\n$$V_{out} = V_{in} \\cdot \\frac{X_c}{ \\sqrt{ R^2 + X_c^2 } }$$\n\nDivide the numerator and denominator by $X_c$:\n\n$$\\frac{ V_{out} }{ V_{in} } = \\frac{1}{ \\sqrt{ \\left( \\frac{R}{X_c} \\right)^2 + 1 } }$$\n\nSubstitute in $X_c$ and simplify:\n\n$$\\frac{ V_{out} }{ V_{in} } = \\frac{1}{ \\sqrt{ \\left( 2 \\pi f R C \\right)^2 + 1 } }$$\n\nNow take the log (base 10):\n\n$$\\log \\left( \\frac{ V_{out} }{ V_{in} } \\right) = -\\frac{1}{2} \\log \\left( \\left( 2 \\pi f R C \\right)^2 + 1 \\right)$$\n\nFrom the section labeled: \"Low Pass Filter Summary\"\n\n$$Gain_{db} = 20 * \\log \\left( \\frac{ V_{out} }{ V_{in} } \\right)$$\n\nPlug in the log of your voltage ratio to get:\n\n$$Gain_{db} = -10 \\log \\left( \\left( 2 \\pi f R C \\right)^2 + 1 \\right)$$\n\nYour cutoff frequency is below the audible range of Hz, so you are going to attenuate all your frequency. The higher the frequency, the greater the attenuation. To find you fundamental frequency I recommend using a FFT and the frequency calculations I present in my blog articles. You can find the link on my profile page.\n\n• I will. Just to clarify it for me, do you think there is a way to obtain a Bode plot of an RC step response, without having its transfer function? I think, ultimately, this is my question – tbarros Mar 25 '18 at 12:44\n• I'm a little outside my comfort zone answering that. Check this reference out: electronics-tutorials.ws/filter/filter_2.html – Cedron Dawg Mar 25 '18 at 13:06\n• Thank you anyways! I've been fighting with this for a long time, but now I have a starting point! 
– tbarros Mar 25 '18 at 13:39\n• I have worked the math. Your RC is about .07 and your cutoff frequency is about 2.27 Hz (assuming your time scale is in second on your first chat). Do you want me to post the equations or would that be a spoiler? – Cedron Dawg Mar 25 '18 at 15:58\n• I'd be very much grateful,if you could – tbarros Mar 26 '18 at 11:13"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.76537275,"math_prob":0.98758173,"size":2013,"snap":"2019-43-2019-47","text_gpt3_token_len":625,"char_repetition_ratio":0.12692882,"word_repetition_ratio":0.12433863,"special_character_ratio":0.3512171,"punctuation_ratio":0.07651715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99924886,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T10:26:56Z\",\"WARC-Record-ID\":\"<urn:uuid:4cb0e1ad-5446-4a5f-8628-c8a72f18abc7>\",\"Content-Length\":\"142432\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58be28a2-11c2-4a52-b381-10ac76d21721>\",\"WARC-Concurrent-To\":\"<urn:uuid:36435a8e-505c-4e6b-a56b-ec0137ea2935>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/48088/rc-circuit-frequency-response\",\"WARC-Payload-Digest\":\"sha1:D6FHSQFUK6LKQR2ZZW6PS3HOMLXUBFL4\",\"WARC-Block-Digest\":\"sha1:GS5XWY3PXNFHHIN66N4SKPDAJPXHGND5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668618.8_warc_CC-MAIN-20191115093159-20191115121159-00095.warc.gz\"}"} |
https://jazzinsideandout.com/archive.php?id=b76b21-multiple-linear-regression-example-problems-with-solutions | [
"Multiple linear regression is an extension of simple linear regression used to predict an outcome variable (y) on the basis of multiple distinct predictor variables (x).. With three predictor variables (x), the prediction of y is expressed by the following equation: y = b0 + b1*x1 + b2*x2 + b3*x3\n\nFor example, the model can be written in the general form using , and as follows: Estimating Regression Models Using Least Squares. Multiple linear regression analysis can be used to test whether there is a causal link between those variables.\n\nSecondly, multiple linear regression can be used to forecast values: Consider a multiple linear regression model with predictor variables: A dependent variable is modeled as a function of several independent variables with corresponding coefficients, along with the constant term. Multiple regression generally explains the relationship between multiple independent or predictor variables and one dependent or criterion variable. Multivariate regression is a simple extension of multiple regression. MULTIPLE REGRESSION EXAMPLE For a sample of n = 166 college students, the following variables were measured: Y = height X1 = mother’s height (“momheight”) X2 = father’s height (“dadheight”) X3 = 1 if male, 0 if female (“male”) Our goal is to predict student’s height using the mother’s and father’s heights, and sex, where sex is However, multiple linear regression does not prove that the causal direction is from anxiety to personality or the other way around.\n\nAll multiple linear regression models can be expressed in the following general form: where denotes the number of terms in the model. The critical assumption of the model is that the conditional mean function is linear: E(Y|X) = α +βX. Multiple Linear Regression The population model • In a simple linear regression model, a single response measurement Y is related to a single predictor (covariate, regressor) X for each observation. Multiple regression is used to predicting and exchange the values of one variable based on the collective value of more than one value of predictor variables. download: multiple regression examples and solutions pdf Best of all, they are entirely free to find, use and download, so there is no cost or stress at all. multiple regression examples and solutions PDF may not make exciting reading, but multiple"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8694681,"math_prob":0.99584496,"size":2347,"snap":"2020-34-2020-40","text_gpt3_token_len":456,"char_repetition_ratio":0.18992744,"word_repetition_ratio":0.005449591,"special_character_ratio":0.19812527,"punctuation_ratio":0.091133006,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99929625,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T12:14:44Z\",\"WARC-Record-ID\":\"<urn:uuid:0c174c42-80e8-46aa-bc2d-35d5aba021b8>\",\"Content-Length\":\"22602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:306a4c8c-ce6a-4188-8b60-7f21a70d153b>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe7e1c64-02e3-4180-933d-9ead6f933e2b>\",\"WARC-IP-Address\":\"74.124.193.158\",\"WARC-Target-URI\":\"https://jazzinsideandout.com/archive.php?id=b76b21-multiple-linear-regression-example-problems-with-solutions\",\"WARC-Payload-Digest\":\"sha1:ITUUBERB4JV5V5UB6QIMNIVKZ3TYBQS5\",\"WARC-Block-Digest\":\"sha1:YLIOV2HSYIMW2YGPVQPHNSBZWZ7PJDMZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400279782.77_warc_CC-MAIN-20200927121105-20200927151105-00552.warc.gz\"}"} |
https://mathsteacher.blog/category/ncert-class-10-ex-2-3/ | [
"## NCERT Solutions Class 10 Maths – Polynomials | Chapter 2.3 Question 5\n\nClass 10 Maths Polynomials Exercise 2.3 Question – 5 (All 3 parts) Introduction of the Class 10 Maths Polynomials Exercise 2.3 Question 5 (all 3 parts) is given in below snapshot. To understand step-by-step explanation in more details then please watch our YouTube video given below. Image: Class 10 Maths Chapter 2 Polynomials Ex 2.3 […]\n\n## NCERT Solutions Class 10 Maths – Polynomials | Chapter 2.3 Question 4\n\nClass 10 Maths Polynomials Exercise 2.3 Question – 4 Introduction of the Class 10 Maths Polynomials Exercise 2.3 Question 4 is given in below snapshot. To understand step-by-step explanation in more details then please watch our YouTube video given below. Image: Class 10 Maths Chapter 2 Polynomials Ex 2.3 Question 4 More Videos:\n\n## NCERT Solutions Class 10 Maths – Polynomials | Chapter 2.3 Question 3\n\nClass 10 Maths Polynomials Exercise 2.3 Question – 3 Introduction of the Class 10 Maths Polynomials Exercise 2.3 Question 3 is given in below snapshot. To understand step-by-step explanation in more details then please watch our YouTube video given below. Image: Class 10 Maths Chapter 2 Polynomials Ex 2.3 Question 3 More Videos:\n\n## NCERT Solutions Class 10 Maths – Polynomials | Chapter 2.3 Question 2\n\nClass 10 Maths Polynomials Exercise 2.3 Question – 2 (All 3 parts solutions) Introduction of the Class 10 Maths Polynomials Exercise 2.3 Question 2 (all 3 parts) is given in below snapshot. To understand step-by-step explanation in more details then please watch our YouTube video given below. Class 10 Maths Chapter 2 (Polynomials) Ex 2.3 […]\n\n## NCERT Solutions Class 10 Maths – Polynomials | Chapter 2.3 Question 1\n\nClass 10 Maths Polynomials Exercise 2.3 Question – 1 (All 3 parts solutions) Introduction of the Class 10 Maths Polynomials Exercise 2.3 Question 1 (all 3 parts) is given in below snapshot. To understand step-by-step explanation in more details then please watch our YouTube video given below. Class 10 Maths Chapter 2 (Polynomials) Ex 2.3 […]\n\n## NCERT Solutions Class 10 Maths – Polynomials | Chapter 2.3 Introduction\n\nClass 10 Maths Polynomials Exercise 2.3 Introduction (Division algorithm for polynomials) Introduction of the Class 10 Maths Polynomials Exercise 2.3 Introduction is given in below snapshot. To understand step-by-step explanation in more details then please watch our YouTube video given below. More Videos:"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7813047,"math_prob":0.8164002,"size":408,"snap":"2019-51-2020-05","text_gpt3_token_len":105,"char_repetition_ratio":0.16089109,"word_repetition_ratio":0.09375,"special_character_ratio":0.2647059,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942796,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T08:32:28Z\",\"WARC-Record-ID\":\"<urn:uuid:bea992f4-ca7c-4fd2-8fa6-a55b036e52db>\",\"Content-Length\":\"175435\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b447618-8311-4d20-b9d4-cb2631cb3321>\",\"WARC-Concurrent-To\":\"<urn:uuid:fefac8dd-2f09-4f39-b22a-284ea6a24fe2>\",\"WARC-IP-Address\":\"192.0.78.244\",\"WARC-Target-URI\":\"https://mathsteacher.blog/category/ncert-class-10-ex-2-3/\",\"WARC-Payload-Digest\":\"sha1:IXCTOFXLFEK6YGQZI36ZD73DDTIPZLSI\",\"WARC-Block-Digest\":\"sha1:R642MB724LFM647KAZ43YDP5RFJUJAAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250598217.23_warc_CC-MAIN-20200120081337-20200120105337-00478.warc.gz\"}"} |
https://9512.net/read/1bc63aec856c0db933a60ae7.html | [
"9512.net\n\n# Large margin hidden Markov models for automatic speech recognition. Neural Information Proc\n\nLarge Margin Hidden Markov Models for Automatic Speech Recognition\n\nFei Sha Computer Science Division University of California Berkeley, CA 94720-1776 [email protected]\n\nLawrence K. Saul Department of Computer Science and Engineering University of California (San Diego) La Jolla, CA 92093-0404 [email protected]\n\nAbstract\nWe study the problem of parameter estimation in continuous density hidden Markov models (CD-HMMs) for automatic speech recognition (ASR). As in support vector machines, we propose a learning algorithm based on the goal of margin maximization. Unlike earlier work on max-margin Markov networks, our approach is speci?cally geared to the modeling of real-valued observations (such as acoustic feature vectors) using Gaussian mixture models. Unlike previous discriminative frameworks for ASR, such as maximum mutual information and minimum classi?cation error, our framework leads to a convex optimization, without any spurious local minima. The objective function for large margin training of CD-HMMs is de?ned over a parameter space of positive semide?nite matrices. Its optimization can be performed ef?ciently with simple gradient-based methods that scale well to large problems. We obtain competitive results for phonetic recognition on the TIMIT speech corpus.\n\n1 Introduction\nAs a result of many years of widespread use, continuous density hidden Markov models (CDHMMs) are very well matched to current front and back ends for automatic speech recognition (ASR) . Typical front ends compute real-valued feature vectors from the short-time power spectra of speech signals. The distributions of these acoustic feature vectors are modeled by Gaussian mixture models (GMMs), which in turn appear as observation models in CD-HMMs. Viterbi decoding is used to solve the problem of sequential classi?cation in ASR—namely, the mapping of sequences of acoustic feature vectors to sequences of phonemes and/or words, which are modeled by state transitions in CD-HMMs. The simplest method for parameter estimation in CD-HMMs is the Expectation-Maximization (EM) algorithm. The EM algorithm is based on maximizing the joint likelihood of observed feature vectors and label sequences. It is widely used due to its simplicity and scalability to large data sets, which are common in ASR. A weakness of this approach, however, is that the model parameters of CDHMMs are not optimized for sequential classi?cation: in general, maximizing the joint likelihood does not minimize the phoneme or word error rates, which are more relevant metrics for ASR. Noting this weakness, many researchers in ASR have studied alternative frameworks for parameter estimation based on conditional maximum likelihood , minimum classi?cation error and maximum mutual information . The learning algorithms in these frameworks optimize discriminative criteria that more closely track actual error rates, as opposed to the EM algorithm for maximum likelihood estimation. These algorithms do not enjoy the simple update rules and relatively fast convergence of EM, but carefully and skillfully implemented, they lead to lower error rates [13, 20].\n\nRecently, in a new approach to discriminative acoustic modeling, we proposed the use of “large margin GMMs” for multiway classi?cation . 
Inspired by support vector machines (SVMs), the learning algorithm in large margin GMMs is designed to maximize the distance between labeled examples and the decision boundaries that separate different classes . Under mild assumptions, the required optimization is convex, without any spurious local minima. In contrast to SVMs, however, large margin GMMs are very naturally suited to problems in multiway (as opposed to binary) classification; also, they do not require the kernel trick for nonlinear decision boundaries. We showed how to train large margin GMMs as segment-based phonetic classifiers, yielding significantly lower error rates than maximum likelihood GMMs . The integrated large margin training of GMMs and transition probabilities in CD-HMMs, however, was left as an open problem. We address that problem in this paper, showing how to train large margin CD-HMMs in the more general setting of sequential (as opposed to multiway) classification. In this setting, the GMMs appear as acoustic models whose likelihoods are integrated over time by Viterbi decoding. Experimentally, we find that large margin training of HMMs for sequential classification leads to significant improvement beyond the frame-based and segment-based discriminative training in .

Our framework for large margin training of CD-HMMs builds on ideas from many previous studies in machine learning and ASR. It has similar motivation as recent frameworks for sequential classification in the machine learning community [1, 6, 17], but differs in its focus on the real-valued acoustic feature representations used in ASR. It has similar motivation as other discriminative paradigms in ASR [3, 4, 5, 11, 13, 20], but differs in its goal of margin maximization and its formulation of the learning problem as a convex optimization over positive semidefinite matrices. The recent margin-based approach of is closest in terms of its goals, but entirely different in its mechanics; moreover, its learning is limited to the mean parameters in GMMs.

2 Large margin GMMs for multiway classification

Before developing large margin HMMs for ASR, we briefly review large margin GMMs for multiway classification . The problem of multiway classification is to map inputs $x \in \mathbb{R}^d$ to labels $y \in \{1, 2, \ldots, C\}$, where $C$ is the number of classes. Large margin GMMs are trained from a set of labeled examples $\{(x_n, y_n)\}_{n=1}^{N}$. They have many parallels to SVMs, including the goal of margin maximization and the use of a convex surrogate to the zero-one loss . Unlike SVMs, where classes are modeled by half-spaces, in large margin GMMs the classes are modeled by collections of ellipsoids. For this reason, they are more naturally suited to problems in multiway as opposed to binary classification. Sections 2.1–2.3 review the basic framework for large margin GMMs: first, the simplest setting in which each class is modeled by a single ellipsoid; second, the formulation of the learning problem as a convex optimization; third, the general setting in which each class is modeled by two or more ellipsoids. Section 2.4 presents results on handwritten digit recognition.

2.1 Parameterization of the decision rule

The simplest large margin GMMs model each class by a single ellipsoid in the input space. The ellipsoid for class $c$ is parameterized by a centroid vector $\mu_c \in \mathbb{R}^d$ and a positive semidefinite matrix $\Psi_c \in \mathbb{R}^{d \times d}$ that determines its orientation. Also associated with each class is a nonnegative scalar offset $\theta_c \ge 0$.
The decision rule labels an example $x \in \mathbb{R}^d$ by the class whose centroid yields the smallest Mahalanobis distance:

$$y = \mathop{\mathrm{argmin}}_c \left\{ (x-\mu_c)^T \Psi_c (x-\mu_c) + \theta_c \right\}. \qquad (1)$$

The decision rule in eq. (1) is merely an alternative way of parameterizing the maximum a posteriori (MAP) label in traditional GMMs with mean vectors $\mu_c$, covariance matrices $\Psi_c^{-1}$, and prior class probabilities $p_c$, given by $y = \mathop{\mathrm{argmax}}_c \{\, p_c\, \mathcal{N}(\mu_c, \Psi_c^{-1}) \,\}$.

The argument on the right hand side of the decision rule in eq. (1) is nonlinear in the ellipsoid parameters $\mu_c$ and $\Psi_c$. As shown in , however, a useful reparameterization yields a simpler expression. For each class $c$, the reparameterization collects the parameters $\{\mu_c, \Psi_c, \theta_c\}$ in a single enlarged matrix $\Phi_c \in \mathbb{R}^{(d+1)\times(d+1)}$:

$$\Phi_c = \begin{bmatrix} \Psi_c & -\Psi_c \mu_c \\ -\mu_c^T \Psi_c & \mu_c^T \Psi_c \mu_c + \theta_c \end{bmatrix}. \qquad (2)$$
This optimization is also an instance of semide?nite programming. 2.3 Softmax margin maximization for multiple mixture components Lastly we review the extension to mixture modeling where each class is represented by multiple ellipsoids . Let Φcm denote the matrix for the mth ellipsoid (or mixture component) in class c. We imagine that each example xn has not only a class label yn , but also a mixture component label mn . Such labels are not provided a priori in the training data, but we can generate “proxy” labels by ?tting GMMs to the examples in each class by maximum likelihood estimation, then for each example, computing the mixture component with the highest posterior probability. In the setting where each class is represented by multiple ellipsoids, the goal of learning is to ensure that each example is closer to its “target” ellipsoid than the ellipsoids from all other classes. Speci?cally, for a labeled example (xn , yn , mn ), the constraint in eq. (4) is replaced by the M constraints: ?c = yn , ?m, z T (Φcm ? Φyn mn )z n ≥ 1, n (7)\n\nFigure 1: Decision boundary in a large margin GMM: labeled examples lie at least one unit of distance away.\n\nwhere M is the number of mixture components (assumed, for simplicity, to be the same for each class). We fold these multiple constraints into a single one by appealing to the “softmax” inequality: minm am ≥ ? log m e?am . Speci?cally, using the inequality to derive a lower bound on minm z T Φcm z n , we replace the M constraints in eq. (7) by the stricter constraint: n ?c = yn , ? log\nm\n\nWe will use a similar technique in section 3 to handle the exponentially many constraints that arise in sequential classi?cation. Note that the inequality in eq. (8) implies the inequality of eq. (7) but not vice versa. Also, though nonlinear in the matrices {Φcm }, the constraint in eq. (8) is still convex. The objective function in eq. (6) extends straightforwardly to this setting. It balances a regularizing term that sums over ellipsoids versus a penalty term that sums over slack variables, one for each constraint in eq. (8). The optimization is given by: min cm trace(Ψcm ) nc ξnc + γ T T s.t. 1 + z n Φyn mn z n + log m e?zn Φcm zn ≤ ξnc , ξnc ≥ 0, ?c = yn , n = 1, 2, . . . , N Φcm ? 0, c = 1, 2, . . . , C, m = 1, 2, . . . , M\n\nThis optimization is not an instance of semide?nite programming, but it is convex. We discuss how to perform the optimization ef?ciently for large data sets in appendix A. 2.4 Handwritten digit recognition We trained large margin GMMs for multiway classi?cation of MNIST handwritten digits . The MNIST data set has 60000 training examples and 10000 test examples. Table 1 shows that the large margin GMMs yielded signi?cantly lower test error rates than GMMs trained by maximum likelihood estimation. Our best results are comparable to the best SVM results (1.0-1.4%) on deskewed images that do not make use of prior knowledge. For our best model, with four mixture components per digit class, the core training optimization over all training examples took ?ve minutes on a PC. (Multiple runs of this optimization on smaller validation sets, however, were also required to set two hyperparameters: the regularizer for model complexity, and the termination criterion for early stopping.)\n\n3 Large margin HMMs for sequential classi?cation\nIn this section, we extend the framework in the previous section from multiway classi?cation to sequential classi?cation. 
Particularly, we have in mind the application to ASR, where GMMs are used to parameterize the emission densities of CD-HMMs. Strictly speaking, the GMMs in our framework cannot be interpreted as emission densities because their parameters are not constrained to represent normalized distributions. Such an interpretation, however, is not necessary for their use as discriminative models. In sequential classi?cation by CD-HMMs, the goal is to infer the correct hidden state sequence y = [y1 , y2 , . . . , yT ] given the observation sequence X = [x1 , x2 , . . . , xT ]. In the application to ASR, the hidden states correspond to phoneme labels, and the observations are\n\nn\n\ni\n\ng\n\nr\n\na m y r a d n u o b n o i s i c e d\n\nmixture 1 2 4 8\n\nEM 4.2% 3.4% 3.0% 3.3%\n\nmargin 1.4% 1.4% 1.2% 1.5%\n\nTable 1: Test error rates on MNIST digit recognition: maximum likelihood versus large margin GMMs.\n\ne?zn Φcm zn ? z T Φyn mn z n ≥ 1. n\n\nT\n\n(8)\n\n(9)\n\nacoustic feature vectors. Note that if an observation sequence has length T and each label can belong to C classes, then the number of incorrect state sequences grows as O(C T ). This combinatorial explosion presents the main challenge for large margin methods in sequential classi?cation: how to separate the correct hidden state sequence from the exponentially large number of incorrect ones. The section is organized as follows. Section 3.1 explains the way that margins are computed for sequential classi?cation. Section 3.2 describes our algorithm for large margin training of CD-HMMs. Details are given only for the simple case where the observations in each hidden state are modeled by a single ellipsoid. The extension to multiple mixture components closely follows the approach in section 2.3 and can be found in [14, 16]. Margin-based learning of transition probabilities is likewise straightforward but omitted for brevity. Both these extensions were implemented, however, for the experiments on phonetic recognition in section 3.3. 3.1 Margin constraints for sequential classi?cation We start by de?ning a discriminant function over state (label) sequences of the CD-HMM. Let a(i, j) denote the transition probabilities of the CD-HMM, and let Φs denote the ellipsoid parameters of state s. The discriminant function D(X, s) computes the score of the state sequence s = [s1 , s2 , . . . , sT ] on an observation sequence X = [x1 , x2 , . . . , xT ] as:\nT\n\nD(X, s) =\nt\n\nlog a(st?1 , st ) ?\nt=1\n\nz T Φst z t . t\n\n(10)\n\nThis score has the same form as the log-probability log P (X, s) in a CD-HMM with Gaussian emission densities. The ?rst term accumulates the log-transition probabilities along the state sequence, while the second term accumulates “acoustic scores” computed as the Mahalanobis distances to each state’s centroid. In the setting where each state is modeled by multiple mixture components, the acoustic scores from individual Mahalanobis distances are replaced with “softmax” distances of T the form log M e?zt Φst m zt , as described in section 2.3 and [14, 16]. m=1 We introduce margin constraints in terms of the above discriminant function. Let H(s, y) denote the Hamming distance (i.e., the number of mismatched labels) between an arbitrary state sequence s and the target state sequence y. Earlier, in section 2 on multiway classi?cation, we constrained each labeled example to lie at least one unit distance from the decision boundary to each competing class; see eq. (4). 
We introduce margin constraints in terms of the above discriminant function. Let $H(s, y)$ denote the Hamming distance (i.e., the number of mismatched labels) between an arbitrary state sequence $s$ and the target state sequence $y$. Earlier, in section 2 on multiway classification, we constrained each labeled example to lie at least one unit distance from the decision boundary to each competing class; see eq. (4). Here, by extension, we constrain the score of each target sequence to exceed that of each competing sequence by an amount equal to or greater than the Hamming distance:

$$\forall s \neq y, \qquad D(X, y) - D(X, s) \ \geq\ H(s, y) \qquad\qquad (11)$$

Intuitively, eq. (11) requires that the (log-likelihood) gap between the score of an incorrect sequence $s$ and the target sequence $y$ should grow in proportion to the number of individual label errors. The appropriateness of such proportional constraints for sequential classification was first noted by [17].

3.2 Softmax margin maximization for sequential classification

The challenge of large margin sequence classification lies in the exponentially large number of constraints, one for each incorrect sequence $s$, embodied by eq. (11). We will use the same softmax inequality, previously introduced in section 2.3, to fold these multiple constraints into one, thus considerably simplifying the optimization required for parameter estimation. We first rewrite the constraint in eq. (11) as:

$$-D(X, y) + \max_{s \neq y}\{H(s, y) + D(X, s)\} \ \leq\ 0 \qquad\qquad (12)$$

We obtain a more manageable constraint by substituting a softmax upper bound for the max term and requiring that the inequality still hold:

$$-D(X, y) + \log \sum_{s \neq y} e^{H(s, y) + D(X, s)} \ \leq\ 0 \qquad\qquad (13)$$

Note that eq. (13) implies eq. (12) but not vice versa. As in the setting for multiway classification, the objective function for sequential classification balances two terms: one regularizing the scale of the GMM parameters, the other penalizing margin violations. Denoting the training sequences by $\{X_n, y_n\}_{n=1}^{N}$ and the slack variables (one for each training sequence) by $\xi_n \geq 0$, we obtain the following convex optimization:

$$\min\ \sum_n \xi_n + \gamma \sum_{cm} \mathrm{trace}(\Psi_{cm}) \qquad\qquad (14)$$
$$\text{s.t.}\quad -D(X_n, y_n) + \log \sum_{s \neq y_n} e^{H(s, y_n) + D(X_n, s)} \leq \xi_n,\quad \xi_n \geq 0, \qquad n = 1, 2, \ldots, N$$
$$\Phi_{cm} \succeq 0, \qquad c = 1, 2, \ldots, C,\quad m = 1, 2, \ldots, M$$

It is worth emphasizing several crucial differences between this optimization and previous ones [4, 11, 20] for discriminative training of CD-HMMs for ASR. First, the softmax large margin constraint in eq. (13) is a differentiable function of the model parameters, as opposed to the "hard" maximum in eq. (12) and the number of classification errors in the MCE training criteria [4]. The constraint and its gradients with respect to GMM parameters $\Phi_{cm}$ and transition parameters $a(\cdot, \cdot)$ can be computed efficiently using dynamic programming, by a variant of the standard forward-backward procedure in HMMs. Second, due to the reparameterization in eq. (2), the discriminant function $D(X_n, y_n)$ and the softmax function are convex in the model parameters. Therefore, the optimization in eq. (14) can be cast as convex optimization, avoiding spurious local minima. Third, the optimization not only increases the log-likelihood gap between correct and incorrect state sequences, but also drives the gap to grow in proportion to the number of individually incorrect labels (which we believe leads to more robust generalization). Finally, compared to earlier large margin frameworks, the softmax handling of the exponentially large number of margin constraints makes it possible to train on larger data sets. We discuss how to perform the optimization efficiently in appendix A.
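The quantity $\log \sum_s e^{H(s,y)+D(X,s)}$ in eq. (13) can be accumulated with a forward recursion in log-space. The sketch below is our own illustration, not the paper's implementation: it sums over all state sequences rather than excluding $s = y$ (which only makes the resulting constraint slightly stricter) and assumes a flat initial-state distribution.

```python
import numpy as np
from scipy.special import logsumexp

def softmax_margin_score(Z, y, log_a, Phi):
    """log sum_s exp(H(s, y) + D(X, s)), computed by a forward pass.

    Z     : (T, d+1) augmented observation vectors
    y     : (T,) integer target labels y_t
    log_a : (S, S) log transition scores
    Phi   : (S, d+1, d+1) per-state matrices
    """
    T, S = Z.shape[0], Phi.shape[0]
    # per-frame acoustic scores: -z_t^T Phi_s z_t for every frame t and state s
    frame = -np.einsum('ti,sij,tj->ts', Z, Phi, Z)       # (T, S)
    # +1 whenever the state differs from the target label -> accumulates H(s, y)
    hamming = 1.0 - np.eye(S)[y]                         # (T, S)
    alpha = frame[0] + hamming[0]
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + log_a, axis=0) + frame[t] + hamming[t]
    return logsumexp(alpha)
```

The margin constraint of eq. (13) then compares this value against `sequence_score(Z, y, log_a, Phi)` from the previous sketch; the gradients needed for training can be obtained from the corresponding backward pass or by automatic differentiation.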
3.3 Phoneme recognition

We used the TIMIT speech corpus [7, 9, 12] to perform experiments in phonetic recognition. We followed standard practices in preparing the training, development, and test data. Our signal processing front-end computed 39-dimensional acoustic feature vectors from 13 mel-frequency cepstral coefficients and their first and second temporal derivatives. In total, the training utterances gave rise to roughly 1.2 million frames, all of which were used in training. We trained baseline maximum likelihood recognizers and two different types of large margin recognizers. The large margin recognizers in the first group were "low-cost" discriminative CD-HMMs whose GMMs were merely trained for frame-based classification. In particular, these GMMs were estimated by solving the optimization in eq. (8), then substituted into first-order CD-HMMs for sequence decoding. The large margin recognizers in the second group were fully trained for sequential classification. In particular, their CD-HMMs were estimated by solving the optimization in eq. (14), generalized to multiple mixture components and adaptive transition parameters [14, 16]. In all the recognizers, the acoustic feature vectors were labeled by 48 phonetic classes, each represented by one state in a first-order CD-HMM. For each recognizer, we compared the phonetic state sequences obtained by Viterbi decoding to the "ground-truth" phonetic transcriptions provided by the TIMIT corpus. For the purpose of computing error rates, we followed standard conventions in mapping the 48 phonetic state labels down to 39 broader phone categories. We computed two different types of phone error rates, one based on Hamming distance, the other based on edit distance. The former was computed simply from the percentage of mismatches at the level of individual frames. The latter was computed by aligning the Viterbi and ground truth transcriptions using dynamic programming and summing the substitution, deletion, and insertion error rates from the alignment process. The "frame-based" phone error rate computed from Hamming distances is more closely tracked by our objective function for large margin training, while the "string-based" phone error rate computed from edit distances provides a more relevant metric for ASR. Tables 2 and 3 show the results of our experiments. For both types of error rates, and across all model sizes, the best performance was consistently obtained by large margin CD-HMMs trained for sequential classification. Moreover, among the two different types of large margin recognizers, utterance-based training generally yielded significant improvement over frame-based training. Discriminative learning of CD-HMMs is an active research area in ASR. Two types of algorithms have been widely used: maximum mutual information (MMI) and minimum classification error (MCE).

| mixture (per state) | baseline (EM) | margin (frame) | margin (utterance) |
| --- | --- | --- | --- |
| 1 | 45% | 37% | 30% |
| 2 | 45% | 36% | 29% |
| 4 | 42% | 35% | 28% |
| 8 | 41% | 34% | 27% |

Table 2: Frame-based phone error rates, from Hamming distance, of different recognizers. See text for details.

| mixture (per state) | baseline (EM) | margin (frame) | margin (utterance) |
| --- | --- | --- | --- |
| 1 | 40.1% | 36.3% | 31.2% |
| 2 | 36.5% | 33.5% | 30.8% |
| 4 | 34.7% | 32.6% | 29.8% |
| 8 | 32.7% | 31.0% | 28.2% |

Table 3: String-based phone error rates, from edit distance, of different recognizers. See text for details.

In [16], we compare the large margin training proposed in this paper to both MMI and MCE systems for phoneme recognition trained on the exact same acoustic features.
There we find that the large margin approach leads to lower error rates, owing perhaps to the absence of local minima in the objective function and/or the use of margin constraints based on Hamming distances.

4 Discussion

Discriminative learning of sequential models is an active area of research in both ASR [10, 13, 20] and machine learning [1, 6, 17]. This paper makes contributions to lines of work in both communities. First, in distinction to previous work in ASR, we have proposed a convex, margin-based cost function that penalizes incorrect decodings in proportion to their Hamming distance from the desired transcription. The use of the Hamming distance in this context is a crucial insight from the work of [17] in the machine learning community, and it differs profoundly from merely penalizing the log-likelihood gap between incorrect and correct transcriptions, as commonly done in ASR. Second, in distinction to previous work in machine learning, we have proposed a framework for sequential classification that naturally integrates with the infrastructure of modern speech recognizers. Using the softmax function, we have also proposed a novel way to monitor the exponentially many margin constraints that arise in sequential classification. For real-valued observation sequences, we have shown how to train large margin HMMs via convex optimizations over their parameter space of positive semidefinite matrices. Finally, we have demonstrated that these learning algorithms lead to improved sequential classification on data sets with over one million training examples (i.e., phonetically labeled frames of speech). In ongoing work, we are applying our approach to large vocabulary ASR and other tasks such as speaker identification and visual object recognition.

A Solver

The optimizations in eqs. (5), (6), (9) and (14) are convex: specifically, in terms of the matrices that parameterize large margin GMMs and HMMs, the objective functions are linear, while the constraints define convex sets. Despite being convex, however, these optimizations cannot be managed by off-the-shelf numerical optimization solvers or generic interior point methods for problems as large as the ones in this paper. We devised our own special-purpose solver for these purposes. For simplicity, we describe our solver for the optimization of eq. (6), noting that it is easily extended to eqs. (9) and (14). To begin, we eliminate the slack variables and rewrite the objective function in terms of the hinge loss function: $\mathrm{hinge}(z) = \max(0, z)$. This yields the objective function:

$$L = \sum_{n,\, c \neq y_n} \mathrm{hinge}\!\left(1 + z_n^T (\Phi_{y_n} - \Phi_c)\, z_n\right) \ +\ \gamma \sum_c \mathrm{trace}(\Psi_c), \qquad\qquad (15)$$

which is convex in terms of the positive semidefinite matrices $\Phi_c$. We minimize $L$ using a projected subgradient method [2], taking steps along the subgradient of $L$, then projecting the matrices $\{\Phi_c\}$ back onto the set of positive semidefinite matrices after each update. This method is guaranteed to converge to the global minimum, though it typically converges very slowly. For faster convergence, we precede this method with an unconstrained conjugate gradient optimization in the square-root matrices $\{\Lambda_c\}$, where $\Phi_c = \Lambda_c \Lambda_c^T$. The latter optimization is not convex, but in practice it rapidly converges to an excellent starting point for the projected subgradient method.
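The projection step in such a solver amounts to clipping negative eigenvalues. A minimal sketch (ours, not the authors' code):

```python
import numpy as np

def project_psd(Phi):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    sym = 0.5 * (Phi + Phi.T)                 # symmetrize against round-off
    w, V = np.linalg.eigh(sym)                # eigendecomposition
    return (V * np.clip(w, 0.0, None)) @ V.T  # zero out negative eigenvalues

def projected_subgradient_step(Phi, grad, step):
    """One update Phi <- proj_PSD(Phi - step * grad) for the loss in eq. (15)."""
    return project_psd(Phi - step * grad)
```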
Acknowledgment

This work was supported by the National Science Foundation under grant Number 0238323. We thank F. Pereira, K. Crammer, and S. Roweis for useful discussions and correspondence. Part of this work was conducted while both authors were affiliated with the University of Pennsylvania.

References

[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In T. Fawcett and N. Mishra, editors, Proceedings of the Twentieth International Conference (ICML 2003), pages 3–10, Washington, DC, USA, 2003. AAAI Press.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, 1999.
[3] P. S. Gopalakrishnan, D. Kanevsky, A. Nádas, and D. Nahamoo. An inequality for rational functions with applications to some statistical estimation problems. IEEE Trans. Info. Theory, 37(1):107–113, 1991.
[4] B.-H. Juang and S. Katagiri. Discriminative learning for minimum error classification. IEEE Trans. Sig. Proc., 40(12):3043–3054, 1992.
[5] S. Kapadia, V. Valtchev, and S. Young. MMI training for continuous phoneme recognition on the TIMIT database. In Proc. of ICASSP 93, volume 2, pages 491–494, Minneapolis, MN, 1993.
[6] J. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning (ICML 2001), pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[7] L. F. Lamel, R. H. Kassel, and S. Seneff. Speech database development: design and analysis of the acoustic-phonetic corpus. In L. S. Baumann, editor, Proceedings of the DARPA Speech Recognition Workshop, pages 100–109, 1986.
[8] Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In F. Fogelman and P. Gallinari, editors, Proceedings of the International Conference on Artificial Neural Networks, pages 53–60, 1995.
[9] K. F. Lee and H. W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11):1641–1648, 1988.
[10] X. Li, H. Jiang, and C. Liu. Large margin HMMs for speech recognition. In Proceedings of ICASSP 2005, pages 513–516, Philadelphia, 2005.
[11] A. Nádas. A decision-theoretic formulation of a training problem in speech recognition and a comparison of training by unconditional versus conditional maximum likelihood. IEEE Transactions on Acoustics, Speech and Signal Processing, 31(4):814–817, 1983.
[12] T. Robinson. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298–305, 1994.
[13] J. L. Roux and E. McDermott. Optimization methods for discriminative training. In Proceedings of Ninth European Conference on Speech Communication and Technology (EuroSpeech 2005), pages 3341–3344, Lisbon, Portugal, 2005.
[14] F. Sha. Large margin training of acoustic models for speech recognition. PhD thesis, University of Pennsylvania, 2007.
[15] F. Sha and L. K. Saul. Large margin Gaussian mixture modeling for phonetic classification and recognition. In Proceedings of ICASSP 2006, pages 265–268, Toulouse, France, 2006.
[16] F. Sha and L. K. Saul. Comparison of large margin training to other discriminative methods for phonetic recognition by hidden Markov models. In Proceedings of ICASSP 2007, Hawaii, 2007.
[17] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems (NIPS 16). MIT Press, Cambridge, MA, 2004.
[18] L. Vandenberghe and S. P. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, March 1996.
[19] V. Vapnik. Statistical Learning Theory. Wiley, N.Y., 1998.
[20] P. C. Woodland and D. Povey. Large scale discriminative training of hidden Markov models for speech recognition. Computer Speech and Language, 16:25–47, 2002.
[21] S. J. Young. Acoustic modelling for large vocabulary continuous speech recognition. In K. Ponting, editor, Computational Models of Speech Pattern Processing, pages 18–39. Springer, 1999.",
null,
"更多相关文章:\n...Markov Models for Automatic Speech Recognition_....pdf\nHidden Markov Models for Automatic Speech Recognition_电子/电路_工程科技_专业资料。JunlfcaisniergadAuotn12 )87 orao hnc gnei n tmai (0 63 MeEn ...\nA Tutorial on Hidden Markov Models and Selected App....pdf\nHidden Markov models for... 25页 免费 Speech Recognition...Large margin hidden mark... 119页 免费 Factor ...a markov process to automatic speech recognit...\n2011Context-Dependent Pre-trained Deep Neural Networks.pdf\n2011Context-Dependent Pre-trained Deep Neural Networks..., large margin hidden Markov model (HMM) ...models for automatic speech recognition,” Neuro...\n...USING NEURAL NETWORKS AND HIDDEN MARKOV MODELS.pdf\nNEURAL NETWORKS AND HIDDEN MARKOV MODELS_专业资料...for robust distant-talking speech recognition 26, ...Large margin hidden Ma... 8页 免费 1 Time...\nBoth Hidden Markov Models and Neural Networks have.pdf\nand used successfully for speech recognition tasks....neural models (i.e. one state PNNs) were ...Large margin hidden Ma... 8页 免费 ...\nHIDDEN MARKOV MODELS FOR DNA SEQUENCING.pdf\nHIDDEN MARKOV MODELS FOR DNA SEQUENCING_专业资料。...cial Neural Networks (ANNs) can capture the ...Large margin hidden ma... 119页 免费 Hidden...\nHidden Markov models merging acoustic and articulat....pdf\nfor robust speech recognition systems where visual ...Neural Networks or of Hidden Markov Models ...Large margin hidden Ma... 8页 免费 ...\n...for Acoustic Modeling in Speech Recognition The ....pdf\nDeep Neural Networks for Acoustic Modeling in Speech...of the art by a large margin. We To reduce ...hidden Markov models for speech recognition,” ...\nAn Empirical Exploration of Hidden Markov Models Fr....pdf\nFrom Spelling Recognition to Speech Recogn_专业资料...Hidden Markov models for automatic spelling ...Large margin hidden ma... 119页 免费 ...\n...and recognition using hidden markov models.pdf\nFor instance, in 6], a neural network extracts...the large variability of complex temporal signals....Markov Models Hidden with 1 i N ; 1 j N 2...\n25.2 Description of Hidden Markov Models.pdf\nHidden Markov Models and give a method for ...neural nets in that it is susceptible to local ...of Markov process to automatic speech recognition....\nA Statistical Approach to Speech Recognition.pdf\nA Statistical Approach to Speech Recognition_专业资料。In this talk I will describe one approach for training hidden Markov models (HMMs) for automatic ...\nHidden Markov Models and Selectively Trained Neural....pdf\nHIDDEN MARKOV MODELS AND SELECTIVELY TRAINED NEURAL...cial neural networks for the recognition of ...Large margin hidden Ma... 8页 免费 ...\n\nneural network (DNN) hidden Markov models for ...HMMs on large vocabulary speech recognition tasks....when the degree of sparseness is high (i.e.,...\n...the Use of Markov Models and Arti cial Neural Ne....pdf\nUse of Markov Models and Arti cial Neural Networks for Speech Recog_专业...Large margin hidden ma... 119页 免费 Large margin hidden Ma... 8...\n...using Hidden Markov Models and Neural Networks.pdf\nNeural networks, Hidden Markov Models, speech, cursive...large pool of J di erent basic Gaussian densIII...(i.e. 
number of di erent recognition there ...\nhidden markov models for....pdf\nHidden Markov Models for Endpoint Detection in Plasma...recognition methods such as neural networks are ...On entering state i a duration time di is ...\n...of Environmental noise events by Hidden Markov Models.pdf\nEnvironmental noise events by Hidden Markov Models_...methodologies for the automatic recognition of noise...s or hybrids neural networks/HMM’s because ...\nmarkov models and hidden....pdf\nI Markov Models and Hidden Markov Models: A ...automatic speech recognition (ASR) here; for the...Neural Computation 9.227{270. Deller, J., J. ...\n...classification based on hidden Markov models.pdf\nHe uses hidden Markov models and a neural net ...These have been used in speech recognition ...not large enough for a system based on ...",
null,
"更多相关标签:"
]
| [
null,
"https://9512.net/pic/sl.gif",
null,
"https://9512.net/pic/down.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8809766,"math_prob":0.9486748,"size":31402,"snap":"2019-13-2019-22","text_gpt3_token_len":7779,"char_repetition_ratio":0.14182432,"word_repetition_ratio":0.05954466,"special_character_ratio":0.24020763,"punctuation_ratio":0.17037863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97174275,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T17:54:57Z\",\"WARC-Record-ID\":\"<urn:uuid:18e9a4aa-fab5-41f2-8686-f8c16fc781e4>\",\"Content-Length\":\"54193\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7c7f467-2d62-45b7-8898-4ffbcf42405c>\",\"WARC-Concurrent-To\":\"<urn:uuid:431b2e00-2a80-4e84-a53b-af8fa56e4e65>\",\"WARC-IP-Address\":\"103.112.211.232\",\"WARC-Target-URI\":\"https://9512.net/read/1bc63aec856c0db933a60ae7.html\",\"WARC-Payload-Digest\":\"sha1:WRAHTMG2MII2U3NAD3RMYZZKNLT3WBMO\",\"WARC-Block-Digest\":\"sha1:JSZP6QF7IQL7CHOZ3NAL5TPQ2W3BTKLG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202450.64_warc_CC-MAIN-20190320170159-20190320192159-00294.warc.gz\"}"} |
https://byjus.com/question-answer/let-p-be-the-point-of-intersection-of-the-common-tangents-to-the-parabola-y/ | [
"",
null,
"",
null,
"Question\n\n# Let P be the point of intersection of the common tangents to the parabola y2=12x and the hyperbola 8x2−y2=8. If S and S′ denote the foci of the hyperbola where S lies on the positive x−axis then P divides SS′ in a ratio14:1313:115:42:1\n\nSolution\n\n## The correct option is C 5:4Equation of parabola y2=12x So equation of its tangent : y=3x+3m Equation of hyperbola x21−y28=1 Eccentricity of hyperbola e=√1+8=3 S(ae,0)=S(3,0) & S′(−ae,0)=S′(−3,0) And equation of its tangent : y=mx±√m2−8 Both tangent are coomon tangents Therefore, 9m2=m2−8 Let m2=t t2−8t−9=0 ⇒t=m2=9, −1( not possible) ⇒m=±3 ∴y=3x+1y=−3x−1 Therefore point of intersection of common tangents P(−13,0) Let P divides SS′ in a ratio of m:n ∴P(−13, 0)=P(−3m+3nm+n, 0) ⇒−m−n=−9m+9n⇒mn=54",
null,
"",
null,
"Suggest corrections",
null,
"",
null,
"",
null,
""
]
| [
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMzIiIGhlaWdodD0iMzIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjQiIGhlaWdodD0iMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6036132,"math_prob":0.99727094,"size":507,"snap":"2021-43-2021-49","text_gpt3_token_len":259,"char_repetition_ratio":0.1471173,"word_repetition_ratio":0.028985508,"special_character_ratio":0.40039447,"punctuation_ratio":0.099236645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000002,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T23:03:11Z\",\"WARC-Record-ID\":\"<urn:uuid:91645e63-8d6e-42b2-95ec-aba94240ac6e>\",\"Content-Length\":\"543369\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02faeff9-551e-429c-9d1c-abcb20c4b663>\",\"WARC-Concurrent-To\":\"<urn:uuid:f508db2d-b620-41c1-b7de-48b5c7e3275f>\",\"WARC-IP-Address\":\"162.159.129.41\",\"WARC-Target-URI\":\"https://byjus.com/question-answer/let-p-be-the-point-of-intersection-of-the-common-tangents-to-the-parabola-y/\",\"WARC-Payload-Digest\":\"sha1:NQYV5DRQSGEAHE4JAWZYQOC3QNMXTKQZ\",\"WARC-Block-Digest\":\"sha1:NMPLHKVCGQ74PYKZ363CNPK37VE2I2JQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358323.91_warc_CC-MAIN-20211127223710-20211128013710-00485.warc.gz\"}"} |
https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/08-personal-income-expenditure-and-budget-06 | [
"We think you are located in United States. Is this correct?\n\n# End of chapter activity\n\nExercise 8.6\n\nChuma writes down the following percentages for each of the items in her budget:\n\n Clothes $$\\text{40}\\%$$ Entertainment",
null,
"$$\\text{30}\\%$$ Fixed savings account $$\\text{10}\\%$$ Transport",
null,
"$$\\text{5}\\%$$ Donations $$\\text{5}\\%$$ Tuck shop spending $$\\text{10}\\%$$ Total $$\\text{100}\\%$$\n\nIf she has earned an income of $$\\text{R}\\,\\text{500}$$ in a particular month, calculate exactly how much money she can allocate to each of the above items.\n\nClothes: $$\\text{R}\\,\\text{200}$$, Entertainment: $$\\text{R}\\,\\text{150}$$. Fixed savings account: $$\\text{R}\\,\\text{50}$$. Transport: $$\\text{R}\\,\\text{25}$$. Donations: $$\\text{R}\\,\\text{25}$$. Tuck shop spending: $$\\text{R}\\,\\text{50}$$.\n\nAmanda budgeted her monthly expenses as follows:\n\n Clothes",
null,
"$$\\text{25}\\%$$ Entertainment $$\\text{40}\\%$$ Transport $$\\text{10}\\%$$ Tuck shop spending",
null,
"$$\\text{15}\\%$$ Donations $$\\text{5}\\%$$ Unforeseen costs $$\\text{5}\\%$$ Total $$\\text{100}\\%$$\n\nIf she has $$\\text{R}\\,\\text{1 800}$$ to spend this month, how much money can she allocate to each expenditure item?\n\nClothes: $$\\text{R}\\,\\text{450}$$. Entertainment: $$\\text{R}\\,\\text{720}$$. Transport: $$\\text{R}\\,\\text{180}$$. Tuckshop spending: $$\\text{R}\\,\\text{270}$$. Donations: $$\\text{R}\\,\\text{90}$$. Unforeseen costs: $$\\text{R}\\,\\text{90}$$.\n\nLook at the family budget for the month of December 2013, for the Philander family. There are two adults and two children (both in school) in the family.\n\n Item Expenditure Income Total income less total cost Fixed Variable Mrs Philander's salary $$\\text{R}\\,\\text{9 500}$$ Mr Philander's salary a) Additional income b) Bond repayment c) Food d) Edgars clothing account payment e) School fees f) Transport g) Entertainment h) Savings i) Car repayment $$\\text{R}\\,\\text{1 300}$$ Municipality rates j) Electricity $$\\text{R}\\,\\text{200}$$ k) Vodacom contract cost l) i. l) ii. Telkom account m) i. m) ii. Total ? ? ? ? Surplus or deficit? ?\n\nComplete the above budget of the family by calculating the following:\n\nMr Philander's income: He works $$\\text{20}$$ days per month at a rate of $$\\text{R}\\,\\text{500}$$ per day.\n\n$$\\text{20}$$ $$\\times$$ $$\\text{R}\\,\\text{500}$$ = $$\\text{R}\\,\\text{10 000}$$\n\nAdditional income: Mr Philander owns additional property which he hires out to people at a fixed charge of $$\\text{R}\\,\\text{2 500}$$ per month.\n\n$$\\text{R}\\,\\text{2 500}$$\n\nThe monthly bond repayments are fixed at $$\\text{R}\\,\\text{5 550}$$ per month.\n\n$$\\text{R}\\,\\text{5 500}$$\n\nThe average amount spent on food each month comes to $$\\text{R}\\,\\text{2 500}$$. Mrs Philander believes that this should be increased by $$\\text{10}\\%$$ due to recent food price increases.",
null,
"$$\\text{R}\\,\\text{2 500}$$ + $$\\text{R}\\,\\text{250}$$ = $$\\text{R}\\,\\text{2 750}$$\n\nMr Philander pays Edgars an amount of $$\\text{R}\\,\\text{800}$$ per month,. However, since he bought his children their school uniforms on account, he estimates that this amount will increase by a further $$\\text{12}\\%$$.\n\n$$\\text{R}\\,\\text{800}$$ + $$\\text{R}\\,\\text{96}$$ = $$\\text{R}\\,\\text{896}$$\n\nThe school fees are $$\\text{R}\\,\\text{1 200}$$ per child per month.\n\n$$\\text{R}\\,\\text{1 200}$$ $$\\times$$ $$\\text{2}$$ = $$\\text{R}\\,\\text{2 400}$$\n\nTransport costs are as follows: For the children: taxi fare per child = $$\\text{R}\\,\\text{5,00}$$ per trip to school and another $$\\text{R}\\,\\text{5,00}$$ each for the trip home. There are $$\\text{20}$$ school days in a month. Mr Philander first drives his wife to work and then goes to work himself. In the evenings he would pick her up and then drive home again. They both work $$\\text{20}$$ days per month. Mr Philander has noticed that his car uses an average of $$\\text{4}$$ litres of petrol per day each time he does this. On the other $$\\text{10}$$ days of the month, his car uses an average of $$\\text{3}$$ litres per day. The cost of petrol is $$\\text{R}\\,\\text{10,50}$$ per litre. Calculate the total amount that should be budgeted for transport.",
null,
"Taxi fare: $$\\text{R}\\,\\text{10}$$ per day $$\\times$$ $$\\text{2}$$ children $$\\times$$ $$\\text{20}$$ days = $$\\text{R}\\,\\text{400}$$. Petrol: ($$\\text{20}$$ $$\\times$$ $$\\text{4}$$ litres $$\\times$$ $$\\text{R}\\,\\text{10,50}$$) + ($$\\text{10}$$ $$\\times$$ $$\\text{3}$$ litres $$\\times$$ $$\\text{R}\\,\\text{10,50}$$) = $$\\text{R}\\,\\text{840}$$ + $$\\text{R}\\,\\text{315}$$ = $$\\text{R}\\,\\text{1 155}$$\n\nThe amount budgeted for entertainment is estimated at $$\\text{5}\\%$$ of the combined income of Mr and Mrs Philander.\n\nTotal salaries = $$\\text{R}\\,\\text{19 500}$$. $$\\text{5}\\%$$ of this is $$\\text{R}\\,\\text{975}$$.\n\nSavings are currently $$\\text{5}\\%$$ of Mrs Philander's income.\n\n$$\\text{5}\\%$$ of $$\\text{R}\\,\\text{9 500}$$ = $$\\text{R}\\,\\text{475}$$.\n\nThe amount budgeted for municipal rates is $$\\text{5}\\%$$ of the total income earned by the Philander household.\n\nTotal income = salaries + additional income = $$\\text{R}\\,\\text{19 500}$$ +$$\\text{R}\\,\\text{2 500}$$ = $$\\text{R}\\,\\text{22 000}$$. $$\\text{5}\\%$$ of this is $$\\text{R}\\,\\text{1 100}$$.\n\nThe fixed component of the electricity account is currently $$\\text{R}\\,\\text{200}$$ per month. The variable component is calculated as follows:The average amount of electricity consumed by the Philander household is $$\\text{550}$$ kilowatt hours per month at a rate of $$\\text{R}\\,\\text{0,50}$$ per kilowatt hour.\n\n($$\\text{550}$$ $$\\times$$ $$\\text{R}\\,\\text{0,50}$$) =$$\\text{R}\\,\\text{275}$$\n\nVodacom contract cell phone account:\n\nFixed component: $$\\text{R}\\,\\text{135}$$ per month\n\n$$\\text{R}\\,\\text{135}$$\n\nVariable component: $$\\text{R}\\,\\text{0,80}$$ per minute of airtime used during peak time. An average of $$\\text{100}$$ minutes of airtime per month is used during peak time. Off peak minutes are charged at a rate of $$\\text{R}\\,\\text{0,40}$$ per minute. An average of $$\\text{200}$$ minutes per month is used during this time.\n\n($$\\text{100}$$ $$\\times$$ $$\\text{R}\\,\\text{0,80}$$) + ($$\\text{200}$$ $$\\times$$ $$\\text{R}\\,\\text{0,40}$$) = $$\\text{R}\\,\\text{160}$$\n\nTelkom account:\n\nFixed component is $$\\text{R}\\,\\text{400}$$ per month.\n\n$$\\text{R}\\,\\text{400}$$\n\nVariable component: $$\\text{R}\\,\\text{0,50}$$ per minute during normal time. An average of $$\\text{350}$$ minutes is spent each month on the phone during this time. Call more time is calculated at $$\\text{R}\\,\\text{7}$$ per night. The children use the phone an average of $$\\text{20}$$ nights per month during this time.",
null,
"($$\\text{350}$$ $$\\times$$ $$\\text{R}\\,\\text{0,50}$$) + ($$\\text{R}\\,\\text{7}$$ $$\\times$$ $$\\text{20}$$) = $$\\text{R}\\,\\text{315}$$.",
null,
"The total for fixed expenses is $$\\text{R}\\,\\text{12 456}$$. The total for variable expenses is $$\\text{R}\\,\\text{5 630}$$. So the total for all expenses is $$\\text{R}\\,\\text{18 086}$$. The total income for the household is $$\\text{R}\\,\\text{22 000}$$, so yes - they are within budget, because their income is greater than their total expenditure and they have a surplus of money."
]
| [
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0025.jpg",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0026.jpg",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0027.jpg",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0028.jpg",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0029.jpg",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0030.png",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0031.png",
null,
"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/images/08-personal-income-expenditure-budget/gd-0032.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9168333,"math_prob":1.0000029,"size":3826,"snap":"2020-45-2020-50","text_gpt3_token_len":1260,"char_repetition_ratio":0.2974882,"word_repetition_ratio":0.016129032,"special_character_ratio":0.4022478,"punctuation_ratio":0.13189772,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999715,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T07:22:54Z\",\"WARC-Record-ID\":\"<urn:uuid:f3ef573e-4f89-4f1f-97a9-6efe3589f023>\",\"Content-Length\":\"36144\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44f21d61-a784-4ebd-80ff-4fd0737a059e>\",\"WARC-Concurrent-To\":\"<urn:uuid:f40df18f-de3b-4757-a6be-49d1d093f7ad>\",\"WARC-IP-Address\":\"197.221.50.110\",\"WARC-Target-URI\":\"https://intl.siyavula.com/read/maths/grade-10-mathematical-literacy/personal-income-expenditure-and-budget/08-personal-income-expenditure-and-budget-06\",\"WARC-Payload-Digest\":\"sha1:QWAV4NZKWLADBRBO7V3FML353WMYF7DN\",\"WARC-Block-Digest\":\"sha1:OOGIQPTMH4E7PS2CAXQ6HQJKISN4KQWU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141171126.6_warc_CC-MAIN-20201124053841-20201124083841-00482.warc.gz\"}"} |
https://www.statisticsviews.com/details/book/10666587/Applied-Probabilistic-Calculus-for-Financial-Engineering-An-Introduction-Using-R.html | [
"# Applied Probabilistic Calculus for Financial Engineering: An Introduction Using R\n\n## Books",
null,
"Illustrates how R may be used successfully to solve problems in quantitative finance\n\nApplied Probabilistic Calculus for Financial Engineering: An Introduction Using R provides R recipes for asset allocation and portfolio optimization problems. It begins by introducing all the necessary probabilistic and statistical foundations, before moving on to topics related to asset allocation and portfolio optimization with R codes illustrated for various examples. This clear and concise book covers financial engineering, using R in data analysis, and univariate, bivariate, and multivariate data analysis. It examines probabilistic calculus for modeling financial engineering—walking the reader through building an effective financial model from the Geometric Brownian Motion (GBM) Model via probabilistic calculus, while also covering Ito Calculus. Classical mathematical models in financial engineering and modern portfolio theory are discussed—along with the Two Mutual Fund Theorem and The Sharpe Ratio. The book also looks at R as a calculator and using R in data analysis in financial engineering. Additionally, it covers asset allocation using R, financial risk modeling and portfolio optimization using R, global and local optimal values, locating functional maxima and minima, and portfolio optimization by performance analytics in CRAN.\n\n• Covers optimization methodologies in probabilistic calculus for financial engineering\n• Answers the question: What does a \"Random Walk\" Financial Theory look like?\n• Covers the GBM Model and the Random Walk Model\n• Examines modern theories of portfolio optimization, including The Markowitz Model of Modern Portfolio Theory (MPT), The Black-Litterman Model, and The Black-Scholes Option Pricing Model\n\nApplied Probabilistic Calculus for Financial Engineering: An Introduction Using R s an ideal reference for professionals and students in economics, econometrics, and finance, as well as for financial investment quants and financial engineers.\n\nPreface\n\nDedication\n\nChapter 1: Introduction to Financial Engineering\n\n1 Introduction to Financial Engineering\n\n1.1 What is Financial Engineering?\n\n1.2 The Meaning of the Title of this Book\n\n1.3 The Continuing Challenge in Financial Engineering\n\n1.4 “Financial Engineering 101”: Modern Portfolio Theory\n\n1.5 Asset Class Assumptions Modeling\n\n1.6 Typical Examples of Proprietary Investment Funds\n\n1.7 The Dow Jones Industrial Average (DJIA) and Inflation\n\n1.8 Some Less Commendable Stock Investment Approaches\n\n1.9 Developing Tools for Financial Engineering Analysis Solutions to Exercises in Chapter 1:\n\nChapter 2: Probabilistic Calculus for Modeling Financial Engineering\n\n2.1 Introduction to Financial Engineering\n\n2.2 Mathematical Modeling in Financial Engineering\n\n2.3 Building an Effective Financial Model from GBM via Probabilistic Calculus\n\n2.4 A Continuous Financial Model Using Probabilistic Calculus (Stochastic Calculus, Ito Calculus)\n\n2.5 Numerical Examples of Representation of Financial Data Using R\n\nChapter 3: Classical Mathematical Models in Financial Engineering and Modern Portfolio Theory\n\n3.0 An Introduction to the Cost of Money in the Financial Market\n\n3.1 Modern Theories of Portfolio Optimization\n\n3.2 The Black-Litterman Model\n\n3.3 The Black-Scholes Option Pricing Model\n\nChapter 4: Data Analysis Using R Programming\n\n4.1 Data and Processing\n\n4.2 Beginning R\n\n4.3 R as a Calculator\n\n4.4 Using R in Data Analysis in Financial Engineering\n\n4.5 Univariate, 
Bivariate, and Multivariate Data Analysis\n\nAppendix 1: Documentation for the plot function\n\nSpecial References for Chapter 4\n\nChapter 5: Assets Allocation Using R\n\n5.1 Risk Aversion and the Assets Allocation Process\n\n5.2 Classical Assets Allocation Approaches\n\n5.3 Allocation with Time Varying Risk Aversion\n\n5.4 Variable Risk Preference Bias\n\n5.5 A Unified Approach for Time Varying Risk Aversion\n\n5.6 Assets Allocation Worked Examples\n\nChapter 6: Financial Risk Modeling and Portfolio Optimization Using R\n\n6.1 Introduction to the Optimization Process\n\n6.2 Optimization Methodologies in Probabilistic Calculus for Financial Engineering\n\n6.3 Financial Risk Modeling and Portfolio Optimization\n\nReferences\n\nIndex\n\n## Books & Journals\n\n### Books",
null,
"#### Common Errors in Statistics (and How to Avoid Them), 4th Edition",
null,
"#### Practitioner's Guide to Using Research for Evidence-Based Practice, 2nd Edition",
null,
"View all\n\n### Journals",
null,
"#### Significance",
null,
"#### Statistica Neerlandica",
null,
"View all"
]
| [
null,
"https://media.wiley.com/product_data/coverImage/12/11193876/1119387612.jpg",
null,
"https://media.wiley.com/product_data/coverImage/94/11182943/1118294394.jpg",
null,
"https://media.wiley.com/product_data/coverImage/13/11181367/1118136713.jpg",
null,
"https://media.wiley.com/product_data/coverImage/16/11183153/1118315316.jpg",
null,
"https://www.statisticsviews.com/common/images/thumbnails/small/1389ad71abd.gif",
null,
"https://www.statisticsviews.com/common/images/thumbnails/small//1389e6d7a6f.gif",
null,
"https://onlinelibrary.wiley.com/cms/asset/3cf2f255-8b56-4c3d-8b27-0385d1c56808/sta4.v9.1.cover.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7941225,"math_prob":0.68917817,"size":4354,"snap":"2020-34-2020-40","text_gpt3_token_len":904,"char_repetition_ratio":0.1878161,"word_repetition_ratio":0.04276316,"special_character_ratio":0.18787321,"punctuation_ratio":0.11218837,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99149215,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-06T01:36:45Z\",\"WARC-Record-ID\":\"<urn:uuid:6b8e2b5d-ce7e-4087-8421-7ccb3816e843>\",\"Content-Length\":\"50730\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77daef43-1336-4b9e-bdc8-6f1fbac3709d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac2cc4aa-c901-4f87-bdfe-c690f2abc5c0>\",\"WARC-IP-Address\":\"54.165.139.157\",\"WARC-Target-URI\":\"https://www.statisticsviews.com/details/book/10666587/Applied-Probabilistic-Calculus-for-Financial-Engineering-An-Introduction-Using-R.html\",\"WARC-Payload-Digest\":\"sha1:U3NJOQW6ISNV7IDKJN3TYSQAP6NPC7DL\",\"WARC-Block-Digest\":\"sha1:QBYAQI2TFIDOXC4CCHSSFWUS4FQHBX2D\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735990.92_warc_CC-MAIN-20200806001745-20200806031745-00125.warc.gz\"}"} |
http://www.apphysicsresources.com/2012/11/ap-physics-c-multiple-choice-practice_16.html | [
"## Pages\n\n`“Life is like riding a bicycle. To keep your balance you must keep moving.”–Albert Einstein`\n\n## Friday, November 16, 2012\n\n### AP Physics C - Multiple Choice Practice Questions on One Dimensional Motion\n\n“Maturity is often more absurd than youth and very frequently is most unjust to youth.”\nThomas A. Edison\n\nToday’s post covers a few multiple choice practice questions related to one dimensional kinematics meant for AP Physics C aspirants.\n(1) Water drops from a leaking overhead tank falls on the ground 5 m below, at regular intervals, the 11th drop just beginning to fall when the first drop strikes the ground. What will be the height of the 9th drop when the first drop strikes the ground? (Acceleration due to gravity = 10 ms–2)\n(a) 4.9 m\n(b) 4.8 m\n(c) 4.6 m\n(d) 4.4 m\n(e) 4.2 m\n\nThe time t taken by a drop to fall through 5 m is given by the relevant equation of motion,\n5 = 0 + ½ gt2\nSince g = 10 ms–2 the above equation gives t = 1 second\nWhen the first drop strikes the ground after falling freely for one second, the 9th drop has fallen freely for one-fifth of one second, as is evident from the adjoining figure (remembering that the drops fall at regular intervals of time). Therefore, the 9th drop has fallen through a distance s given by\ns = 0 + ½ g(0.2)2 = (½)×10×0.04 = 0.2 m\nTherefore the height of the 9th drop when the first drop strikes the ground is 5 m – 0.2 m = 4.8 m.\n(2) An object projected vertically upwards with initial velocity u attains maximum height in 5 s. The ratio of the distance traveled by the object in the 1st second and the 6th second is (Acceleration due to gravity = 10 ms–2)\n(a) 6 : 1\n(b) 8 : 1\n(c) 9 : 1\n(d) 10: 1\n(e) 11 : 1\nThe velocity of projection (u) of the object is given by the equation of uniformly accelerated linear motion,\n0 = u gt\n[We have used the equation, v = u + at with usual notations].\nSubstituting for g and t we have\n0 = u – 10×5\nTherefore u = 50 ms–1\nThe distance s1 traveled by the object in the 1st second is given by\ns1 = (50×1) – (½ ×10×12) = 45 m\n[We have used the equation, s = ut + ½ at2 with usual notations]\nAt the end of 5 seconds the object is at the highest point of its trajectory where its velocity is zero. Therefore, the distance s6 traveled by the object during the next one second (6th second) is given by\ns6 = 0 + ½ ×10×12 = 5 m.\n[We have used the equation of motion, s = ut + ½ gt2 with usual notations]\nThe ratio of the distance traveled by the object in the 1st second and the 6th second is\ns1 : s6 = 45 : 5 = 9 : 1\n(3) A ball projected vertically upwards with initial velocity u reaches maximum height h in t sec. What is the total time taken by the ball (from the instant of projection) to reach a height h/4 while returning?\n(a) 1.75t\n(b) 1.65t\n(c) 1.5t\n(d) 1.4t\n(e) 1.3t\nFrom the equation of motion, v2 = u2 + 2as with usual notations, we have for the upward journey\n0 = u2 2gh\nTherefore h = u2/2g\nFrom the equation of motion, v = u + at with usual notations, we have for the upward journey\n0 = u gt so that u = gt\nSubstituting for u in the expression for h we have\nh = gt2/2 …………. 
We now use the equation of motion, s = ut + ½gt², for the free fall of the ball from the maximum height h to the height h/4.
Since the distance of fall is 3h/4, we have
3h/4 = 0 + ½gt₁², where t₁ is the time of fall from the maximum height h to the height h/4.
Substituting for h from equation (i) we have
gt²/8 = gt₁²/2
This gives t₁ = t/2.
The total time taken by the ball (from the instant of projection) to reach a height h/4 while returning is t + t₁ = t + (t/2) = 3t/2 = 1.5t.

(4) The velocity-time graph of an object moving along the x-direction is shown in the figure. What is the displacement of the object when it moves with the maximum acceleration?
(a) 12 m
(b) 8 m
(c) 6 m
(d) 4.8 m
(e) 4.2 m

The object has maximum acceleration from 10 sec to 12 sec since the slope of the velocity-time graph is maximum during this interval. The displacement is given by the area under the velocity-time graph for the interval from 10 sec to 12 sec, which is 8 m.
[The displacement of the object when it moves with the maximum acceleration can be calculated using the equation s = ut + ½at² as well:
As is evident from the velocity-time graph, the maximum acceleration a is given by
a = change of velocity/time = (5 – 3)/(12 – 10) = 1 m/s²
Since the initial velocity u = 3 m/s and the time interval t = 2 s, we have
s = (3×2) + (½)×1×2² = 8 m].

(5) A particle moving along the x-axis is initially at the origin with velocity 2 m/s. If the acceleration a of the particle is given by a = 6t, the position of the particle after 4 seconds is
(a) 24 m
(b) 48 m
(c) 72 m
(d) 96 m
(e) 120 m

The velocity v of the particle is given by
v = ∫a dt = ∫6t dt = 3t² + C, where C is the constant of integration which we can find from the initial condition.
Initially (at time t = 0) the particle has velocity 2 m/s. Therefore from the above expression for velocity we have
C = 2 m/s
Thus the expression for velocity becomes
v = 3t² + 2
The position x of the particle is given by
x = ∫v dt = ∫(3t² + 2)dt = t³ + 2t + C′, where C′ is the constant of integration in this case.
Initially (at t = 0), since the particle is at the origin (where x = 0), we obtain (from the above expression for x)
C′ = 0
Therefore, the expression for the position x becomes
x = t³ + 2t
The position x′ at t = 4 seconds is given by
x′ = 4³ + (2×4) = 72 m

You can find a few more useful questions (with solution) in this section here."
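A quick numerical cross-check of question (5) (an illustrative sketch added here, not part of the original post), using symbolic integration of the given acceleration:

```python
import sympy as sp

t = sp.symbols('t')
v = sp.integrate(6 * t, t) + 2      # v(t) = 3t^2 + 2, fixing the constant from v(0) = 2 m/s
x = sp.integrate(v, t)              # x(t) = t^3 + 2t, fixing the constant from x(0) = 0
print(v.subs(t, 4), x.subs(t, 4))   # 50, 72  -> the position after 4 s is 72 m, option (c)
```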
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87398094,"math_prob":0.9992231,"size":5307,"snap":"2019-51-2020-05","text_gpt3_token_len":1696,"char_repetition_ratio":0.13011503,"word_repetition_ratio":0.15884477,"special_character_ratio":0.31976634,"punctuation_ratio":0.065915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99919516,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T19:03:10Z\",\"WARC-Record-ID\":\"<urn:uuid:23e5b9e3-6cbc-493b-92ea-9f527d25874e>\",\"Content-Length\":\"131937\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d6b75c3-a3fc-4c01-9b5d-228e9e9a70be>\",\"WARC-Concurrent-To\":\"<urn:uuid:e31fee7a-daaa-4ca9-8722-c6bb1bee0745>\",\"WARC-IP-Address\":\"172.217.9.211\",\"WARC-Target-URI\":\"http://www.apphysicsresources.com/2012/11/ap-physics-c-multiple-choice-practice_16.html\",\"WARC-Payload-Digest\":\"sha1:HRICGOXBRZONVAMANBG2KYWYQJGNSJBL\",\"WARC-Block-Digest\":\"sha1:HBM2WN2MF7NJ7J65RBR534Q7XLXXJEDG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541309137.92_warc_CC-MAIN-20191215173718-20191215201718-00279.warc.gz\"}"} |
https://tex.stackexchange.com/questions/212831/curly-brackets-spanning-multiple-lines-no-math-env | [
"Curly brackets spanning multiple lines (no math env)\n\nFor language learning purposes I would like to use curly brackets with an selection of objects (multiple lines of text, no math environment like this).",
null,
"But the code\n\n\\documentclass[12pt]{extarticle}\n\\usepackage[a4paper,verbose]{geometry}\n\\usepackage{fontspec}\n\\usepackage{empheq}\n\n\\setmainfont[Ligatures=TeX]{Gentium Plus}\n\n\\begin{document}\n\n\\begin{empheq}[left=\\empheqlbrace,right=\\empheqrbrace]{align*}\ntomatoes \\\\\nonions \\\\\ncucumbers\n\\end{empheq}\nin the market.\n\n\\end{document}\n\nproduces",
null,
"In addition the words within the brackets should be left aligned.\n\n• To expand on David's answer: you need an alignment environment that aligns to the left (align* alternates between right and left alignment). You also need something to switch to text mode, like \\text{tomatoes}. David used a tabular environment to perform both of these functions.\n– Dan\nNov 18 '14 at 20:57\n• Thank you for this clarification, Dan. I use a math environment but within that I switch to a text environment. Nov 19 '14 at 14:46",
null,
"$\\left\\{ \\begin{tabular}{@{}l@{}} tomatoes \\\\ onions \\\\ cucumbers \\end{tabular} \\right\\}$"
]
| [
null,
"https://i.stack.imgur.com/1mgIK.png",
null,
"https://i.stack.imgur.com/giIU1.png",
null,
"https://i.stack.imgur.com/d6vPQ.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6473543,"math_prob":0.650522,"size":755,"snap":"2022-05-2022-21","text_gpt3_token_len":200,"char_repetition_ratio":0.10386152,"word_repetition_ratio":0.0,"special_character_ratio":0.20662251,"punctuation_ratio":0.084745765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96056443,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T06:39:54Z\",\"WARC-Record-ID\":\"<urn:uuid:f0c7e97f-bdc9-45f4-bff7-b987c92fa62e>\",\"Content-Length\":\"137348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cdfade81-0609-4e67-96f9-2573028b7e66>\",\"WARC-Concurrent-To\":\"<urn:uuid:c88ee465-c07e-4639-9fa4-db6181a38aab>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/212831/curly-brackets-spanning-multiple-lines-no-math-env\",\"WARC-Payload-Digest\":\"sha1:AFDTBWEJCYM6NO6GP4FTB6AM4MHOH6QT\",\"WARC-Block-Digest\":\"sha1:G7X4KOUBKSULUHMDYZ4KUDJJNHBAJVQP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304915.53_warc_CC-MAIN-20220126041016-20220126071016-00232.warc.gz\"}"} |
https://grahapada.com/geometry-worksheet-congruent-triangles/ | [
"# 21 Lovely Geometry Worksheet Congruent Triangles Graphics\n\nPosted on\n\nprintable congruent triangles worksheet new free printable congruent printable worksheet congruent triangles valid 18 new triangle printable congruent triangles worksheet valid free printable printable congruent triangles worksheet new worksheet ideas 20 printable worksheet congruent triangles inspirationa free collection of math worksheets congruent angles printable worksheet congruent triangles valid free printable free printable congruent triangles worksheets best geometry worksheet ideas 20 geometry worksheet congruent triangles printable congruent triangles worksheet save free printable",
null,
"Printable Congruent Triangles Worksheet New Free Printable Congruent from geometry worksheet congruent triangles , source:portaldefe.co",
null,
"Printable Worksheet Congruent Triangles Valid 18 New Triangle from geometry worksheet congruent triangles , source:portaldefe.co",
null,
"Printable Congruent Triangles Worksheet Valid Free Printable from geometry worksheet congruent triangles , source:portaldefe.co",
null,
"Printable Congruent Triangles Worksheet New Worksheet Ideas 20 from geometry worksheet congruent triangles , source:portaldefe.co",
null,
"Printable Worksheet Congruent Triangles Inspirationa Free from geometry worksheet congruent triangles , source:portaldefe.co\n\ncongruent triangles worksheet grade 5 myscres triangle area worksheet with answers best angle relationships puzzle triangle area worksheet with answers valid area triangle printable printable congruent triangles worksheet valid free printable printable worksheet congruent triangles fresh worksheet congruent printable worksheet congruent triangles best congruence printable congruent triangles worksheet refrence free printable worksheet ideas 20 geometry worksheet congruent triangles free printable congruent triangles worksheets refrence collection free printable congruent triangles worksheets best 17 awesome\n\nfree printable congruent triangles worksheets valid worksheets free printable congruent triangles worksheets fresh geometry free printable congruent triangles worksheets best triangle pdf triangle area worksheet with answers save gcse maths geometry free printable congruent triangles worksheets best triangle collection of congruent triangles worksheet grade 5 triangle area worksheet with answers refrence mass volume density congruent angles worksheet gallery worksheet for kids maths printing my geometry students loved this this classifying triangles card collection of math worksheets congruent angles"
]
| [
null,
"https://grahapada.com/wp-content/uploads/2018/09/geometry-worksheet-congruent-triangles-elegant-printable-congruent-triangles-worksheet-new-free-printable-congruent-of-geometry-worksheet-congruent-triangles.jpg",
null,
"https://grahapada.com/wp-content/uploads/2018/09/geometry-worksheet-congruent-triangles-unique-printable-worksheet-congruent-triangles-valid-18-new-triangle-of-geometry-worksheet-congruent-triangles.jpg",
null,
"https://grahapada.com/wp-content/uploads/2018/09/geometry-worksheet-congruent-triangles-inspirational-printable-congruent-triangles-worksheet-valid-free-printable-of-geometry-worksheet-congruent-triangles.jpg",
null,
"https://grahapada.com/wp-content/uploads/2018/09/geometry-worksheet-congruent-triangles-awesome-printable-congruent-triangles-worksheet-new-worksheet-ideas-20-of-geometry-worksheet-congruent-triangles.jpg",
null,
"https://grahapada.com/wp-content/uploads/2018/09/geometry-worksheet-congruent-triangles-beautiful-printable-worksheet-congruent-triangles-inspirationa-free-of-geometry-worksheet-congruent-triangles.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6028262,"math_prob":0.7199558,"size":2469,"snap":"2019-26-2019-30","text_gpt3_token_len":402,"char_repetition_ratio":0.36308315,"word_repetition_ratio":0.27586207,"special_character_ratio":0.13122721,"punctuation_ratio":0.0477707,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9955813,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-23T00:43:05Z\",\"WARC-Record-ID\":\"<urn:uuid:0c308a8a-8289-4f15-8d0e-0aee40761ee6>\",\"Content-Length\":\"64854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa0b764b-8dcb-41c3-bdd7-d6261b574adb>\",\"WARC-Concurrent-To\":\"<urn:uuid:0237db47-10a5-457b-bab0-33f706e11811>\",\"WARC-IP-Address\":\"104.27.144.73\",\"WARC-Target-URI\":\"https://grahapada.com/geometry-worksheet-congruent-triangles/\",\"WARC-Payload-Digest\":\"sha1:VE7AP2ICT2HKVK46WPK7T5SYEOYRFVGZ\",\"WARC-Block-Digest\":\"sha1:IRXWQOZGEJGCVJYBDC7O4BNM6FUROCFF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528635.94_warc_CC-MAIN-20190723002417-20190723024417-00185.warc.gz\"}"} |
https://ask.pinoybix.org/48/find-the-volume-generated-rotating-circle-6x-about-the-axis | [
"47 views\nFind the volume (in cubic units) generated by rotating a circle x^2 + y^2 + 6x + 4y + 12 = 0 about the y-axis.\nin Math | 47 views"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7245483,"math_prob":0.99789745,"size":2345,"snap":"2019-51-2020-05","text_gpt3_token_len":813,"char_repetition_ratio":0.13669372,"word_repetition_ratio":0.305618,"special_character_ratio":0.29381663,"punctuation_ratio":0.05957447,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.960222,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-05T18:05:43Z\",\"WARC-Record-ID\":\"<urn:uuid:ec0b8589-7946-4aa9-a71a-8b3c1afe4dc0>\",\"Content-Length\":\"47948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:acb42abd-3900-4457-aa68-4e090ff6aa6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c8c4bd6-7cbd-4dd0-ad53-24a9e9a50e45>\",\"WARC-IP-Address\":\"104.24.99.66\",\"WARC-Target-URI\":\"https://ask.pinoybix.org/48/find-the-volume-generated-rotating-circle-6x-about-the-axis\",\"WARC-Payload-Digest\":\"sha1:P6UIHMRLM34G26QZWEFJDJOBO7MYMSDW\",\"WARC-Block-Digest\":\"sha1:Q22GVGXHMXBCYAYTWVBT7WRQO5DS732U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540481281.1_warc_CC-MAIN-20191205164243-20191205192243-00342.warc.gz\"}"} |
http://20bits.com/article/interview-questions-shuffling-an-array | [
"## Interview Questions: Shuffling an Array\n\nby Jesse Farmer on Tuesday, April 15, 2008\n\nThis is part of my interview question series. It's about shuffling arrays.\n\n### The Question\n\nYou have an array A of size N. Write a routine that shuffles the array in-place. The only restrictions are that all possible permutations of A must be possible and equally likely.\n\nThis interview question serves as a test for basic algorithm construction. There's a canonical solution that's not too difficult to arrive at if you've never seen it before, so it's a good combination of \"what do you know?\" and \"what can you do?\"\n\n### Workin' it out\n\nI'm going to create my solution in Ruby because that's the language the company that asked me this question used.\n\nThe first solution most people arrive at is subtly wrong. Jeff Atwood made the mistake in his blog post. The algorithm, in words, goes like this: iterate through each item in the array, pick another element at random, and swap the two.\n\nIn Ruby the above algorithm would look like this.\n\n```class Array\ndef shuffle_naive!\nn = size\nuntil n == 0\nk = rand(size) #This is the line which proves our undoing\nn = n -1\nself[n], self[k] = self[k], self[n]\nend\nend\nend```\n\nThis solution seems correct if not optimal, but there's a subtle problem: not all outcomes are equally likely.\n\nThe root cause of this is because this algorithm is drawing from a sample space of size NN, while the sample space of all permutations on an N-element array is only N!.\n\nThat is, for the naive shuffle, for each of the N steps in the iteration we make one of N decisions for a total of NN possible outcomes.\n\nBut NN > N! for all N > 1 and, more importantly, N! is not a divisor of NN. This means we're going to prefer at least one of the permutations more than the others, so the algorithm doesn't select among the possible permutations uniformly.\n\n### KFC, KFY\n\nThe \"best\" solution is the Knuth-Fischer-Yates shuffle. Here it is in Ruby\n\n```class Array\ndef shuffle!\nn = size\nuntil n == 0\nk = rand(n) #You can see I'm doing rand(n) rather than rand(size)\nn = n - 1\nself[n], self[k] = self[k], self[n]\nend\nself\nend\nend```\n\nThis works because it's an iterative version of an essentially recursive algorithm. If we know how to shuffle an array of size N-1 then shuffling an array of size N is easy — first shuffle the sub-array consisting of the first N-1 elements and then randomly swap in the last element to any of the N slots.\n\nThere's a proper inductive proof in there if you're so inclined, but it's not particularly illuminating."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9306168,"math_prob":0.9421683,"size":3001,"snap":"2020-24-2020-29","text_gpt3_token_len":682,"char_repetition_ratio":0.10643978,"word_repetition_ratio":0.026217228,"special_character_ratio":0.22659114,"punctuation_ratio":0.09800664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98219657,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T11:02:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f49f719d-5eee-406f-9a89-ebd6e02142fa>\",\"Content-Length\":\"7558\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:749c0f0c-388e-41f8-b972-a0e71797ab97>\",\"WARC-Concurrent-To\":\"<urn:uuid:aebb382b-45bf-49cf-958f-c208200c5111>\",\"WARC-IP-Address\":\"104.26.4.147\",\"WARC-Target-URI\":\"http://20bits.com/article/interview-questions-shuffling-an-array\",\"WARC-Payload-Digest\":\"sha1:MYT6US7CEWL4TVQO7ENTZGDHDTBU4IR7\",\"WARC-Block-Digest\":\"sha1:FVDLPNL3YJV73EOLHXI7RZCJ7VM4BOZ4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655887319.41_warc_CC-MAIN-20200705090648-20200705120648-00060.warc.gz\"}"} |
https://physics.stackexchange.com/questions/112165/would-adding-water-after-heating-decrease-the-overall-heat-when-compared-to-add | [
"Would adding water after heating decrease the overall heat, when compared to adding the water before heating for the same period of time?\n\nI was rather cold last night, and warmed up a cup of water to drink in the microwave. I put it in for 60 seconds, but it came out boiling hot, so I put a bit of cold water in with the hot water, and it was warm for me to drink.\n\nSo then it came to me - what if I had put that extra water in before I put it in the microwave? If I still heated it for 60 seconds, would the extra surface area of the extra water cause it to heat faster, or would the speed remain the same, due to the averaging of heat?\n\nThanks.\n\n• The real question is - People warm their cups of water in a microwave? People drink warm water? As for your question, I believe the 60 seconds in the microwave after you add more water will make the temperature change lower. $Q=mc\\Delta T$. Same time period means the heat, $Q$, is the same (need confirmation about that), $c$ is the specific heat (constant) and $m$ is what you're changing. We see that, under these conditions, $\\Delta T\\; \\alpha\\; \\frac1{m}$, so more mass means less temperature change. – Shahar May 11 '14 at 1:25\n\nIt's safe to approximate that all of the microwave's energy is deposited in the water. To some extent you will also heat up the mug and the walls of the microwave, but we will neglect that in favor of a bigger effect.\n\nIf the water reached 100 ºC in the microwave before the heating cycle ended, the heat after that was \"wasted\" by creating water vapor.\n\nSuppose what you want is water at 80 ºC. You pour some room-temperature water into a cup and microwave it for 60 s. After 40 s its temperature reaches 100 ºC; after this its temperature stops changing and vapor begins forming. At the end of the 60 s you remove the boiling water and add more room-temperature water. Because the heat of vaporization for water is quite large compared to the heat capacity for the liquid, very little of the boiling water actually left the cup as steam in the microwave. Essentially the final third of the microwave's heat is wasted.\n\nNow suppose instead you add enough water that it would take 75 seconds to boil, but only put it in the microwave for 60 s. Voilà: all of the microwave's heat usefully raises the temperature of the water.\n\nA microwave works by dielectric heating, where an oscillating high frequency electric field causes polar molecules to align in the field. Since the field alternates, they will rotate continuously thereby dispersing energy in the process macroscopically observed as temperature.\n\nThe quantity of heat $Q$ will not remain constant for different objects being heated$^{1}$. The cup prior to heating will therefore receive the same $Q$ per unit mass; thus, it will have a higher temperature in the end.\n\n$^{1}$ Not entirely sure about this, and I don't have a thermometer to test this experimentally.\n\nIt will come out cooler if you add the extra water at the end. It could be a lot-if you literally mean it came out boiling hot at the end of 60 seconds, you could have used the last 15 seconds to boil water. If you had added the extra water at the beginning, it would come out at 100C. If you add it at the end the mix will be cooler. The extra heat in the first case went into boiling water.\n\nEven if you never boil the water, the temperature at a given time will be hotter for the smaller quantity. You will then evaporate more water, losing heat in the process. 
Adding the water at the end will still result in a cooler final temperature, but probably only a small amount.\n\nIt would most likely be less warm if you add the extra water before heating. Heat is measured in calories: 1 calorie is the energy required to raise 1 cm3 of water by one degree Celsius. Adding water adds mass, and 50 cm3 of water takes more energy to warm up than 25 cm3 because it has more mass. The heat delivered doesn't change, since the heating time is fixed, so the larger mass ends up with a smaller temperature rise.\n\nFor example, say you heat half a cup of chocolate milk (25 cm3) for one minute and it takes x amount of heat to reach some temperature; a full cup (50 cm3) might take roughly 1.5x or 2x that time or heat to reach the same temperature. It simply requires more energy to bring the larger amount to the same temperature.\n\nLet's go back of the envelope here: I'm going to assume that your \"original\" volume of water is 6 oz (I think that's fairly standard for a cup of coffee/tea), but you add an \"extra\" 2 oz of cold water. (I'll refer to the amounts of these waters by the quoted titles throughout the answer to keep the language precise). Whether or not this water will be added before/after microwaving will be left to be seen momentarily. Also, let's assume a 1000 Watt microwave, to keep the math easy, and because it splits the difference nicely between the common 900 Watt and 1100 Watt models. Finally, I'll use your 60 seconds as the time interval, though I can assure you my water never gets hot that fast in the microwave.\n\nIf your 1000 Watt microwave is left on for one minute, the total work done by the microwave will be $W=P\\Delta t$. That means that over 60 seconds, your microwave does 60000 Joules of work. In both this part of the answer, as well as the other, I will assume that all of this work is transferred into the water as heat. $$Q=mC\\Delta T$$ where $Q$ is the heat, $m$ is the mass of the water, $C=4.18 J/g$ per degree Celsius is the specific heat of water, and $\\Delta T$ is the temperature change we're looking for. $$60000J=(237g)(4.18 J/g)\\Delta T$$ When we crunch numbers, we get $$\\Delta T=60.6$$ degrees Celsius. If our water started at a room temperature of 22 degrees Celsius, that gives us a final temperature of $T_f=82.6$ degrees Celsius. Ok, so perhaps we were too optimistic to think that all of the microwave's work heated the water, but at least we've got a reasonable temperature to compare against the next step.\nFor this step, let's heat only our 6 ounces of original water in the microwave. Because we only have 6 ounces of water in the microwave, the mass of the water will now only be 177 grams, but otherwise our specific heat calculation will be identical to find $\\Delta T$. $$Q=mC\\Delta T$$ again, which will look like $$60000J=(177g)(4.18J/g)\\Delta T$$ where $$\\Delta T=81.09$$ Oh shoot! Looks like when we consider the starting temperature of room temperature water, our water will begin to experience a phase change in the microwave. However, the heat of vaporization is quite large compared to the small amount of heat which would have raised our water above 100 degrees Celsius, so I will just assume that our water temperature caps at $T_f=100$ degrees Celsius without a significant loss of mass to steam. Now the fun part. We will add 2 ounces, or 59 grams of room temperature ($T=22$) water to our microwaved water. 
When the hot, original water and the room temperature, extra water are mixed, they will exchange heat with each other such that $$Q_{original} +Q_{extra}=0$$ so that $$m_{original}C\\Delta T_{original} =-m_{extra}C\\Delta T_{extra}$$ but $C$, the specific heat of water, divides out, so $$(177g)(100-T_f)=-(59g)(22-T_f)$$ and $$17700-(177g)T_f=-1298+(59g)T_f$$ $$18998=(236g)T_f$$ where we get that $T_f=80.5$ degrees Celsius. This is slightly lower than the $T_f=82.6$ degrees Celsius we found above when the extra water was added before heating."
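For reference, here is a small Python sketch (not from the original thread) that redoes the back-of-the-envelope comparison above under the same assumptions: 1000 W for 60 s, 177 g of "original" water plus 59 g of "extra" water, both starting at 22 degrees Celsius, all microwave energy absorbed by the water, and the heated water capped at boiling.

```
C = 4.18            # specific heat of water, J/(g*degree C)
Q = 1000 * 60       # energy delivered in 60 s by a 1000 W microwave, J
m_orig, m_extra, T_room = 177.0, 59.0, 22.0

# Case 1: all of the water is heated together.
T_together = T_room + Q / ((m_orig + m_extra) * C)

# Case 2: only the original water is heated (capped at boiling), then mixed.
T_hot = min(100.0, T_room + Q / (m_orig * C))
T_mixed = (m_orig * T_hot + m_extra * T_room) / (m_orig + m_extra)

print(round(T_together, 1), round(T_mixed, 1))  # ~82.8 vs ~80.5 (the answer's 82.6 rounds 8 oz to 237 g)
```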
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9439871,"math_prob":0.97799474,"size":4342,"snap":"2019-43-2019-47","text_gpt3_token_len":1086,"char_repetition_ratio":0.14707239,"word_repetition_ratio":0.0053619305,"special_character_ratio":0.2618609,"punctuation_ratio":0.09049774,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918523,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T05:41:09Z\",\"WARC-Record-ID\":\"<urn:uuid:adbec3d0-4cb2-4669-8657-6593d6fb194e>\",\"Content-Length\":\"168237\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2b688c3-1d2b-41bd-b219-cc20d9de85f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2c2bf0b-fec6-4bce-873e-718ca091e618>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/112165/would-adding-water-after-heating-decrease-the-overall-heat-when-compared-to-add\",\"WARC-Payload-Digest\":\"sha1:ZRIJ266CKGTDGQMGC522FXKK2BUPXGLE\",\"WARC-Block-Digest\":\"sha1:XF7VIOU24ZOGUSK55FE43KF5TTE7D6QZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987756350.80_warc_CC-MAIN-20191021043233-20191021070733-00299.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/1012.2897/ | [
"arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.\n\n# Harmonic Maaß-Jacobi forms of degree 1 with higher rank indices\n\nCharles Conley Department of Mathematics University of North Texas 1155 Union Circle #311430 Denton TX 76203-1430 USA and Martin Raum MR MPI für Mathematik Vivatsgasse 7 53111 Bonn, Germany\nApril 28, 2020\n###### Abstract\n\nWe define and investigate real analytic weak Jacobi forms of degree 1 and arbitrary rank. En route we calculate the Casimir operator associated to the maximal central extension of the real Jacobi group, which for rank exceeding 1 is of order 4. The notion of mixed mock modular forms is extended to Jacobi forms so as to include multivariable Appell functions in a natural way. Using the Casimir operator, we make a connection between this new notion and the notion of real analytic Jacobi forms.\n\n## 1 Introduction\n\nThe theory of holomorphic Jacobi forms was developed by Eichler and Zagier in the course of their work on the Saito-Kurokawa conjecture [EZ85]. Later Berndt and Schmidt initiated a theory of real analytic Jacobi forms [BS98], which was developed further by Pitale [Pit09]. In the real analytic case, holomorphicity is replaced by the requirement that the forms be eigenfunctions of the Casimir operator, a third order operator which generates the center of the algebra of invariant operators [BCR].\n\nBringmann and Richter studied harmonic Maaß-Jacobi forms in the sense of Pitale [BR10], but with a weak growth condition that includes the -function discovered by Zwegers. Zwegers had used this function in [Zwe02] to understand the hitherto mysterious mock modular forms discovered by Ramanujan in the early 20 century. His work has been the focus of intense interest, having applications to mock theta functions [Ono09], combinatorics [Bri08, BL09, BGM09, BZ10], and physics [MO10].\n\nZwegers has just generalized the -function to higher Jacobi forms [Zwe10], by demonstrating the modularity of the multivariable Appell functions arising from certain character formulas for Lie superalgebras [KW94, KW01, STT05]. It may be that these functions will have an impact comparable to that of the -function.\n\nMock modular forms are the holomorphic parts of harmonic Maaß forms. Together with real analytic Jacobi forms of degree 1 and rank 1, they have received considerable attention over the past decade; see for example [GZ98] and the references above. Recently Zagier defined a mixed mock modular form to be the product of a mock modular form and a holomorphic modular form [Zag09]. These developments suggest the need for a precise definition of harmonic weak Jacobi forms of higher rank, along with a “mixed mock” version of this definition capturing the essential features of mixed mock modular forms.\n\nIn the present work we generalize the notion of harmonic weak Maaß-Jacobi forms of degree 1 to arbitrary indices of higher degree, in a manner which includes the Appell functions treated in [Zwe10]. Let be the rank Jacobi group , the semidirect product action being trivial on the first factor, and let be its central extension by the additive group of real symmetric matrices. An important ingredient of our work is the center of the universal enveloping algebra of . Using ideas developed by Borho [Bor76], Quesne [Que88], and Campoamor-Stursburg and Low [CSL09], we prove in Section 5 that this center is the polynomial algebra generated by and one additional element of degree . 
We refer to as the Casimir element of , as it is in some sense a lift of the Casimir element of .\n\nGiven any action of , we refer to the operator by which acts as the Casimir operator with respect to the action. In Section 2 we give formulas for the Casimir operators with respect to the standard slash actions of , in terms of both the usual coordinates (2) and the raising and lowering operators (2.6). For , these operators are of order 4.\n\nLet be the full Jacobi group, the integer points of . The slash actions of of interest here all drop to actions of . We define a Maaß-Jacobi form to be an eigenform of the Casimir operator with respect to such an action, invariant under and satisfying a certain growth condition.\n\nIn Section 3 we build a theory of mixed mock Jacobi forms by imposing conditions arising from a family of Laplace operators. This theory allows for specialization to torsion points in a manner compatible with the notion of harmonic weak Maaß forms in the classical setting [BF04]. We connect mixed mock Jacobi forms with harmonic Maaß-Jacobi forms, and we show that the space of all mixed mock Jacobi forms is closed under multiplication by holomorphic Jacobi forms and, as mentioned, contains the Appell functions appearing in [Zwe10]. We also decide the question of the extent to which these functions are typical examples.\n\nIn Section 4 we investigate a distinguished subspace of the space of Maaß-Jacobi forms, the space of semi-holomorphic forms. We show that in the higher rank case it is connected to the space of skew-holomorphic Jacobi forms: we define a -operator (4.1) which maps any harmonic Maaß-Jacobi form to the derivation of its non-holomorphic part, a skew-holomorphic Jacobi form in the sense of [Sko90, Hay06]. In Corollary 13 we show that all possible cuspidal non-holomorphic parts occur.\n\nThe Zagier-type dualities proved in Corollary 9 demonstrate the arithmetic relevance of our more general construction. As Bringmann and Richter remark in the rank 1 case [BR10], this relates holomorphic parts not only to one another, but also to non-holomorphic parts.\n\nThe paper concludes with Section 5, in which we use the algorithm developed by Helgason [Hel77] to deduce the invariant and covariant differential operators presented in Section 2.\n\n### Acknowledgements\n\nThe authors thank Olav Richter and Don Zagier for inspiring discussions. The second author holds a scholarship from the Max-Planck Society of Germany, and he expresses his gratitude to the CRM in Bellaterra, Spain, where great parts of his research were carried out during the Advanced Course on Modularity.\n\n## 2 Maaß-Jacobi forms with lattice indices\n\nWe first fix some notation. All vector spaces are complex unless we indicate otherwise. Let denote the space of matrices over a ring , abbreviate as , and let be the symmetric subspace of . Write and for the transpose and (when is square) trace of a matrix , respectively. Regarding elements of as column vectors, we will freely identify with via .\n\nWrite for the standard basis vector of and for the elementary matrix with entry and other entries , the sizes of and being determined by context. Let be the identity matrix in , and set\n\n J2n:=(0−InIn 0).\n\nThe real Jacobi group of rank and its subgroup , the full Jacobi group, are\n\n (2.1) GJN:=SL2(R)⋉(RN⊗R2),ΓJN:=SL2(Z)⋉(ZN⊗Z2).\n\nThe product in arises from the natural right action of on . 
It can be written most simply using the above identification of with : for , and , ,\n\n (M,X)(ˇM,ˇX)=(MˇM,XˇM+ˇX).\n\nLet be the Poincaré upper half plane, and define\n\n H1,N:=H×CN.\n\nWe will write for the -coordinate and for the -coordinates. We will be interested in a certain family of slash actions (i.e., right actions) of on . These actions are not restrictions of actions of , but rather quotients of restrictions of actions of a certain central extension of by the additive group . It will be necessary for us to work with in so far as we will use its Casimir element to construct for each slash action an invariant differential operator, the Casimir operator.\n\n###### Definition 1.\n\nMaintaining the identification, the centrally extended rank real Jacobi group and its product are\n\n ~GJN:={(M,X,κ): (M,X)∈GJN, κ∈MN(R), κ+12XJ2XT∈MTN(R)}, (M,X,κ)(ˇM,ˇX,ˇκ):=(MˇM,XˇM+ˇX,κ+ˇκ−XˇMJ2ˇXT).\n\nNote that is centerless, and the center of is . As we will see in Section 5, is a subgroup of .\n\nNow fix an element of . For , define\n\n Mτ:=(aτ+b)(cτ+d)−1,β(M,τ):=(cτ+d)−1.\n\nThen is the standard left action of on , and is a scalar cocycle with respect to it:\n\n β(MˇM,τ)=β(M,ˇMτ)β(ˇM,τ).\n\nScalar cocycles are in bijection with slash actions on scalar functions. For example, is a cocycle for all , and the associated slash action of on is usually written\n\n ϕ|k[M](τ):=βk(M,τ)ϕ(Mτ).\n\nFor future reference, let us mention that the algebra of differential operators on invariant with respect to the -action is the polynomial algebra on one variable generated by the -Casimir operator of , which differs by an additive constant from the weight hyperbolic Laplacian\n\n (2.2) Δk := 4y2∂τ∂¯¯τ−2iky∂¯¯τ.\n\nThe theory of cocycles is well-known; see e.g. [BCR] for a brief summary. Here we will only review the method by which the scalar cocycles of a given action are classified up to cohomological equivalence. The stabilizer of under is , and one checks that the restriction of any cocycle to defines a representation of on . Moreover, it is a fact that two cocycles are equivalent if and only if they define equal representations of . It follows that exhausts the cocycles of the action under consideration up to equivalence. For example, the conjugate is also a cocycle, equivalent to .\n\nHenceforth write and for the columns of any element of . The action of on generalizes to the following well-known left action of on :\n\n (2.3) (M,X)(τ,z):=(Mτ,β(M,τ)(z+X1τ+X2)).\n\nRegard this as an action of . As such, the stabilizer of the element of is , and the equivalence classes of the scalar cocycles of the action are in bijection with the representations of on .\n\nIn order to describe a complete family of cocycles, define a function by\n\n a((M,X,κ),(τ,z)) := κ+X2XT1+X1zT+zXT1+X1XT1τ −cβ(M,τ)(z+X1τ+X2)(z+X1τ+X2)T\n\n(recall that is ). For , define by\n\n αL((M,X,κ),(τ,z)) := exp{2πitr[La((M,X,κ),(τ,z))]}.\n###### Lemma 2.\n\nFor all and , is a scalar cocycle with respect to the action (2.3) on of the centrally extended Jacobi group from Definition 1. Moreover, any scalar cocycle of this action is equivalent to exactly one of these cocycles.\n\n###### Proof.\n\nThe proof that is a cocycle of the action of on is the same as the proof that it is a cocycle of the action of on . The proof that is a cocycle is standard in the case and proceeds along the same lines in general. One must prove that . 
First check that it suffices to prove this for both and in either the semisimple or the nilpotent part of , and then check each of the resulting four cases directly. The second sentence follows immediately from the classification of representations of .\n\nAs a consequence of this lemma we have the following family of slash actions of on : for , and ,\n\n ϕ|k,k′,L[M,X,κ](τ,z) := ϕ((M,X,κ)(τ,z)) ×βk(M,τ)¯¯¯βk(M,τ)αL((M,X,κ)(τ,z)).\n\nObserve that since is positive, makes sense for all , with . We will write for . (Usually we will be concerned only with the case , but at one point we will need the freedom to choose differently.) By Lemma 2, any slash action is equivalent to exactly one of the actions ; as we have mentioned, is equivalent to .\n\n###### Definition 3.\n\nA differential operator on is covariant from to if for all and , we have\n\n T(f|k,L[g]) = (Tf)∣∣k′,L′[g].\n\nLet be the space of covariant operators from to , and let be the space of those of order . When and , we refer to such operators as -invariant and write simply and .\n\nAt this point we state the main results of Section 5, Theorem 4 and Propositions 6, 7, and 8. Elements of holomorphic in will be called semi-holomorphic. For any matrix and any -vector , set\n\n A[w]:=wTAw.\n\nRecall the Laplacian (2.2) and our notation and . For brevity, write . For invertible, define\n\n Ck,L := −2Δk−N/2+2y2(∂¯¯τ% \\it\\L−1[∂z]+∂τ\\it\\L−1[∂¯z])−8y∂τvT∂¯z −12y2(\\it\\L−1[∂¯z]\\it\\L−1[∂z]−(∂T¯z\\it\\L−1∂z)2)+2y(vT∂¯z)∂Tz\\it\\L−1∂u −12(2k−N+1)iy∂T¯z\\it\\L−1∂u+2vT(vT∂¯z)∂¯z+(2k−N−1)ivT∂¯z.\n###### Theorem 4.\n\nFor invertible, the operator is, up to additive and multiplicative scalars, the Casimir operator of with respect to the -action (see Section 5). It generates the image of the -action of the center of the universal enveloping algebra of . In particular, it lies in the center of . Its action on semi-holomorphic functions is\n\n (2.5) −2Δk−N/2+2y2∂¯¯τ\\it\\L−1[∂z].\n\nNote that for , (2) is of order 4. At it is of order 3 and reduces to the operator given in [BR10] with . (There is a misprint in [BR10]: the term should be . This stems in part from a similar misprint in (8) of [Pit09], where the term coming from (6) of [Pit09] is missing.)\n\n###### Definition 5.\n\nThe lowering operators, and , and the raising operators, and , are\n\n Xk,L− :=−2iy(y∂¯¯τ+vT∂¯z), Xk,L+ :=2i(∂τ+y−1vT∂z+y−2\\it\\L[v])+ky−1, Yk,L− :=−iy∂¯z, Yk,L+ :=i∂z+2iy−1\\it\\Lv.\n\nFor and , these are the operators given on page 59 of [BS98]. (There is a misprint in their formula for : the expression on the far right should be multiplied by .) Note that are actually -vector operators. We write for their entries.\n\nFrequently we will suppress the superscript . Care must be taken with this abbreviation, as for example means .\n\n###### Proposition 6.\n\nThe spaces are 1-dimensional, and the spaces are -dimensional. They have bases given by\n\nThe spaces are equal to . All other are zero.\n\nThe raising operators commute with one another, as do the lowering operators (but keep in mind that, for example, means ). The commutators between them are\n\n [X−,X+]=−k,[Y−,j,Y+,j′]=i\\it\\Ljj′,[X−,Y+]=−Y−,[Y−,X+]=Y+.\n\n###### Proposition 7.\n\nAny covariant differential operator of order may be expressed as a linear combination of products of up to raising and lowering operators. 
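Many inline symbols in this section were lost when the paper was converted to text. The block below only re-typesets display formulas that are still legible above (the Jacobi group law, its central extension, the Möbius action and cocycle, the weight-k slash action and hyperbolic Laplacian, and the commutators of the raising and lowering operators); symbol names not visible above are reconstructed and may differ from the paper's.

```
% Re-typeset from the displays above; reconstructed symbols may differ from the paper.
\[
  (M,X)(\check M,\check X) = (M\check M,\; X\check M + \check X),
  \qquad
  (M,X,\kappa)(\check M,\check X,\check\kappa)
  = \bigl(M\check M,\; X\check M+\check X,\; \kappa+\check\kappa - X\check M J_2 \check X^{T}\bigr),
\]
\[
  M\tau = (a\tau+b)(c\tau+d)^{-1},\qquad
  \beta(M,\tau) = (c\tau+d)^{-1},\qquad
  \beta(M\check M,\tau)=\beta(M,\check M\tau)\,\beta(\check M,\tau),
\]
\[
  \phi\big|_k[M](\tau) := \beta^{k}(M,\tau)\,\phi(M\tau),\qquad
  \Delta_k = 4y^{2}\,\partial_\tau\partial_{\bar\tau} - 2iky\,\partial_{\bar\tau},
\]
\[
  [X_-,X_+]=-k,\qquad [Y_{-,j},Y_{+,j'}]=i\,L_{jj'},\qquad
  [X_-,Y_+]=-Y_-,\qquad [Y_-,X_+]=Y_+ .
\]
```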
There is a unique such expression in which the raising operators are all to the left of the lowering operators.\n\nThe expression of this form for the Casimir operator is\n\n (2.6) Ck,L=−2X+X−+i(X+\\it% \\L−1[Y−]−\\it\\L−1[Y+]X−)−12(\\it\\L−1[Y+]\\it\\L−1[Y−]−YT+(YT+\\it\\L−1Y−)\\it\\L−1Y−)−12(2k−N−3)iYT+\\it\\L−1Y−.\n\n###### Proposition 8.\n\nThe algebra is generated by . The spaces and are of dimensions and , respectively. Bases for them are given by the following equations:\n\n D3k,L = Span{X+Y−,iY−,j, Y+,iY+,jX−:1≤i≤j≤N}⊕D2k,L, D2k,L = Span{1, X+X−, Y+,iY−,j:1≤i,j≤N}.\n\nThe focus of this paper is the space of harmonic Maaß-Jacobi forms of index and weight . In order to define it, fix and a positive definite integral even lattice of rank . We will identify with its Gram matrix with respect to a fixed basis, a positive definite symmetric matrix with entries in and diagonal entries in . Write for the covolume of the lattice, the determinant of the Gram matrix.\n\nThe full Jacobi group defined in (2.1) clearly has a central extension by which is a subgroup of . It is easy to check that when is a Gram matrix, the cocycle is trivial on . Therefore the -action factors through to an action of , which we will also denote by .\n\n###### Definition 9 (Maaß-Jacobi forms).\n\nA Maaß-Jacobi form of weight and index is a function satisfying the following conditions:\n\n1. For all , we have .\n\n2. is an eigenfunction of .\n\n3. For some , as .\n\nIf is annihilated by the Casimir operator , it is said to be a harmonic Maaß-Jacobi form. We denote the space of all harmonic Maaß-Jacobi forms of fixed weight and index by .\n\n###### Remark 1.\n\nAdapting the proof in [BS98, Section 2.6], which is based on [LV80, Section 1.3] and [MVW87, Section 2.I.2], we see that any automorphic representation of is a tensor product . Here is a genuine representation of the metaplectic cover of , and is the Schrödinger-Weil representation of central character . The latter is the extension to the metaplectic cover of the Jacobi group of the Schrödinger representation of the Heisenberg group, which is induced from the character of its center. Thus, as in [Pit09], semi-holomorphic forms play an important role in the representation-theoretic treatment of harmonic Maaß-Jacobi forms.\n\nFor later use we set\n\n e(r):=e2πir, q:=e(τ), ζr:=N∏i=1e(ziri).\n\n## 3 Mixed mock Jacobi forms\n\nThe Maaß-Jacobi forms introduced in the last section completely capture the spectral aspects of the Jacobi group. However, for arithmetic applications the conditions in Definition 9 are too weak. Indeed, even harmonic Maaß-Jacobi forms yield a partial differential equation of order that is imposed on in the Fourier addend .\n\nTo get an arithmetically significant subspace it is necessary to impose further conditions. It is highly desirable that this leads to finite dimensional spaces of solutions for each and . Starting with the Laplace operator, we impose conditions ensuring that specialization to torsion points yields the Fourier expansions of harmonic Maaß forms over . Later we will attach certain polynomials to each of the resulting space of solutions. After fixing these polynomials, these spaces of solutions are indeed finite dimensional.\n\nThere is a family of -invariant metrics on . To make their expression more readable we use the S-coordinates on defined by with and real (see [BS98]).\n\n###### Proposition 1.\n\nFor any positive definite symmetric matrix ,\n\n ds2=y−2dτd¯¯τ+y−1(τ¯¯¯τC[∂p]+2x∂TqC∂p+C[∂q])\n\nis an invariant metric. 
The associated Laplace operator is\n\n###### Proof.\n\nThe invariance with respect to follows as for . The invariance with respect to the Heisenberg group we can see by choosing an appropriate basis of and again following the calculation for . The Laplace operator can be seen to be attached to by choosing a basis of , such that is becomes a diagonal matrix.\n\nA function is harmonic with respect to all Laplace operators in Proposition 1 if and only if it vanishes under the operators and for all . Note that this is equivalent to vanishing under all elements of which annihilate constants.\n\nThe following definition is not standard; we use it to determine a particular subspace of modular forms.\n\n###### Definition 2.\n\nA function is polynomially torsion harmonic if and only if there is an absolutely convergent series representation such that for each there are nonzero polynomials and in variables, , satisfying\n\n Y−,iϕh∈pY,i(v/y)ker(Y+,i),X−ϕh∈pX(v/y)ker(X+).\n\nThe next lemma justifies this definition by connecting it with the order 1 covariant operators. We will see below that the -function and the Appell functions are typical examples of polynomially torsion harmonic functions.\n\n###### Lemma 3.\n\nFor , any function in the intersection of all kernels of the order 1 raising operators is a scalar multiple of\n\n y−ke(l¯¯¯τ+h¯¯¯z+4L[v]/y)\n\nIn what follows we need the space of images of polynomially torsion harmonic functions under .\n\n###### Definition 4.\n\nA completed mixed mock Jacobi form of weight and index and harmonic index is a function satisfying the following conditions:\n\n1. For all , we have .\n\n2. For all and some , we have and .\n\n3. For some , as .\n\n###### Remark 2.\n\nThis notion of a mixed mock modular form is based on a definition introduced by Zagier in a seminar [Zag09]. It encompasses products of mock and holomorphic modular forms, which have applications in physics.\n\n###### Remark 3.\n\nBecause and are anti-holomorphic differential operators, the space of mixed mock Jacobi forms is preserved under multiplication by for all .\n\n###### Remark 4.\n\nExample 6 will show that we need the freedom to choose in (ii). Since the Laplace operators are not in the center of the universal enveloping algebra, this is permissible, but it is an interesting phenomenon which might inspire the construction of further examples of harmonic Maaß-Jacobi forms.\n\nWe are mainly interested in mixed mock Jacobi forms whose completion vanishes under the Casimir operator.\n\n###### Proposition 5.\n\nThe Fourier expansion of a completed mixed mock Jacobi form with constant polynomials and of degree 1 is of the form\n\nwhere . The index runs over all values yielding a fixed value of , the harmonicity index of the mixed mock Jacobi form. These forms are eigenfunctions of the Casimir operator if and only if .\n\n###### Proof.\n\nIt is easy to see that the Fourier expansion maps to the kernels of and under and , and the module of solutions has rank over the holomorphic functions. For the second statement, apply the decomposition in (2.6) and use the assumption on and the .\n\nThe first sum in the preceding lemma only involves holomorphic functions, and we will call this part of a completed mixed mock Jacobi form a mixed mock Jacobi form.\n\n###### Example 6.\n\nIn [Zwe10] Zwegers investigated the higher Appell sums , which Kac and Wakimoto had previously related to affine Lie superalgebras [KW01]. 
These sums are examples of mixed mock Jacobi forms (with Definition 4 (i) holding for a congruence subgroup). With Zwegers’s considerations in mind, it is not hard to see that all Fourier coefficients of the meromorphic Jacobi forms\n\n η(τ)lf(u+z)g(v−ξz)f(u)θ(z)\n\nwith a sufficiently large are mixed mock Jacobi forms. Here is an arbitrary holomorphic Jacobi form for rank 1, and is a holomorphic Jacobi form of arbitrary rank.\n\n###### Theorem 7.\n\nGiven a mixed mock Jacobi form for any , the function is a mixed mock modular form. For any and any matrix , the function is a mixed mock Jacobi form for an appropriate congruence subgroup.\n\n###### Proof.\n\nWe need only check the images under the elliptic -operators or under and , respectively. In the case that we specialize to torsion points , the result holds, as corresponds to in the specialization. The second case reduces to linearity of differential operators.\n\n## 4 Semi-holomorphic forms\n\nRecall that a function on holomorphic in is called semi-holomorphic. We will denote the space of semi-holomorphic harmonic Maaß-Jacobi forms by . Semi-holomorphic forms vanish under , and acts on them by . In particular, semi-holomorphic forms do not fall under Definition 4 unless they are holomorphic.\n\nThe theory of semi-holomorphic forms essentially mimics that of harmonic weak Maaß forms. Indeed, in Theorem 5 we will see that the -decomposition gives a well-behaved bijection between vector-valued weak harmonic Maaß forms and harmonic semi-holomorphic Maaß-Jacobi forms.\n\nWe first discuss semi-holomorphic Fourier expansions of Maaß-Jacobi forms. The negative discriminant of a Fourier index is denoted by\n\n D:=DL(n,r):=|L|(4n−L−1[r])\n\nBy analogy with [BF04, page 9], define a function\n\n H(y) :=e−y∫∞−2ye−tt−k−N/2dt.\n###### Proposition 1.\n\nAny semi-holomorphic harmonic Maaß-Jacobi form has a Fourier expansion of the form\n\n yN/2−k ∑n∈Z,r∈ZNs.t. D=0c0(n,r)qnζr+∑n∈Z,r∈ZNs.t. D≫−∞c+(n,r)qnζr +∑n∈Z,r∈ZNs.t. D≪∞c−(n,r)H(πDy/2|L|)e(−iDy/4|L|)qnζr.\n\n###### Proof.\n\nThis can be proved as in the case of rank lattices, by solving the differential equation for the coefficients coming from the Casimir operator and then imposing the growth condition.\n\nOur investigation will concentrate on semi-holomorphic harmonic Maaß-Jacobi forms, and in particular their relation to skew-holomorphic forms. To state this relation we must define a -operator. Proceeding as in [BR10, Section 4], we first define the lowering operator\n\n D(L)− :=−2iy(y∂¯¯τ+vT∂¯z−14y\\it\\L−1[∂¯z]) = X−−i2\\it\\L−1[Y−].\n\nUsing this operator, we define the -operator by\n\n (4.1) ξk,L :=yk−5/2D(L)−.\n\nThis is an analog of the -operator in [Maa49]. The latter sends Maaß forms to their shadows, which are holomorphic if they have harmonic preimages. In our setting skew-holomorphic forms take the place of holomorphic ones.\n\n###### Definition 2 (Skew-holomorphic Jacobi forms).\n\nA skew-holomorphic Jacobi form of weight and index is a semi-holomorphic function satisfying the following conditions. First, for all the equation holds. Second, the Fourier expansion of has the form\n\n ϕ(τ,z) =∑n∈Z,r∈ZNs.t. D≫−∞c(n,r)e(−iDy/2|L|)qnζr.\n\nWe write for the space of all such forms.\n\n###### Remark 5.\n\nSkew-holomorphic Jacobi forms were first introduced by Skoruppa in [Sko90]. There are several articles treating a slightly more general notion than that we have given. 
See in particular [Hay06].\n\n###### Remark 6.\n\nThe Fourier expansion condition can be stated in terms of annihilation by the heat operator .\n\n###### Proposition 3.\n\nIf , then is an element of .\n\n###### Proof.\n\nBy Proposition 6, is a covariant operator from to . Applying to the Fourier expansion of a Maaß-Jacobi form as in Proposition 1 shows that the Fourier expansion of has the correct form.\n\nThe -operator is compatible with the -decomposition. To state this precisely, let be the elliptic metaplectic group with the same level as . Denote the spaces of vector-valued harmonic Maaß forms for the Weil representation by . For weakly holomorphic vector-valued Maaß forms change the superscript to . The -operator maps this space of harmonic Maaß forms to the space of weakly holomorphic forms.\n\nTo revise the -decomposition we need the following -series for :\n\n (4.2) θL,μ(τ,z) :=∑r∈ZN,r≡μ(LZN)qL−1[r]/4ζr.\n###### Definition 4 (θ-decomposition).\n\nThe Maaß-Jacobi -decomposition is the map defined by\n\n f(τ,z) =∑μ(ZN/LZN)θsemiL(f)μ(τ)θL,μ(τ,z).\n\nSimilarly, the skew-holomorphic -decomposition map is defined by\n\n f(τ,z) =∑μ(ZN/LZN)¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯θskL(f)μ(τ)θL,μ(τ,z).\n\n###### Remark 7.\n\nThe existence of a -decomposition for a harmonic Maaß form is equivalent to its semi-holomorphicity.\n\n###### Theorem 5.\n\nIf is even, the -decomposition of forms in and commutes with the -operators and . More precisely, the following diagram is commutative:\n\n###### Proof.\n\nThis is a calculation analogous to that in [BR10, Section 6].\n\nBefore we consider the Poincaré series we define a special part of the space of semi-holomorphic harmonic Maaß-Jacobi forms. We will show that it maps surjectively to the space of skew-holomorphic Jacobi forms with cuspidal shadow.\n\n###### Definition 6 (Maaß-Jacobi forms with cuspidal shadow).\n\nThe inverse image under of , the cuspidal subspace of , is denoted by . It is the space of semi-holomorphic harmonic Maaß-Jacobi forms with cuspidal shadow.\n\n### 4.1 Poincaré series\n\nIn [BR10, Section 5] the authors define Maaß-Poincaré series for the Jacobi group. They restrict to Jacobi indices of rank one. In this section we generalize their considerations to arbitrary lattice indices.\n\nWe use the notation of Section 2; in particular, is an integral lattice and is in . Throughout this section will be an integer and will be in . Maintain as above and set as follows:\n\n D:=DL(n,r):=|L|(4n−L−1[r]),h:=hL(r):=|L|L−1[r].\n\nThe standard scalar product of two -vectors and will be written as .\n\nUsing the -Whittaker function (see [WW96]), we define\n\n (4.3) Ms,κ(t) :=|t|−κ/2Msgn(t)κ/2,s−1/2(|t|), (4.4) ϕ(n,r)k,L,s(τ,z) :=Ms,k−N/2(πDy/|L|)e(rz+iL−1[r]y/4+nx).\n###### Lemma 7.\n\nThe function defined in (4.4) is an eigenfunction of the Casimir operator in Theorem 4, with eigenvalue\n\n (4.5)"
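Similarly, the Section 4 definitions that survive legibly above can be restated as follows, with A[w] := w^T A w and e(w) := e^{2 pi i w} as in the text; this is only a cleaned-up transcription of (4.2)-(4.4) and the discriminant, not new material.

```
% Cleaned-up transcription of the Section 4 definitions quoted above.
\[
  D := D_L(n,r) := |L|\bigl(4n - L^{-1}[r]\bigr),\qquad
  h := h_L(r) := |L|\,L^{-1}[r],
\]
\[
  \theta_{L,\mu}(\tau,z) := \sum_{\substack{r\in\mathbb{Z}^N\\ r\equiv\mu \ (L\mathbb{Z}^N)}}
     q^{L^{-1}[r]/4}\,\zeta^{r},
  \qquad
  \mathcal{M}_{s,\kappa}(t) := |t|^{-\kappa/2}\, M_{\operatorname{sgn}(t)\kappa/2,\; s-1/2}(|t|),
\]
\[
  \phi^{(n,r)}_{k,L,s}(\tau,z) := \mathcal{M}_{s,\,k-N/2}\!\bigl(\pi D y/|L|\bigr)\,
     e\!\bigl(r^{T}z + i\,L^{-1}[r]\,y/4 + nx\bigr),
\]
where $M_{\mu,\nu}$ denotes the Whittaker function.
```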
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90844184,"math_prob":0.97670263,"size":23100,"snap":"2020-24-2020-29","text_gpt3_token_len":5232,"char_repetition_ratio":0.17505196,"word_repetition_ratio":0.03183902,"special_character_ratio":0.21077922,"punctuation_ratio":0.10388709,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879831,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T13:54:01Z\",\"WARC-Record-ID\":\"<urn:uuid:71efc68c-a308-461f-876c-c29c844ee62d>\",\"Content-Length\":\"1049277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55796443-faef-4c00-a290-f936f425a7fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:7924c8e0-df88-4937-9430-9ddb400f7016>\",\"WARC-IP-Address\":\"104.28.21.249\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1012.2897/\",\"WARC-Payload-Digest\":\"sha1:D2DKGC2LNOF3EB4WR4T7TEEQE2WIMF5P\",\"WARC-Block-Digest\":\"sha1:OY3CWZPRNPGYM2CDQZJLV746L5J5ZNNM\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655900335.76_warc_CC-MAIN-20200709131554-20200709161554-00524.warc.gz\"}"} |
https://scikit-allel.readthedocs.io/en/v1.1.3/stats/sf.html | [
"# Site frequency spectra¶\n\n`allel.``sfs`(dac)[source]\n\nCompute the site frequency spectrum given derived allele counts at a set of biallelic variants.\n\nParameters: dac : array_like, int, shape (n_variants,) Array of derived allele counts. sfs : ndarray, int, shape (n_chromosomes,) Array where the kth element is the number of variant sites with k derived alleles.\n`allel.``sfs_folded`(ac)[source]\n\nCompute the folded site frequency spectrum given reference and alternate allele counts at a set of biallelic variants.\n\nParameters: ac : array_like, int, shape (n_variants, 2) Allele counts array. sfs_folded : ndarray, int, shape (n_chromosomes//2,) Array where the kth element is the number of variant sites with a minor allele count of k.\n`allel.``sfs_scaled`(dac)[source]\n\nCompute the site frequency spectrum scaled such that a constant value is expected across the spectrum for neutral variation and constant population size.\n\nParameters: dac : array_like, int, shape (n_variants,) Array of derived allele counts. sfs_scaled : ndarray, int, shape (n_chromosomes,) An array where the value of the kth element is the number of variants with k derived alleles, multiplied by k.\n`allel.``sfs_folded_scaled`(ac, n=None)[source]\n\nCompute the folded site frequency spectrum scaled such that a constant value is expected across the spectrum for neutral variation and constant population size.\n\nParameters: ac : array_like, int, shape (n_variants, 2) Allele counts array. n : int, optional The total number of chromosomes called at each variant site. Equal to the number of samples multiplied by the ploidy. If not provided, will be inferred to be the maximum value of the sum of reference and alternate allele counts present in ac. sfs_folded_scaled : ndarray, int, shape (n_chromosomes//2,) An array where the value of the kth element is the number of variants with minor allele count k, multiplied by the scaling factor (k * (n - k) / n).\n`allel.``joint_sfs`(dac1, dac2)[source]\n\nCompute the joint site frequency spectrum between two populations.\n\nParameters: dac1 : array_like, int, shape (n_variants,) Derived allele counts for the first population. dac2 : array_like, int, shape (n_variants,) Derived allele counts for the second population. joint_sfs : ndarray, int, shape (m_chromosomes, n_chromosomes) Array where the (i, j)th element is the number of variant sites with i derived alleles in the first population and j derived alleles in the second population.\n`allel.``joint_sfs_folded`(ac1, ac2)[source]\n\nCompute the joint folded site frequency spectrum between two populations.\n\nParameters: ac1 : array_like, int, shape (n_variants, 2) Allele counts for the first population. ac2 : array_like, int, shape (n_variants, 2) Allele counts for the second population. joint_sfs_folded : ndarray, int, shape (m_chromosomes//2, n_chromosomes//2) Array where the (i, j)th element is the number of variant sites with a minor allele count of i in the first population and j in the second population.\n`allel.``joint_sfs_scaled`(dac1, dac2)[source]\n\nCompute the joint site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations.\n\nParameters: dac1 : array_like, int, shape (n_variants,) Derived allele counts for the first population. dac2 : array_like, int, shape (n_variants,) Derived allele counts for the second population. 
joint_sfs_scaled : ndarray, int, shape (m_chromosomes, n_chromosomes) Array where the (i, j)th element is the scaled frequency of variant sites with i derived alleles in the first population and j derived alleles in the second population.\n`allel.``joint_sfs_folded_scaled`(ac1, ac2, m=None, n=None)[source]\n\nCompute the joint folded site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations.\n\nParameters: ac1 : array_like, int, shape (n_variants, 2) Allele counts for the first population. ac2 : array_like, int, shape (n_variants, 2) Allele counts for the second population. m : int, optional Number of chromosomes called in the first population. n : int, optional Number of chromosomes called in the second population. joint_sfs_folded_scaled : ndarray, int, shape (m_chromosomes//2, n_chromosomes//2) Array where the (i, j)th element is the scaled frequency of variant sites with a minor allele count of i in the first population and j in the second population.\n`allel.``fold_sfs`(s, n)[source]\n\nFold a site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes,) Site frequency spectrum n : int Total number of chromosomes called. sfs_folded : ndarray, int Folded site frequency spectrum\n`allel.``fold_joint_sfs`(s, m, n)[source]\n\nFold a joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (m_chromosomes, n_chromosomes) Joint site frequency spectrum. m : int Number of chromosomes called in the first population. n : int Number of chromosomes called in the second population. joint_sfs_folded : ndarray, int Folded joint site frequency spectrum.\n`allel.``scale_sfs`(s)[source]\n\nScale a site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes,) Site frequency spectrum. sfs_scaled : ndarray, int, shape (n_chromosomes,) Scaled site frequency spectrum.\n`allel.``scale_sfs_folded`(s, n)[source]\n\nScale a folded site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes//2,) Folded site frequency spectrum. n : int Number of chromosomes called. sfs_folded_scaled : ndarray, int, shape (n_chromosomes//2,) Scaled folded site frequency spectrum.\n`allel.``scale_joint_sfs`(s)[source]\n\nScale a joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (m_chromosomes, n_chromosomes) Joint site frequency spectrum. joint_sfs_scaled : ndarray, int, shape (m_chromosomes, n_chromosomes) Scaled joint site frequency spectrum.\n`allel.``scale_joint_sfs_folded`(s, m, n)[source]\n\nScale a folded joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (m_chromosomes//2, n_chromosomes//2) Folded joint site frequency spectrum. m : int Number of chromosomes called in the first population. n : int Number of chromosomes called in the second population. joint_sfs_folded_scaled : ndarray, int, shape (m_chromosomes//2, n_chromosomes//2) Scaled folded joint site frequency spectrum.\n`allel.``plot_sfs`(s, yscale=’log’, bins=None, n=None, clip_endpoints=True, label=None, plot_kwargs=None, ax=None)[source]\n\nPlot a site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes,) Site frequency spectrum. yscale : string, optional Y axis scale. bins : int or array_like, int, optional Allele count bins. n : int, optional Number of chromosomes sampled. If provided, X axis will be plotted as allele frequency, otherwise as allele count. 
clip_endpoints : bool, optional If True, do not plot first and last values from frequency spectrum. label : string, optional Label for data series in plot. plot_kwargs : dict-like Additional keyword arguments, passed through to ax.plot(). ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. ax : axes The axes on which the plot was drawn.\n`allel.``plot_sfs_folded`(*args, **kwargs)[source]\n\nPlot a folded site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes/2,) Site frequency spectrum. yscale : string, optional Y axis scale. bins : int or array_like, int, optional Allele count bins. n : int, optional Number of chromosomes sampled. If provided, X axis will be plotted as allele frequency, otherwise as allele count. clip_endpoints : bool, optional If True, do not plot first and last values from frequency spectrum. label : string, optional Label for data series in plot. plot_kwargs : dict-like Additional keyword arguments, passed through to ax.plot(). ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. ax : axes The axes on which the plot was drawn.\n`allel.``plot_sfs_scaled`(*args, **kwargs)[source]\n\nPlot a scaled site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes,) Site frequency spectrum. yscale : string, optional Y axis scale. bins : int or array_like, int, optional Allele count bins. n : int, optional Number of chromosomes sampled. If provided, X axis will be plotted as allele frequency, otherwise as allele count. clip_endpoints : bool, optional If True, do not plot first and last values from frequency spectrum. label : string, optional Label for data series in plot. plot_kwargs : dict-like Additional keyword arguments, passed through to ax.plot(). ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. ax : axes The axes on which the plot was drawn.\n`allel.``plot_sfs_folded_scaled`(*args, **kwargs)[source]\n\nPlot a folded scaled site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes/2,) Site frequency spectrum. yscale : string, optional Y axis scale. bins : int or array_like, int, optional Allele count bins. n : int, optional Number of chromosomes sampled. If provided, X axis will be plotted as allele frequency, otherwise as allele count. clip_endpoints : bool, optional If True, do not plot first and last values from frequency spectrum. label : string, optional Label for data series in plot. plot_kwargs : dict-like Additional keyword arguments, passed through to ax.plot(). ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. ax : axes The axes on which the plot was drawn.\n`allel.``plot_joint_sfs`(s, ax=None, imshow_kwargs=None)[source]\n\nPlot a joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes_pop1, n_chromosomes_pop2) Joint site frequency spectrum. ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. imshow_kwargs : dict-like Additional keyword arguments, passed through to ax.imshow(). ax : axes The axes on which the plot was drawn.\n`allel.``plot_joint_sfs_folded`(*args, **kwargs)[source]\n\nPlot a joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes_pop1/2, n_chromosomes_pop2/2) Joint site frequency spectrum. ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. 
imshow_kwargs : dict-like Additional keyword arguments, passed through to ax.imshow(). ax : axes The axes on which the plot was drawn.\n`allel.``plot_joint_sfs_scaled`(*args, **kwargs)[source]\n\nPlot a scaled joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes_pop1, n_chromosomes_pop2) Joint site frequency spectrum. ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. imshow_kwargs : dict-like Additional keyword arguments, passed through to ax.imshow(). ax : axes The axes on which the plot was drawn.\n`allel.``plot_joint_sfs_folded_scaled`(*args, **kwargs)[source]\n\nPlot a scaled folded joint site frequency spectrum.\n\nParameters: s : array_like, int, shape (n_chromosomes_pop1/2, n_chromosomes_pop2/2) Joint site frequency spectrum. ax : axes, optional Axes on which to draw. If not provided, a new figure will be created. imshow_kwargs : dict-like Additional keyword arguments, passed through to ax.imshow(). ax : axes The axes on which the plot was drawn."
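A minimal usage sketch, based only on the call signatures documented above; the allele-count arrays below are made up for illustration and assume 8 chromosomes were called.

```
import numpy as np
import allel

# Derived allele counts at 6 biallelic variants (8 chromosomes called).
dac = np.array([1, 3, 1, 7, 2, 1])
print(allel.sfs(dac))           # number of variants with k derived alleles
print(allel.sfs_scaled(dac))    # same spectrum, with the k-th entry multiplied by k

# The folded variants work from (reference, alternate) allele counts instead.
ac = np.array([[7, 1], [5, 3], [7, 1], [1, 7], [6, 2], [7, 1]])
print(allel.sfs_folded(ac))
print(allel.sfs_folded_scaled(ac, n=8))

# Plotting follows the same pattern (matplotlib required):
# ax = allel.plot_sfs(allel.sfs(dac), n=8)
```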
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6960954,"math_prob":0.9385863,"size":11622,"snap":"2019-51-2020-05","text_gpt3_token_len":2845,"char_repetition_ratio":0.16483043,"word_repetition_ratio":0.7600936,"special_character_ratio":0.23481329,"punctuation_ratio":0.23497017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97823966,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T09:17:36Z\",\"WARC-Record-ID\":\"<urn:uuid:66315768-8a7b-464d-9b33-bd03b3ce4025>\",\"Content-Length\":\"47156\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8b5516d-5fec-47d0-9b20-0bae403e2ab8>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb685f61-5863-449a-b3ac-bca631c9151b>\",\"WARC-IP-Address\":\"104.208.221.96\",\"WARC-Target-URI\":\"https://scikit-allel.readthedocs.io/en/v1.1.3/stats/sf.html\",\"WARC-Payload-Digest\":\"sha1:LY5KYUY5BGLCSAVTG3EJCRKUBCJRNPWC\",\"WARC-Block-Digest\":\"sha1:TLHQZCA23TOTUAPQNBWARCW4YEPHTKGA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541318556.99_warc_CC-MAIN-20191216065654-20191216093654-00165.warc.gz\"}"} |
https://au.mathworks.com/help/curvefit/examples/how-to-choose-knots.html | [
"Documentation\n\n## How to Choose Knots\n\nThis example shows how to select and optimize knots using the `optknt` and `newknt` commands from Curve Fitting Toolbox™.\n\n### Sample Data\n\nHere are some sample data, much used for testing spline approximation with variable knots, the so-called Titanium Heat Data. They record some property of titanium measured as a function of temperature.\n\n```[xx,yy] = titanium; plot(xx,yy,'x'); axis([500 1100 .55 2.25]); title('The Titanium Heat Data'); hold on```",
null,
"Notice the rather sharp peak. We'll use these data to illustrate some methods for knot selection.\n\nFirst, we pick a few data points from these somewhat rough data. We will interpolate using this subset, then compare results to the full dataset.\n\n```pick = [1 5 11 21 27 29 31 33 35 40 45 49]; tau = xx(pick); y = yy(pick); plot(tau,y,'ro'); legend({'Full Dataset' 'Subsampled Data'}, 'location','NW');```",
null,
"### General Considerations\n\nA spline of order `k` with `n+k` knots has `n` degrees of freedom. Since we have 12 data sites, `tau(1) < ... < tau(12)`, a fit with a cubic spline, i.e., a fourth order spline, requires a knot sequence `t` of length 12+4.\n\nMoreover, the knot sequence `t` must satisfy the Schoenberg-Whitney conditions, i.e., must be such that the i-th data site lies in the support of the i-th B-spline, i.e.,\n\n` t(i) < tau(i) < t(i+k) for all i,`\n\nwith equality allowed only in case of a knot of multiplicity `k`.\n\nOne way to choose a knot sequence satisfying all these conditions is as the optimal knots, of Gaffney/Powell and Micchelli/Rivlin/Winograd.\n\n### Optimal Knots\n\nIn optimal spline interpolation, to values at sites\n\n` tau(1), ..., tau(n)`\n\nsay, the knots are chosen so as to minimize the constant in a standard error formula. Specifically, the first and the last data site are chosen as k-fold knots. The remaining `n-k` knots are supplied by `optknt`.\n\nHere is the beginning of the help from `optknt`:\n\nOPTKNT Optimal knot distribution.\n\n`OPTKNT(TAU,K) returns an `optimal' knot sequence for`\n\n`interpolation at data sites TAU(1), ..., TAU(n) by splines of`\n\n`order K. TAU must be an increasing sequence, but this is not`\n\n`checked.`\n\n`OPTKNT(TAU,K,MAXITER) specifies the number MAXITER of iterations`\n\n`to be tried, the default being 10.`\n\n`The interior knots of this knot sequence are the n-K`\n\n`sign-changes in any absolutely constant function h ~= 0 that`\n\n`satisfies`\n\n` integral{ f(x)h(x) : TAU(1) < x < TAU(n) } = 0`\n\n`for all splines f of order K with knot sequence TAU.`\n\n### Trying OPTKNT\n\nWe try using `optknt` for interpolation on our example, interpolating by cubic splines to data\n\n` (tau(i), y(i)), for i = 1, ..., n.`\n\n```k = 4; osp = spapi( optknt(tau,k), tau,y); fnplt(osp,'r'); hl = legend({'Full Dataset' 'Subsampled Data' ... 'Cubic Spline Interpolant Using Optimal knots'}, ... 'location','NW'); hl.Position = hl.Position-[.14,0,0,0];```",
null,
"This is a bit disconcerting!\n\nHere, marked by stars, are also the interior optimal knots:\n\n```xi = fnbrk(osp,'knots'); xi([1:k end+1-(1:k)]) = []; plot(xi,repmat(1.4, size(xi)),'*'); hl = legend({'Full Dataset' 'Subsampled Data' ... 'Cubic Spline Interpolant Using Optimal knots' ... 'Optimal Knots'}, 'location','NW'); hl.Position = hl.Position-[.14,0,0,0];```",
null,
"### What Happened?\n\nThe knot choice for optimal interpolation is designed to make the maximum over all functions `f` of the ratio\n\n` norm(f - If) / norm(D^k f)`\n\nas small as possible, where the numerator is the norm of the interpolation error, `f - If`, and the denominator is the norm of the `k`-th derivative of the interpolant, `D^k f`. Since our data imply that `D^k f` is rather large, the interpolation error near the flat part of the data is of acceptable size for such an `optimal' scheme.\n\nActually, for these data, the ordinary cubic spline interpolant provided by `csapi` does quite well:\n\n```cs = csapi(tau,y); fnplt(cs,'g',2); hl = legend({'Full Dataset' 'Subsampled Data' ... 'Cubic Spline Interpolant Using Optimal knots' ... 'Optimal Knots' 'Cubic Spline Interpolant Using CSAPI'}, ... 'location','NW'); hl.Position = hl.Position-[.14,0,0,0]; hold off```",
null,
"### Knot Choice for Least Squares Approximation\n\nKnots must be selected when doing least-squares approximation by splines. One approach is to use equally-spaced knots to begin with, then use `newknt` with the approximation obtained for a better knot distribution.\n\nThe next sections illustrate these steps with the full titanium heat data set.\n\n### Least Squares Approximation with Uniform Knot Sequence\n\n```unif = linspace(xx(1), xx(end), 2+fix(length(xx)/4)); sp = spap2(augknt(unif, k), k, xx, yy); plot(xx,yy,'x'); hold on fnplt(sp,'r'); axis([500 1100 .55 2.25]); title('The Titanium Heat Data'); hl = legend({'Full Dataset' ... 'Least Squares Cubic Spline Using Uniform Knots'}, ... 'location','NW'); hl.Position = hl.Position-[.14,0,0,0];```",
null,
"This is not at all satisfactory. So we use `newknt` for a spline approximation of the same order and with the same number of polynomial pieces, but the breaks better distributed.\n\n### Using NEWKNT to Improve the Knot Distribution\n\n```spgood = spap2(newknt(sp), k, xx,yy); fnplt(spgood,'g',1.5); hl = legend({'Full Dataset' ... 'Least Squares Cubic Spline Using Uniform Knots' ... 'Least Squares Cubic Spline Using NEWKNT'}, ... 'location','NW'); hl.Position = hl.Position-[.14,0,0,0]; hold off```",
null,
"This is quite good. Incidentally, even one interior knot fewer would not have sufficed in this case."
]
| [
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_01.png",
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_02.png",
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_03.png",
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_04.png",
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_05.png",
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_06.png",
null,
"https://au.mathworks.com/help/examples/curvefit/win64/pckkntdm_07.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7667573,"math_prob":0.9896308,"size":5053,"snap":"2019-51-2020-05","text_gpt3_token_len":1412,"char_repetition_ratio":0.12477718,"word_repetition_ratio":0.08575031,"special_character_ratio":0.28735405,"punctuation_ratio":0.21938325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99555475,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T08:34:47Z\",\"WARC-Record-ID\":\"<urn:uuid:606ca0a9-261e-4768-90ff-d222ad850edf>\",\"Content-Length\":\"74977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c88992b1-3df0-40f0-b809-6c646b5d3e2a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c669ed8a-9ca2-48e1-9ea7-593e8c5f1cfc>\",\"WARC-IP-Address\":\"23.50.228.199\",\"WARC-Target-URI\":\"https://au.mathworks.com/help/curvefit/examples/how-to-choose-knots.html\",\"WARC-Payload-Digest\":\"sha1:IZK4TZ6P7C6CT5QGUKKPLFHA7UTY3RCJ\",\"WARC-Block-Digest\":\"sha1:NSBYLS4BDT5LPSWJ7ET2XVKZKWVLQ27T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540486979.4_warc_CC-MAIN-20191206073120-20191206101120-00282.warc.gz\"}"} |
https://eyetube.me/ejercicios-thevenin-norton-resueltos-12/ | [
"# EJERCICIOS THEVENIN NORTON RESUELTOS PDF\n\nPublishing platform for digital magazines, interactive publications and online catalogs. Convert documents to beautiful publications and share them worldwide. El libro que se presenta es un compendio de problemas resueltos de circuitos La aplicación de las leyes de Kirchhoff; de los teoremas de Thevenin, Norton. El libro que se presenta es un compendio de problemas resueltos de circuitos La aplicación de las leyes de Kirchhoff; de los teoremas de Thevenin, Norton, Millman, en este libro fueron ejercicios de examen en diferentes convocatorias .",
null,
"Author: Meztikora Mooguzragore Country: Malta Language: English (Spanish) Genre: Literature Published (Last): 16 August 2011 Pages: 141 PDF File Size: 3.79 Mb ePub File Size: 2.43 Mb ISBN: 829-8-90398-738-1 Downloads: 93117 Price: Free* [*Free Regsitration Required] Uploader: Fenrishakar",
null,
"From part b of the figure: The voltage at node 3 is equal to the voltage across a short, i.\n\nComplex Exponential Forcing Function P Then Box A will warm up and Box B will cool off. Ejerciclos short circuit has replaced combination of resistor Ri and the closed switch.\n\nInverse Laplace Transform P Apply KCL at the inverting input node of horton op amp: Solving for v out: To determine the value of the open circuit voltage, v ocwe connect an open circuit across the terminals of the circuit and then calculate the value of the voltage across that open circuit. Then a maximum power will be dissipated in resistor R when: Circuits and Fourier Series P That is, Ri is an open circuit.\n\nIF MUSIC BE THE FOOD OF LOVE DAVID DICKAU PDF",
null,
"Next, the plot shows an underdamped response. Initial value of Vc s: VP The initial and steady-state inductor currents shown on the plot agree with the values obtained from the circuit.\n\nA plot of the output of the VCCS versus the input is shown below. Series and Parallel capacitors P7.\n\nThat is, the slope of the line is equal to -1 times the Thevenin resistance and the “v – intercept” is equal ejercifios the open circuit voltage. Figure c shows the circuit from Figure P 4. DP The slope of the graph is positive so the Thevenin resistance is negative. KVL around the right-hand mesh gives: First, the open circuit voltage: The input of the VCCS is the voltage of the left-hand voltage source.\n\nWe will use the initial conditions to evaluate the constants A and 5. KCL at the top node of fU gives: Here is the circuit that is noryon to determine Rt.\n\n## Ejercicios Resueltos de Thevenin y Norton\n\nBox B is ejercicuos warmer than Box A. The Thevenin equivalent resistance of the circuit connected to the inductor is calculated as Ri t No final value exists. All the element currents and voltages will again have constant values, but probably different constant values than they had before the switch closed. Consequently, the gain does not change when the microphone resistance changes.\n\nBLAKE AND MORTIMER THE SECRET OF THE SWORDFISH PDF\n\nThe Power Superposition Principle Pll. Three Phase Voltages P The Unit Step Response P8. To determine the value of the Thevenin resistance, R tfirst replace the 10 V voltage source by a 0 V voltage source, i.\n\n### Full text of “Solucionario Circuitos Eléctricos Dorf, Svoboda 6ed”\n\nA half watt resistor can’t absorb this much power. The energy stored in the inductor instantaneously dissipates in the spark. The power dissipated in the resistors is excessive. To prevent the spark, add a resistor say 1 kO across the switch terminals.\n\nNext, connect a current source across the tenninals of the circuit and then ejervicios the voltage across that current source as shown in Figure b. Here is one convenient way.",
null,
"If you short the terminals of each box, the resistor in Box A will draw 1 amp and dissipate 1 watt. With R negligibly small, the circuit reaches steady state almost immediately i.\n\nApply KVL to the right mesh: Page 42, line The node equations are: Also, the node voltages at the fhevenin nodes of an ideal op amp are equal.",
null,
"Posted in Sex"
]
| [
null,
"https://i.ytimg.com/vi/rHnefupuAWs/maxresdefault.jpg",
null,
"https://eyetube.me/download_pdf.png",
null,
"http://www.tuveras.com/electrotecnia/teoremas/theveje22.gif",
null,
"https://www.docsity.com/documents/pages/2017/03/06/83117a5905afdce2b530dff5d91945a0.png",
null,
"http://www.sc.ehu.es/sbweb/electronica/elec_basica/tema1/images/circuitos/Teorema Th/T1Teor_Th1.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6892534,"math_prob":0.8988631,"size":3921,"snap":"2021-21-2021-25","text_gpt3_token_len":951,"char_repetition_ratio":0.13300996,"word_repetition_ratio":0.07350689,"special_character_ratio":0.21270084,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9708617,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,9,null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-09T22:45:46Z\",\"WARC-Record-ID\":\"<urn:uuid:faf9b94c-c8fa-4576-abf6-7e1dbcb3c8e4>\",\"Content-Length\":\"34416\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8167ae6-5bce-4e5c-bf4d-bf1d34b6f545>\",\"WARC-Concurrent-To\":\"<urn:uuid:3aea08f8-e466-4a2f-be52-542437fd90f0>\",\"WARC-IP-Address\":\"172.67.202.108\",\"WARC-Target-URI\":\"https://eyetube.me/ejercicios-thevenin-norton-resueltos-12/\",\"WARC-Payload-Digest\":\"sha1:35EQCERDYIETGFCABGBOYAKKLLKUZYLA\",\"WARC-Block-Digest\":\"sha1:UHXN2GJTVXUAYZNVLOADYZEZXB3JCQX2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989018.90_warc_CC-MAIN-20210509213453-20210510003453-00111.warc.gz\"}"} |
https://www.percent-off.com/_70_percent-off_1620_ | [
"# 70 percent off 1620\n\n### Inputs\n\nOriginal price: \\$\n\nDiscount percentage: %\n\nDiscount:\nFinal Price:\n\n### Details\n\nHow to calculate 70 percent-off \\$1620. How to figure out percentages off a price. Using this calculator you will find that the amount after the discount is \\$486. To find any discount, just use our Discount Calculator above.\n\nUsing this calculator you can find the discount value and the discounted price of an item. It is helpfull to answer questions like:\n\n• What is 70 percent (%) off \\$1620?\n• What is \\$1620 minus 70 percent (%) off?\n• How to calculate 70 percent off \\$1620?\n• How much will you pay for an item where the original price before discount is \\$1620 when discounted 70 percent (%)? What is the final or sale price?\n• \\$1134 is what percent off \\$1620?\n\n## Percent-off Formulas\n\nTo calculate discount it is ease by using the following formulas:\n\n(a) Amount Saved = Orig. Price x Discount % / 100\n(b) Sale Price = Orig. Price - Amount Saved\n\n## How to calculate 70 Percent-off\n\nNow, let's solve the questions stated above:\n\n## FAQs on Percent-off\n\n### What's 70 percent-off \\$1620?\n\nReplacing the given values in formula (a) we have:\n\nAmount Saved = Original Price x Discount in Percent / 100. So,\n\nAmount Saved = 1620 x 70 / 100\n\nAmount Saved = 113400 / 100\n\nIn other words, a 70% discount for a item with original price of \\$1620 is equal to \\$1134 (Amount Saved).\n\nNote that to find the amount saved, just multiply it by the percentage and divide by 100.\n\n### What's the final price of an item of \\$1620 when discounted \\$1134?\n\nUsing the formula (b) and replacing the given values:\n\nSale Price = Original Price - Amount Saved. So,\n\nSale Price = 1620 - 1134\n\nThis means the cost of the item to you is \\$486.\n\nYou will pay \\$486 for a item with original price of \\$1620 when discounted 70%.\n\nIn this example, if you buy an item at \\$1620 with 70% discount, you will pay 1620 - 1134 = 486 dollars.\n\n### 1134 is what percent off 1620 dollars?\n\nUsing the formula (b) and replacing given values:\n\nAmount Saved = Original Price x Discount in Percent /100. So,\n\n1134 = 1620 x Discount in Percent / 100\n\n1134 / 1620 = Discount in Percent /100\n\n100 x 1134 / 1620 = Discount in Percent\n\n113400 / 1620 = Discount in Percent, or\n\nDiscount in Percent = 70 (answer).\n\nTo find more examples, just choose one at the bottom of this page."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8633353,"math_prob":0.9310766,"size":2366,"snap":"2023-14-2023-23","text_gpt3_token_len":607,"char_repetition_ratio":0.1735817,"word_repetition_ratio":0.076744184,"special_character_ratio":0.32375318,"punctuation_ratio":0.10745614,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973203,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T06:27:26Z\",\"WARC-Record-ID\":\"<urn:uuid:a792eef6-6372-45e6-971a-f582d81fb592>\",\"Content-Length\":\"50189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b053af6-8b21-4252-96ad-f80fbdf57589>\",\"WARC-Concurrent-To\":\"<urn:uuid:65213f47-ee2e-42db-882d-8ded4efef57d>\",\"WARC-IP-Address\":\"172.67.212.90\",\"WARC-Target-URI\":\"https://www.percent-off.com/_70_percent-off_1620_\",\"WARC-Payload-Digest\":\"sha1:7BFNLVXNXMPWSTEXASP7W7XNDVRYGHQN\",\"WARC-Block-Digest\":\"sha1:EWHKCKCJ24G6FTRN3WRSC7BJKCGSWS74\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653608.76_warc_CC-MAIN-20230607042751-20230607072751-00097.warc.gz\"}"} |
http://journal.svmo.ru/en/archive/article?id=1709 | [
"MSC2020 05C15\n\n### On new algorithmic techniques for the weighted vertex coloring problem\n\n#### O. O. Razvenskaya1\n\nAnnotation The classical NP-hard weighted vertex coloring problem consists in minimizing the number of colors in colorings of vertices of a given graph so that, for each vertex, the number of its colors equals a given weight of the vertex and adjacent vertices receive distinct colors. The weighted chromatic number is the smallest number of colors in these colorings. There are several polynomial-time algorithmic techniques for designing efficient algorithms for the weighted vertex coloring problem. For example, standard techniques of this kind are the modular graph decomposition and the graph decomposition by separating cliques. This article proposes new polynomial-time methods for graph reduction in the form of removing redundant vertices and recomputing weights of the remaining vertices so that the weighted chromatic number changes in a controlled manner. We also present a method of reducing the weighted vertex coloring problem to its unweighted version and its application. This paper contributes to the algorithmic graph theory. weighted vertex coloring problem, efficient algorithm, computational complexity\n\n1Olga O. Razvenskaya, graduate student, Department of Applied Mathematics and Information Science, National Research University Higher School of Economics (25/12 Bolshaya Pecherskaya St., Nizhny Novgorod 603155, Russia), ORCID: http://orcid.org/0000-0002-1440-9910, [email protected]\n\nCitation: O. O. Razvenskaya, \"[On new algorithmic techniques for the weighted vertex coloring problem]\", Zhurnal Srednevolzhskogo matematicheskogo obshchestva,22:4 (2020) 442–448 (In Russian)\n\nDOI 10.15507/2079-6900.22.202004.442-448"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8111376,"math_prob":0.8770342,"size":1763,"snap":"2021-04-2021-17","text_gpt3_token_len":389,"char_repetition_ratio":0.15122229,"word_repetition_ratio":0.048034936,"special_character_ratio":0.21440727,"punctuation_ratio":0.1292517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9792589,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T18:24:58Z\",\"WARC-Record-ID\":\"<urn:uuid:87a01c22-4790-46d4-96e0-754cea4de1bd>\",\"Content-Length\":\"16436\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e02b8f9-5169-4a25-810c-afb3e5c56d74>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6ee73c6-533d-486e-9a21-7342c00465b7>\",\"WARC-IP-Address\":\"194.54.66.250\",\"WARC-Target-URI\":\"http://journal.svmo.ru/en/archive/article?id=1709\",\"WARC-Payload-Digest\":\"sha1:BQYB7VQG5JAGGLZIGM7M4EJVEX3N6DK2\",\"WARC-Block-Digest\":\"sha1:3LYLXSNGOADU6T4NSBDDSG6TSZ4KCZSL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703550617.50_warc_CC-MAIN-20210124173052-20210124203052-00032.warc.gz\"}"} |
http://xoax.net/math/crs/algebra/lessons/Lesson8/ | [
"# Algebra: Adding and Multiplying Polynomials\n\n## Adding and Multiplying Polynomials\n\nIn this Algebra video tutorial, we explain how to add and multiply polynomials and give formulas for the degree of a sum or product of polynomials. We also define what coefficient is, and briefly touch on the subject of additive and multiplicative inverses of polynomials.\n\nTo begin, we give an example of adding two abstract first degree polynomials. Below, we use the letters c and d with subscripts to represent the constant coefficients of our x and constant terms. Reading from the top left to the bottom right, we apply the associative property to regroup the sum. Then we use commutativity to reorder the terms in the parentheses. Finally, we use associativity and the distributive property to regroup the terms and factor out an x from the first group.",
null,
"Next, we show demonstrate how to add two concrete polynomials. Like our previous abstract example, both of these polynomials are first degree. Everything is the same as the previous example, except that we use 3 and 5 for the values of c1 and c0 and 2 and 4 for the values d1 and d0.",
null,
"When we add two polynomial, the degree of the resulting polynomial sum is less than or equal to the higher degree of the two polynomials that we are adding together. The polynomial sum may have lower degree because the monomials with the highest degrees can cancel. When we multiply two polynomials, the resulting polynomial product has its degree equal to the sum of the degrees of the polynomials that we are multiplying, as we will see below\n\nBelow, we demonstrate the multiplication of two abstract first degree polynomials. Again, we use the letters c and d with subscripts to represent the constant coefficients of our x and constant terms. In our first step, we use the distributive property to distribute the terms of our first polynomial across the second. Then we use distributivity again to distribute the terms of the second polynomial. Finally, we use distributivity to factor out x from the middle two terms. The result is a second degree polynomial, as expected.",
null,
"Next, we show how to multiply two concrete polynomials: x - y + 2 and xy - 3. Just as before, we begin by distributing the terms of the first polynomial through the second polynomial. Then we use the distributive property again to distribute the terms of the second polynomial and get six terms. Since none of these terms can be combined, we use commutativity to rearrange them in order of descending degree.",
null,
"Finally, we mention we can get the additive inverse of a polynomial simply by negating each of its terms; this is equivalent to multiplying the polynomial by -1. The multiplicative inverse is trickier and is almost never a polynomial. However, we will wati to take up the issue or multiplicative inverses at a later time."
]
| [
null,
"http://xoax.net/math/crs/algebra/lessons/Lesson8/Image1.png",
null,
"http://xoax.net/math/crs/algebra/lessons/Lesson8/Image2.png",
null,
"http://xoax.net/math/crs/algebra/lessons/Lesson8/Image3.png",
null,
"http://xoax.net/math/crs/algebra/lessons/Lesson8/Image4.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.907037,"math_prob":0.98486364,"size":2830,"snap":"2019-35-2019-39","text_gpt3_token_len":591,"char_repetition_ratio":0.19143666,"word_repetition_ratio":0.09375,"special_character_ratio":0.19363958,"punctuation_ratio":0.09416196,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999043,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-22T00:24:46Z\",\"WARC-Record-ID\":\"<urn:uuid:c8a98714-a718-485a-a574-b324117c17e8>\",\"Content-Length\":\"23342\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2cc346f9-8299-4094-8bbd-26f325c47a90>\",\"WARC-Concurrent-To\":\"<urn:uuid:22a0b0c6-e1a4-429a-bf1e-c6cdea157bd3>\",\"WARC-IP-Address\":\"184.168.178.1\",\"WARC-Target-URI\":\"http://xoax.net/math/crs/algebra/lessons/Lesson8/\",\"WARC-Payload-Digest\":\"sha1:F6DS7ROK2V4SYMLTCYCZDCHOIQJXBJQR\",\"WARC-Block-Digest\":\"sha1:KDYYCXJPLYH7L3JEAGM72TY4XENCZ57G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027316555.4_warc_CC-MAIN-20190822000659-20190822022659-00507.warc.gz\"}"} |
https://forums.nrel.gov/t/rotor-furl-coordinate-system/2068 | [
"# Rotor-Furl Coordinate System\n\nI have a question regarding the transformation matrix\n\n• How can I derivative the rotational matrix from Nacelle / Yaw Coordinate System (d1,d2,d3) to Rotor-Furl Coordinate System (rf1,rf2,rf3) ? Where is the origin location of Rotor-Furl Coordinate System (rf1,rf2,rf3)?\nPlease provide me with the required papers to understand this part\nThanks\n\nDear Mohamed,\n\nI can’t seem to find the derivation of the transformation matrix between the nacelle and rotor-furl coordinate system published in any report. However, the transformation can be derived by multiplying several simpler 3x3 matrices together i.e.\n\n{rf} = [TransMat] * {d}\nwith\n[TransMat] = [RFrlSkew]^T * [RFrlTilt]^T * [q_RFrl] * [RFrlTilt] * [RFrlSkew]\n\nwhere,\n{} represents a 3x1 vector\n[] represents a 3x3 matrix\n^T represents a matrix transpose\n[RFrlSkew] represents the transformation matrix associated with the single rotation RFrlSkew\n[RFrlTilt] represents the transformation matrix associated with the single rotation RFirlTilt\n[q_RFrl] represents the transformation matrix associated with the single rotation of RFrlDOF\n\nThe origin of the rotor-furl coordinate system is not defined nor used in FAST.\n\nI hope that helps.\n\nBest regards,\n\nThank you for your explanation. However, I have a question\nWhy it is necessary to use Similarity Transformations and express the [q_RFrl] rotational matrix in Yaw frame?\nIt seems that the rotations [RFrlSkew] and [RFrlTilt] are made with respect to current frame concept (not fixed frame concept).\n\nDear Mohamed,\n\nI’m not sure I understand your question, but [TransMat] in this case is defined such that {d} and {rf} coordinate systems are parallel {rf} = {d} when q_RFrl = 0, i.e. when [q_RFrl] = the 3x3 identity matrix.\n\nBest regards,\n\nThank you for kind consideration and clarification.\nHowever, I mean that why the transformation between rotor furl frame {rf} and Yaw frame{d} was not like this\n{rf} = [TransMat] * {d}\n[TransMat] =[q_RFrl] * [RFrlTilt] * [RFrlSkew]\n\nAlso, I agree with you partially, {d} and {rf} coordinate systems are parallel {rf} = {d} only when q_RFrl = 0, RfrlSkew =0 and RfrlTilt=0\n\nSo, Why it is necessary to add( [RFrlSkew]^T * [RFrlTilt]^T ) to the [TransMat] ?\n\nDear Mohammed,\n\n[RFrlSkew]^T * [RFrlTilt]^T are included in [TransMat] so that {rf} = {d} for any value of RFrlSkew and RFrlTIlt, as long as q_RFrl = 0.\n\nBest regards,"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90268177,"math_prob":0.9280084,"size":2350,"snap":"2022-05-2022-21","text_gpt3_token_len":660,"char_repetition_ratio":0.14364876,"word_repetition_ratio":0.089918256,"special_character_ratio":0.24468085,"punctuation_ratio":0.096296296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T08:12:52Z\",\"WARC-Record-ID\":\"<urn:uuid:f4426d75-0ac5-46e3-b59c-78dc3864d189>\",\"Content-Length\":\"26108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:221a09ba-905d-4acd-bc9a-756a2a774561>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e026b42-7642-48a8-87fa-922f6514fcef>\",\"WARC-IP-Address\":\"184.104.202.109\",\"WARC-Target-URI\":\"https://forums.nrel.gov/t/rotor-furl-coordinate-system/2068\",\"WARC-Payload-Digest\":\"sha1:CWSAPIFX4RM4PE2DYKXHHKK53G3KS7KE\",\"WARC-Block-Digest\":\"sha1:5ORQSW37JLYMEZPB6EMQDEOVUULTWEBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662556725.76_warc_CC-MAIN-20220523071517-20220523101517-00617.warc.gz\"}"} |
https://openmx.ssri.psu.edu/thread/1009 | [
"# Confidence Intervals for Univariate Ordinal ACE Model with 2 Thresholds\n\n5 posts / 0 new",
null,
"Offline\nJoined: 06/17/2011 - 17:47\nConfidence Intervals for Univariate Ordinal ACE Model with 2 Thresholds\n\nI was wondering if there was a reason I can't seem to get confidence intervals for an ordinal ACE model with 2 thresholds.\n\nI made sure to specify the mxCI for my standardized variance components call in my model and to put the \"intervals=T\" in my mxRun line. In my model fit $output$confidenceIntervals exists, but does not have any values.\n\nIs this normal? If not, is there a way to create confidence intervals in the ordinal case? If it's just a silly code error, my code is below.\n\nAny help would be appreciated!\n\nunivACEOrdModel <- mxModel(\"univACEOrd\",\nmxModel(\"ACE\",\n# Matrices a, c, and e to store a, c, and e path coefficients\nmxMatrix( type=\"Full\", nrow=nv, ncol=nv, free=TRUE, values=.6, label=\"a11\", name=\"a\" ),\nmxMatrix( type=\"Full\", nrow=nv, ncol=nv, free=TRUE, values=.6, label=\"c11\", name=\"c\" ),\nmxMatrix( type=\"Full\", nrow=nv, ncol=nv, free=TRUE, values=.6, label=\"e11\", name=\"e\" ),\n# Matrices A, C, and E compute variance components\nmxAlgebra( expression=a %% t(a), name=\"A\" ),\nmxAlgebra( expression=c %\n% t(c), name=\"C\" ),\nmxAlgebra( expression=e %% t(e), name=\"E\" ),\n# Algebra to compute total variances and standard deviations (diagonal only)\nmxAlgebra( expression=A+C+E, name=\"V\" ),\nmxMatrix( type=\"Iden\", nrow=nv, ncol=nv, name=\"I\"),\nmxAlgebra( expression=solve(sqrt(I\nV)), name=\"sd\"),\nmxAlgebra( expression=cbind(A/VP,C/VP,E/VP),name=\"stndVCs\"),\n# Calculate 95% CIs here\nmxAlgebra(A+C+E,name=\"VP\"),\n\n## Yes, it's repetitive, but I was desperate and the above was exactly how it ran in my continuous univariate ACE model\n\n mxCI(c(\"stndVCs\")),\n# Constraint on variance of ordinal variables\nmxConstraint(V == I, name=\"Var1\"),\n# Matrix & Algebra for expected means vector\nmxMatrix( type=\"Zero\", nrow=1, ncol=nv, name=\"M\" ),\nmxAlgebra( expression= cbind(M,M), name=\"expMean\" ),\nmxMatrix( type=\"Full\", nrow=2, ncol=nv, free=TRUE, values=c(0.8,1.2), label=c(\"thre1\",\"thre2\"), name=\"T\" ),\nmxAlgebra( expression= cbind(T,T), dimnames=list(c('th1','th2'),selVars), name=\"expThre\" ),\n# Algebra for expected variance/covariance matrix in MZ\nmxAlgebra( expression= rbind ( cbind(A+C+E , A+C),\ncbind(A+C , A+C+E)), name=\"expCovMZ\" ),\n# Algebra for expected variance/covariance matrix in DZ, note use of 0.5, converted to 1*1 matrix\nmxAlgebra( expression= rbind ( cbind(A+C+E , 0.5%x%A+C),\ncbind(0.5%x%A+C , A+C+E)), name=\"expCovDZ\" )\n),\nmxModel(\"MZ\",\nmxData( observed=mzData, type=\"raw\" ),\nmxFIMLObjective( covariance=\"ACE.expCovMZ\", means=\"ACE.expMean\", dimnames=selVars, thresholds=\"ACE.expThre\" )\n),\nmxModel(\"DZ\",\nmxData( observed=dzData, type=\"raw\" ),\nmxFIMLObjective( covariance=\"ACE.expCovDZ\", means=\"ACE.expMean\", dimnames=selVars, thresholds=\"ACE.expThre\" )\n),\nmxAlgebra( expression=MZ.objective + DZ.objective, name=\"min2sumll\" ),\nmxAlgebraObjective(\"min2sumll\")\n\n\n)\n\nunivACEOrdFit <- mxRun(univACEOrdModel,intervals=T)",
null,
"Offline\nJoined: 07/31/2009 - 15:24\nIt was only recently in the\n\nIt was only recently in the OpenMx 1.1 beta release that we started support confidence intervals specified in the submodels. You can try downloading the beta and see if the script works. Another alternative would be to specify mxCI(\"ACE.stndVCs\") in the container model.",
null,
"Offline\nJoined: 06/17/2011 - 17:47\nThanks so much for the quick\n\nThanks so much for the quick reply.\n\nThat's the problem with being lazy and adapting other people's scripts. I removed the unnecessary submodel and everything is working perfectly now.",
null,
"Offline\nJoined: 01/21/2011 - 13:24\nBest to avoid T (and F)\n\nKelly\nIt is better to spell out TRUE and FALSE as one day you will use a variable T (or F) and wonder why your previously working example stops working",
null,
"Offline\nJoined: 07/31/2009 - 15:24\nAlong the same lines, you\n\nAlong the same lines, you should avoid naming you MxMatrix or MxAlgebra object \"T\" or \"F\" or \"c\". The model will execute, but should you try to use mxEval() to examine components of your model, then the model names will override the R function or variable names. For example,\n\nmxEval(c(MZ.objective[1,1], DZ.objective[1,1]), univACEOrdFit)\n\nwill no longer work because mxEval() beleives that 'c' is a MxMatrix object, not the c() function in R."
]
| [
null,
"https://openmx.ssri.psu.edu/sites/default/files/pictures/picture-6325.jpg",
null,
"https://openmx.ssri.psu.edu/sites/default/files/pictures/picture-15.jpg",
null,
"https://openmx.ssri.psu.edu/sites/default/files/pictures/picture-6325.jpg",
null,
"https://openmx.ssri.psu.edu/sites/default/files/pictures/picture-5598.jpg",
null,
"https://openmx.ssri.psu.edu/sites/default/files/pictures/picture-15.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7435278,"math_prob":0.9629076,"size":3809,"snap":"2023-40-2023-50","text_gpt3_token_len":1122,"char_repetition_ratio":0.14586072,"word_repetition_ratio":0.04296875,"special_character_ratio":0.2735626,"punctuation_ratio":0.19066148,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99783725,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,8,null,null,null,8,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T09:21:43Z\",\"WARC-Record-ID\":\"<urn:uuid:efa80569-c796-44f6-926c-9b79449a39b1>\",\"Content-Length\":\"41236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15f91426-97a7-4990-9ee2-a39941f5fe79>\",\"WARC-Concurrent-To\":\"<urn:uuid:19a3bf07-8374-4610-9a99-e46606d943ad>\",\"WARC-IP-Address\":\"128.118.212.48\",\"WARC-Target-URI\":\"https://openmx.ssri.psu.edu/thread/1009\",\"WARC-Payload-Digest\":\"sha1:F7KCTORKQUZ3XWYULGNFHQXUIPLUHUER\",\"WARC-Block-Digest\":\"sha1:6MNNJJS7G4IWWWZENHFCTVFIIMRZ2WVY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100381.14_warc_CC-MAIN-20231202073445-20231202103445-00754.warc.gz\"}"} |
https://computergraphics.stackexchange.com/questions/8721/how-to-translate-the-center-of-an-equirectangular-projection | [
"# How to translate the center of an equirectangular projection?\n\nI'm trying to perfectly align two or more equirectangular photos of the same place taken from slightly different positions. Using an example provided by openMVG I managed to get the relative position between the two shots, but I can't figure out how to translate the center of the projection according to the data obtained. I tried with Hugin without success, editing the camera position information (it doesn't seem to be able to export the entire equirectangular photo) and I didn't found any useful panotools. I found some literature about the problem but not any real implementation to adapt.\n\nCould you advise me of a possible path to follow?\n\nMy requirement is to do it programmatically because I will have to apply it to thousands of photos.\n\nThanks a lot.\n\n• Are you looking for an existing application to do it or do you want to understand the algorithm so you can write the code yourself? – user1118321 Apr 3 '19 at 3:22\n• Sorry probably it is not clear from the question. I don't think there is any existing application to do it, so I'm interested in understanding a possible algorithm approach to code myself. – Lucio Coire Galibone Apr 3 '19 at 16:26\n• Is this what you mean by equirectangular projection: en.wikipedia.org/wiki/Equirectangular_projection ? Are you looking for the transformation from the points on one of the pictures to the points on the other picture, knowing the change in 3D position of the camera (center of projection)? Or is it that you know where the second center of projection is in 3D relative to the coordinate system of the first one? I honestly do not understand what geometric information you have and what you are trying to obtain. – Futurologist Apr 22 '19 at 16:24\n\nAn equirectangular projection treats the x coordinate as the angle theta around the vertical axis going from 0 to 360 degrees. These angles match the longitudinal angles of a globe. The y coordinate is treated as the angle phi that represents the latitudes of a globe. They typically go from +90° at the zenith (or north pole) to -90° at the nadir (or south pole).\n\nSo to translate the center of projection of an image by some amount (x0, y0), you can simply add x0 to the x value of every coordinate (wrapping around to the other side at the edges). The y coordinate is similar - add y0 to every coordinate. However, instead of wrapping around, you need to reflect the value. So if your new coordinate is less than 0, simply take the absolute value. If it's greater than the height, take the difference between the new value and the height of the image and subtract that from the height of the image. If it was less than 0 or greater than height then you need to also add half the width to the x coordinate.\n\nSo putting it in pseudocode:\n\nnewY = oldY + translateY;\nif (newY < 0)\n{\nnewY = fabs(newY);\ntranslateX = fmod((translateX + (width / 2.0)), width);\n}\nelse if (newY >= height)\n{\nnewY = height - (newY - height);\ntranslateX = fmod((translateX + (width / 2.0)), width);\n}\nnewX = oldX + translateX;\nif (newX < 0)\n{\nnewX = width + newX;\n}\nelse if (newX >= width)\n{\nnewX = newX - width;\n}"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94724655,"math_prob":0.91817844,"size":760,"snap":"2020-34-2020-40","text_gpt3_token_len":156,"char_repetition_ratio":0.09920635,"word_repetition_ratio":0.0,"special_character_ratio":0.1881579,"punctuation_ratio":0.06521739,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96294695,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-08T06:58:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4b236396-78d5-4d4d-b3f0-65e63ac0d38f>\",\"Content-Length\":\"150024\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:02504b6e-8697-448f-9759-ec2082c45320>\",\"WARC-Concurrent-To\":\"<urn:uuid:a549df2b-d552-4f45-99b2-0f569d28944f>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://computergraphics.stackexchange.com/questions/8721/how-to-translate-the-center-of-an-equirectangular-projection\",\"WARC-Payload-Digest\":\"sha1:GHJBZCDNBH7FJ4IIDIUEDATYCKJYWGRD\",\"WARC-Block-Digest\":\"sha1:A3RWX5YJ7WAYHV63RQD2OA6V4GUW23DG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737289.75_warc_CC-MAIN-20200808051116-20200808081116-00158.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/192488/divergent-series-not-correctly-plotted | [
"# Divergent series not correctly plotted\n\nI have a problem about the plotting of a function which is defined as the power series $$F(\\eta)= \\left[1+\\frac{10.75}{\\eta^{15/4}}+O\\left(\\frac{1}{\\eta^{15/2}}\\right)\\right]^{-7/4} \\biggr[1 + \\frac{6G_4}{G_2\\eta^{15/4}} +\\frac{15 G_6}{G_2\\eta^{15/2}}+\\frac{28 G_8}{G_2\\eta^{45/4}}+\\frac{45 G_{10}}{G_2\\eta^{15}} + \\frac{66G_{12}}{G_2\\eta^{75/4}} + \\frac{91G_{14}}{G_2 \\eta^{45/2}}+ O\\left(\\frac{1}{\\eta^{105/4}}\\right)\\biggr]^{-1}$$ where $$G_i$$ are some known constant coefficients and the series as a function of $$\\eta$$ is well-defined in the limit of $$\\eta\\to+\\infty$$. The problem is that when considering this series as a normal function of $$\\eta$$ and trying to plot it along the entire axis of this variable, the resulting plot produced by Mathematica goes to zero as $$\\eta$$ approaches $$0$$. While the previous series has an evident divergence for $$\\eta\\to0$$ as seen if we stop the expansion to some low order $$F(\\eta)\\approx 1-\\left(\\frac{75.25}{4}+\\frac{6G_4}{G_2}\\right)\\frac{1}{\\eta^{15/4}}+O\\left(\\frac{1}{\\eta^{15/2}}\\right)$$ In fact the plot that I got is",
null,
"which I cannot really understand. For $$\\eta\\to+\\infty$$ the function is correctly approaching $$1$$, but why for $$\\eta$$ close to $$3$$ the function starts to decrease and it arrives to zero? Here is my code\n\nG2 = -1.8452283;\nG4 = 8.33410;\nG6 = -95.1884;\nG8 = 1458.21;\nG10 = -25889;\nG12 = 5.02*^5;\nG14 = -1.04*^7;\nF[x_] := ((1 + 10.75/(x^(15/4)))^(-7/4))*((1 + (182*G14)/(\n2*G2*x^(45/2)) + (132*G12)/(2*G2*x^(75/4)) + (56*G8)/(\n2*G2*x^(45/4)) + (30*G6)/(2*G2*x^(15/2)) + (12*G4)/(\n2*G2*x^(15/4)) + (90*G10)/(2*G2*x^15))^(-1))\nPlot[F[x], {x, -3, 10}, PlotStyle -> ColorData]\n\n• Please post actual, copyable, Mathematica code, not images of code! And for us to help, we probably need also to see in the code the values of all the G's. Mar 2, 2019 at 21:30\n• Can you maybe give a reference to the book/paper where this divergent series came from? Mar 3, 2019 at 2:29\n\nYour series expansion for $$\\eta\\to0$$ is wrong:\nAssuming[η > 0, Series[F[η], {η, 0, 30}]]\n\n$$0.000172184\\frac{G_2}{G_{14}}η^{465/16} + \\mathcal{O}(η)^{481/16}$$\nWhat you are seeing for $$\\eta<3$$ is the finite number of terms you've included in your formula for $$F$$. If you include more terms, the \"correct\" behavior will extend further towards 0. But you cannot expect a series expansion around $$\\eta=+\\infty$$ to be accurate all the way down to $$\\eta=0$$.\n• Thank you for your explanation. My point is that the plot obtained (and correctly described by the series expansion that you have suggested) does not give the expected result of $F(\\eta)$ in $\\eta=0$, because I do know that it should be $F(0)\\approx 3.23$ and not $0$. Of course this problem is due to the fact that the given function $F(\\eta)$ has been defined using a product of serires expansions that are well-defined only for $\\eta\\to+\\infty$. Is there a way to compute the domain of convergence of $F(\\eta)$, that is the interval around $+\\infty$ where the function has the correct trend? Mar 4, 2019 at 12:12"
]
| [
null,
"https://i.stack.imgur.com/utZGD.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.799714,"math_prob":0.9998611,"size":1642,"snap":"2023-40-2023-50","text_gpt3_token_len":661,"char_repetition_ratio":0.13492064,"word_repetition_ratio":0.0,"special_character_ratio":0.49147382,"punctuation_ratio":0.07520892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999597,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T20:47:56Z\",\"WARC-Record-ID\":\"<urn:uuid:f3c554c0-ae29-4418-861a-adbff37d0e43>\",\"Content-Length\":\"164120\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6fdb1a8c-d8aa-4fa4-879a-e93712bf4ecd>\",\"WARC-Concurrent-To\":\"<urn:uuid:998a4ea3-b655-4f88-85e0-d5d4952f0161>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/192488/divergent-series-not-correctly-plotted\",\"WARC-Payload-Digest\":\"sha1:4PKXDMB6JJQFEN7DMP6JHOMFNY6JSHJD\",\"WARC-Block-Digest\":\"sha1:PG3VQ4KQBKA35LAAKJKUCR4WR7T7WKXF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510326.82_warc_CC-MAIN-20230927203115-20230927233115-00327.warc.gz\"}"} |
https://www.tes.com/teaching-resource/rounding-decimals-year-4-12078925 | [
"",
null,
"Rounding Decimals - Year 4\n\nIn this KS2 maths teaching resource pupils practise rounding decimals with one decimal place to the nearest whole number. It is an ideal teaching aid to use in a lesson covering some of the year 4 curriculum objectives in the maths programme of study (Fractions, including decimals). Content includes:\n\n• How to round to the nearest whole number using a number line explanation\n• Number line rounding activity and worksheet\n• Further number line rounding activity and worksheet\n• How to round decimal numbers explanation\n• Round the decimal numbers to the nearest whole number activity and worksheet\n• Circle the numbers that round up to the nearest whole number activity and worksheet\n• Circle the numbers that round down to the nearest whole number activity and worksheet\n• Reasoning activity and worksheet\n\n‘Rounding Decimals - Year 4’ is editable so teachers can adapt the resource to meet their individual teaching needs\n\n\\$3.22\nSave for later\n\nInfo\n\nCreated: Feb 28, 2019\n\nUpdated: Mar 4, 2019\n\nPresentation\n\nppt, 2 MB\n\nRounding-Decimals---Year-4\n\nWorksheet\n\npdf, 946 KB\n\nRounding-Decimals---Year-4\n\nActivity\n\nJPG, 148 KB\n\nRounding-Decimals---Year-4-(7)\n\nReport a problem"
]
| [
null,
"https://l.imgt.es/resource-preview-imgs/ce97c152-5a62-42f5-a7b6-0ef55ca8a8c5%2FRoundingDecimalsYear41.JPG",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87990975,"math_prob":0.9193778,"size":950,"snap":"2019-26-2019-30","text_gpt3_token_len":183,"char_repetition_ratio":0.16913319,"word_repetition_ratio":0.19607843,"special_character_ratio":0.18736842,"punctuation_ratio":0.026490066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97237223,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T09:27:04Z\",\"WARC-Record-ID\":\"<urn:uuid:23898276-e679-4532-a99c-e751a4d914e7>\",\"Content-Length\":\"155757\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c4e3555-51c6-4dec-a828-7f94dfde9c9b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf385f83-8780-429e-89b3-48747e10b1ad>\",\"WARC-IP-Address\":\"151.101.248.228\",\"WARC-Target-URI\":\"https://www.tes.com/teaching-resource/rounding-decimals-year-4-12078925\",\"WARC-Payload-Digest\":\"sha1:KJLODKQU7YDRU5C2E4L37PLGVGAOMYSA\",\"WARC-Block-Digest\":\"sha1:24IOGPMSC2BHBWN26N3Z5OHE7OWYWJ7P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999817.30_warc_CC-MAIN-20190625092324-20190625114324-00193.warc.gz\"}"} |
https://psychology.wikia.org/wiki/Ordinal_number | [
"Commonly, ordinal numbers, or ordinals for short, are numbers used to denote the position in an ordered sequence: first, second, third, fourth, etc., whereas a cardinal number says \"how many there are\": one, two, three, four, etc. (See How to name numbers.)\n\nHere, we describe the mathematical meaning of transfinite ordinal numbers. They were introduced by Georg Cantor in 1897, to accommodate infinite sequences and to classify sets with certain kinds of order structures on them. Ordinals are an extention of the natural numbers different from integers and from cardinals.\n\nWell-ordering is total ordering with transfinite induction, where transfinite induction extends mathematical induction beyond the finite. Ordinals represent equivalence classes of well orderings with order-isomorphism being the equivalence relationship. Each ordinal is taken to be the set of smaller ordinals. Ordinals may be categorized as: zero, successor ordinals, and limit ordinals (of various cofinalities). Given a class of ordinals, one can identify the α-th member of that class, i.e. one can index (count) them. A class is closed and unbounded if its indexing function is continuous and never stops. One can define addition, multiplication, and exponentiation on ordinals, but not subtraction or division. The Cantor normal form is a standarized way of writing down ordinals. There is a many to one association of ordinals and cardinals. Larger and larger ordinals can be defined, but they become more and more difficult to describe. Ordinals have a natural topology.\n\n## Ordinals extend the natural numbers\n\nA natural number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. While in the finite world these two concepts coincide, when dealing with infinite sets one has to distinguish between the two. The notion of size leads to cardinal numbers, which were also discovered by Cantor, while the position is generalized by the ordinal numbers described here.\n\nWhereas the notion of cardinal number is associated to a set with no particular structure on it, the ordinals are intimately linked with the special kind of sets which are called well-ordered (so intimately linked, in fact, that some mathematicians make no distinction between the two concepts). To define things briefly, a well-ordered set is a totally ordered set (given any two elements one defines a smaller and a larger one in a coherent way) in which there is no infinite decreasing sequence (however, there may be infinite increasing sequences). Ordinals may be used to label the elements of any given well-ordered set (the smallest element being labeled 0, the one after that 1, the next one 2, \"and so on\") and to measure the \"length\" of the whole set by the least ordinal which is not a label for an element of the set. This \"length\" is called the order type of the set.\n\nAny ordinal is defined by the set of ordinals that precede it: in fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal 42 is the order type of the ordinals less than it, i.e., the ordinals from 0 (the smallest of all ordinals) to 41 (the immediate predecessor of 42), and it is generally identified as the set {0,1,2,…,41}. 
Conversely, any set of ordinals which is downward-closed—meaning that any ordinal less than an ordinal in the set is also in the set—is (or can be identified with) an ordinal.\n\nSo far we have mentioned only finite ordinals, which are the natural numbers. But there are infinite ones as well: the smallest infinite ordinal is ω, which is the order type of the natural numbers (finite ordinals) and which can even be identified with the set of natural numbers (indeed, the set of natural numbers is well-ordered—as is any set of ordinals—and since it is downward closed it can be identified with the ordinal associated to it, which is exactly how we define ω).",
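For the finite ordinals, the idea that each ordinal simply is the set of smaller ordinals can be tried out directly; a small Python sketch with frozensets (illustrative only, since it obviously cannot reach ω):

```python
def von_neumann(n):
    """The finite ordinal n as a set: 0 is the empty set, k+1 is k together with {k}."""
    ordinal = frozenset()
    for _ in range(n):
        ordinal = ordinal | {ordinal}
    return ordinal

two, four = von_neumann(2), von_neumann(4)
print(two in four)           # True: 2 is an element of 4 = {0, 1, 2, 3}
print(two < four)            # True: 2 is also a proper subset of 4
print(len(von_neumann(42)))  # 42: the ordinal 42 has exactly 42 elements
```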
null,
"A graphical “matchstick” representation of the ordinal ω². Each stick correspond to an ordinal of the form ω·m+n where m and n are natural numbers.\n\nPerhaps a clearer intuition of ordinals can be formed by examining a first few of them: as mentioned above, they start with the natural numbers, 0, 1, 2, 3, 4, 5, … After all natural numbers comes the first infinite ordinal, ω, and after that come ω+1, ω+2, ω+3, and so on. (Exactly what addition means will be defined later on: just consider them as names.) After all of these come ω·2 (which is ω+ω), ω·2+1, ω·2+2, and so on, then ω·3, and then later on ω·4. Now the set of ordinals which we form in this way (the ω·m+n, where m and n are natural numbers) must itself have an ordinal associated to it: and that is ω2. Further on, there will be ω3, then ω4, and so on, and ωω, then ωω², and much later on ε0 (just to give a few examples of the very smallest—countable—ordinals). We can go on in this way indefinitely far (\"indefinitely far\" is exactly what ordinals are good at: basically every time one says \"and so on\" when enumerating ordinals, it defines a larger ordinal).\n\n## Definitions\n\n### Define well-ordered set\n\nA well-ordered set is an ordered set in which every non-empty subset has a least element: this is equivalent (at least in the presence of the axiom of dependent choices) to just saying that the set is totally ordered and there is no infinite decreasing sequence, something which is perhaps easier to visualize. In practice, the importance of well-ordering is justified by the possibility of applying transfinite induction, which says, essentially, that any property that passes on from one the predecessors of an element to that element itself must be true of all elements (of the given well-ordered set). If the states of a computation (computer program or game) can be well-ordered in such a way that each step is followed by a \"lower\" step, then you can be sure that the computation will terminate.\n\nNow we don't want to distinguish between two well-ordered sets if they only differ in the \"labeling of their elements\", or more formally: if we can pair off the elements of the first set with the elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism (or a strictly increasing function) and the two well-ordered sets are said to be order-isomorphic, or similar (obviously this is an equivalence relation). Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the sets as essentially identical, and to seek a \"canonical\" representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set.\n\nSo we essentially wish to define an ordinal as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of \"being order-isomorphic\". There is a technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo-Fraenkel formalization of set theory. But this is not a serious difficulty. 
We will say that the ordinal is the order type of any set in the class.\n\n### Definition of an ordinal as an equivalence class\n\nThe original definition of ordinal number, found for example in Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine's set theory New Foundations and related systems (where it affords a rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).\n\n### Von Neumann definition of ordinals\n\nRather than defining an ordinal as an equivalence class of well-ordered sets, we can try to define it as some particular well-ordered set which (canonically) represents the class. Thus, we want to construct ordinal numbers as special well-ordered sets in such a way that every well-ordered set is order-isomorphic to one and only one ordinal number.\n\nThe ingenious definition suggested by John von Neumann, and which is now taken as standard, is this: define each ordinal as a special well-ordered set, namely that of all ordinals before it. Formally:\n\nA set S is an ordinal if and only if S is totally ordered with respect to set containment and every element of S is also a subset of S.\n\n(Here, \"set containment\" is another name for the subset relationship.) Such a set S is automatically well-ordered with respect to set containment. This relies on the axiom of well foundation: every nonempty set S has an element a which is disjoint from S.\n\nNote that the natural numbers are ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}.\n\nIt can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals.\n\nFurthermore, the elements of every ordinal are ordinals themselves. Whenever you have two ordinals S and T, S is an element of T if and only if S is a proper subset of T, and moreover, either S is an element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. And in fact, much more is true: Every set of ordinals is well-ordered. This important result generalizes the fact that every set of natural numbers is well-ordered and it allows us to use transfinite induction liberally with ordinals.\n\nAnother consequence is that every ordinal S is a set having as elements precisely the ordinals smaller than S. This statement completely determines the set-theoretic structure of every ordinal in terms of other ordinals. It's used to prove many other useful results about ordinals. One example of these is an important characterization of the order relation between ordinals: every set of ordinals has a supremum, the ordinal obtained by taking the union of all the ordinals in the set. Another example is the fact that the collection of all ordinals is not a set. Indeed, since every ordinal contains only other ordinals, it follows that every member of the collection of all ordinals is also its subset. Thus, if that collection were a set, it would have to be an ordinal itself by definition; then it would be its own member, which contradicts the axiom of regularity. 
(See also the Burali-Forti paradox). The class of all ordinals is variously called \"Ord\", \"ON\", or \"∞\".\n\nAn ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its subsets has a greatest element.\n\n### Other definitions\n\nThere are other modern formulations of the definition of ordinal. Each of these is essentially equivalent to the definition given above. One of these definitions is the following. A class S is called transitive if each element x of S is a subset of S, i.e.",
null,
". An ordinal is then defined to be a transitive set whose members are also transitive. It follows from this that the members are themselves ordinals. Note that the axiom of regularity (foundation) is used in showing that these ordinals are well ordered by containment (subset).\n\n## Transfinite induction\n\n### What is transfinite induction?\n\nTransfinite induction holds in any well-ordered set, but it is so important in relation to ordinals that it is worth restating here.\n\nAny property which passes from the set of ordinals smaller than a given ordinal α to α itself, is true of all ordinals.\n\nThat is, if P(α) is true whenever P(β) is true for all β<α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already known for all smaller β<α.\n\n### Transfinite recursion\n\nTransfinite induction can be used not only to prove things, but also to define them (such a definition is normally said to follow by transfinite recursion - we use transfinite induction to prove that the result is well-defined): the formal statement is tedious to write, but the bottom line is, in order to define a (class) function on the ordinals α, one can assume that it is already defined for all smaller β<α. One proves by transfinite induction that there is one and only one function satisfying the recursion formula upto and including α.\n\nHere is an example of definition by transfinite induction on the ordinals (more will be given later): define a function F by letting F(α) be the smallest ordinal not in the set of F(β) for all β<α. Note how we assume the F(β) known in the very process of defining F: this apparent paradox is exactly what definition by transfinite induction permits. Now in fact F(0) makes sense since there is no β<0, so the set of all F(β) for β<0 is empty, so F(0) must be 0 (the smallest ordinal of all), and now that we know F(0), then F(1) makes sense (and it is the smallest ordinal not equal to F(0)=0), and so on (the and so on is exactly transfinite induction). Well, it turns out that this example is not very interesting since F(α)=α for all ordinals α: but this can be shown, precisely, by transfinite induction.\n\n### Successor and limit ordinals\n\nAny nonzero ordinal has a smallest element (which is zero). It may or may not have a largest element, however: 42 or ω+6 have a largest element, whereas ω does not (there is no largest natural number). If an ordinal has a largest element α, then it is the next ordinal after α, and it is called a successor ordinal, namely the successor of α, written α+1. In the von Neumann definition of ordinals, the successor of α is",
null,
"since its elements are those of α and α itself.\n\nA nonzero ordinal which is not a successor is called a limit ordinal. One justification for this term is that a limit ordinal is indeed the limit in a topological sense of all smaller ordinals (for the order topology).\n\nQuite generally, when (αι<γ) is a sequence of ordinals (a family indexed by a limit γ), and if we assume that (αι) is increasing (αι<αι′ whenever ι<ι′), or at any rate non-decreasing, we define its limit to be the least upper bound of the set {αι}, that is, the smallest ordinal (it always exists) greater than any term of the sequence. In this sense, a limit ordinal is the limit of all smaller ordinals (indexed by itself).\n\nThus, every ordinal is either zero, or a successor (of a well-defined predecessor), or a limit. This distinction is important, because many definitions by transfinite induction rely upon it. Very often, when defining a function F by transfinite induction on all ordinals, one defines F(0), and F(α+1) assuming F(α) is defined, and then, for limit ordinals δ one defines F(δ) as the limit of the F(β) for all β<δ (either in the sense of ordinal limits, as we have just explained, or for some other notion of limit if F does not take ordinal values). Thus, the interesting step in the definition is the successor step, not the limit ordinals. Such functions (especially for F nondecreasing and taking ordinal values) are called continuous. We will see that ordinal addition, multiplication and exponentiation are continuous as functions of their second argument.\n\n### Indexing classes of ordinals\n\nWe have mentioned that any well-ordered set is similar (order-isomorphic) to a unique ordinal number",
"α, or, in other words, that its elements can be indexed in increasing fashion by the ordinals less than α. This applies, in particular, to any set of ordinals: any set of ordinals is naturally indexed by the ordinals less than some α",
". The same holds, with a slight modification, for classes of ordinals (a collection of ordinals, possibly too large to form a set, defined by some property): any class of ordinals can be indexed by ordinals (and, when the class is unbounded, this puts it in class-bijection with the class of all ordinals). So we can freely speak of the",
"γ-th element in the class (with the convention that the “0-th” is the smallest, the “1-th” is the next smallest, and so on). Formally, the definition is by transfinite induction: the γ-th element of the class is defined (provided it has already been defined for all β<γ), as the smallest element greater than the β-th element for all β<γ.\n\nWe can apply this, for example, to the class of limit ordinals: the γ-th ordinal which is either a limit or zero is ω·γ (so far we have not defined multiplication but we can take this notation as a temporary definition, which will agree with the general notion to be defined later). Similarly, we can consider ordinals which are additively indecomposable (meaning that it is a nonzero ordinal which is not the sum of two strictly smaller ordinals): the γ-th additively indecomposable ordinal is indexed as ω^γ. The technique of indexing classes of ordinals is often useful in the context of fixed points: for example, the γ-th ordinal such that ω^α = α is written ε_γ",
".\n\n### Closed unbounded sets and classes\n\nA class of ordinals is said to be unbounded, or cofinal, when given any ordinal, there is always some element of the class greater than it (then the class must be a proper class, i.e., it cannot be a set). It is said to be closed when the limit of a sequence of ordinals in the class is again in the class: or, equivalently, when the indexing (class-)function",
"F is continuous in the sense that, for δ a limit ordinal, F(δ) (the δ-th ordinal in the class) is the limit of all F(γ) for γ<δ",
"; this is also the same as being closed, in the topological sense, for the order topology (to avoid talking of topology on proper classes, one can demand that the intersection of the class with any given ordinal is closed for the order topology on that ordinal, this is again equivalent).\n\nOf particular importance are those classes of ordinals which are closed and unbounded, sometimes called clubs. For example, the class of all limit ordinals is closed and unbounded: this translates the fact that there is always a limit ordinal greater than a given ordinal, and that a limit of limit ordinals is a limit ordinal (a fortunate fact if the terminology is to make any sense at all!). The class of additively indecomposable ordinals, or the class of",
"ε_·",
"ordinals, or the class of cardinals, are all closed unbounded; the set of regular cardinals, however, is unbounded but not closed, and any finite set of ordinals is closed but not unbounded.\n\nA class is stationary if it has a nonempty intersection with every closed unbounded class. All superclasses of closed unbounded classes are stationary and stationary classes are unbounded, but there are stationary classes which are not closed and there are stationary classes which have no closed unbounded subclass (such as the class of all limit ordinals with countable cofinality). Since the intersection of two closed unbounded classes is closed and unbounded, the intersection of a stationary class and a closed unbounded class is stationary. But the intersection of two stationary classes may be empty, e.g. the class of ordinals with cofinality ω with the class of ordinals with uncountable cofinality.\n\nRather than formulating these definitions for (proper) classes of ordinals, we can formulate them for sets of ordinals below a given ordinal",
"α: A subset of a limit ordinal α is said to be unbounded (or cofinal) under α provided any ordinal less than α is less than some ordinal in the set. More generally, we can call a subset of any ordinal α cofinal in α provided every ordinal less than α is less than or equal to some ordinal in the set. The subset is said to be closed under α provided it is closed for the order topology in α, i.e. a limit of ordinals in the set is either in the set or equal to α itself.",
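"In symbols, for a limit ordinal α (this is only a restatement of the definitions just given): a subset C of α is closed unbounded (a “club”) in α exactly when\n\n$$\\forall\\beta<\\alpha\\;\\exists\\gamma\\in C\\;(\\beta<\\gamma) \\quad\\text{and}\\quad \\forall\\,\\text{limit }\\delta<\\alpha\\;\\bigl(\\sup(C\\cap\\delta)=\\delta\\Rightarrow\\delta\\in C\\bigr).$$",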
"itself.\n\n## Arithmetic of ordinals\n\nFor more details on this topic, see ordinal arithmetic.\n\nThere are three usual operations on ordinals: addition, multiplication, and (ordinal) exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set which represents the operation or by using transfinite recursion. The Cantor normal form provides a standardized way of writing ordinals. The so-called \"natural\" arithmetical operations retain commutivity at the expense of continuity.\n\n## Ordinals and cardinals\n\n### Initial ordinal of a cardinal\n\nEach ordinal has an associated cardinal, its cardinality, obtained by simply forgetting the order. Any well-ordered set having that ordinal as its order-type has the same cardinality. The smallest ordinal having a given cardinal as its cardinality is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, but most infinite ordinals are not initial. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In this case, it is traditional to identify the cardinal number with its initial ordinal, and we say that the initial ordinal is a cardinal.\n\nThe α-th infinite initial ordinal is written",
null,
". Its cardinality is written ℵ_α. For example, the cardinality of ω_0 = ω is ℵ_0, which is also the cardinality of ω² or ε_0 (all are countable ordinals). So (assuming the axiom of choice) we identify ω with ℵ_0, except that the notation ℵ_0 is used when writing cardinals, and ω when writing ordinals (this is important since ℵ_0² = ℵ_0 whereas ω² > ω). Also, ω_1 is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering defines a countable ordinal, and ω_1 is the order type of that set), ω_2 is the smallest ordinal whose cardinality is greater than ℵ_1, and so on, and ω_ω is the limit of the ω_n for natural numbers n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the ω_n",
").\n\n### Cofinality\n\nThe cofinality of an ordinal",
"α is the smallest ordinal δ which is the order type of a cofinal subset of α",
". Notice that a number of authors define cofinality or use it only for limit ordinals. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.",
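"In symbols, the definition above reads (again a restatement, not a new fact):\n\n$$\\operatorname{cf}(\\alpha)=\\min\\{\\delta : \\text{some cofinal subset of }\\alpha\\text{ has order type }\\delta\\},$$\n\nso that cf(α) ≤ α always, with equality exactly for the regular ordinals discussed below.",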
"Thus for a limit ordinal, there exists a δ-indexed strictly increasing sequence with limit α. For example, the cofinality of ω² is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable cofinality.\n\nThe cofinality of 0 is 0. And the cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least ω.\n\nAn ordinal which is equal to its cofinality is called regular and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial even if it is not regular (which it usually is not). If the Axiom of Choice holds, then ω_{α+1}",
"is regular for each α. In this case, the ordinals 0, 1, ω, ω_1, and ω_2 are regular, whereas 2, 3, ω_ω, and ω_{ω·2} are initial ordinals which are not regular.\n\nThe cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.\n\n## Some “large” countable ordinals\n\nFor more details on this topic, see Large countable ordinals.\n\nWe have already mentioned the ordinal ε_0, which is the smallest satisfying the equation ω^α = α",
", so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), etc. Many ordinals can be defined in such a manner as fixed points of certain ordinal functions (the ι-th ordinal such that ω^α = α is called ε_ι, then we could go on trying to find the ι-th ordinal such that ε_α = α",
", “and so on”, but all the subtlety lies in the “and so on”). We can try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal which limits in this manner a system of construction is the Church-Kleene ordinal,",
"ω_1^CK (despite the ω_1",
"in the name, this ordinal is countable), which is the smallest ordinal which cannot in any way be represented by a computable function (this can be made rigorous, of course). Considerably large ordinals can be defined below",
"ω_1^CK, however, which measure the “proof-theoretic strength” of certain formal systems (for example, ε_0",
"measures the strength of Peano arithmetic). Large ordinals can also be defined above the Church-Kleene ordinal, which are of interest in various parts of logic.\n\n## Topology and ordinals\n\n### Ordinals as topological spaces\n\nAny ordinal can be made into a topological space by endowing it with the order topology (since, being well-ordered, an ordinal is in particular totally ordered): in the absence of indication to the contrary, it is always that order topology which is meant when an ordinal is thought of as a topological space. (Note that if we are willing to accept a proper class as a topological space, then the class of all ordinals is also a topological space for the order topology.)\n\nThe set of limit points of an ordinal α is precisely the set of limit ordinals less than α. Successor ordinals (and zero) less than α are isolated points in α. In particular, the finite ordinals and ω are discrete topological spaces, and no ordinal beyond that is discrete. The ordinal α is compact as a topological space if and only if α is a successor ordinal.\n\nThe closed sets of a limit ordinal α are just the closed sets in the sense that we have already defined, namely, those which contain a limit ordinal whenever they contain all sufficiently large ordinals below it.\n\nAny ordinal is, of course, an open subset of any further ordinal. We can also define the topology on the ordinals in the following inductive way: 0 is the empty topological space, α+1 is obtained by taking the one-point compactification of α (if α is a limit ordinal; if it is not, α+1 is merely the disjoint union of α and a point), and for δ a limit ordinal, δ is equipped with the inductive limit topology.\n\nAs topological spaces, all the ordinals are Hausdorff and even normal. They are also totally disconnected (connected components are points), scattered (=every non-empty set has an isolated point; in this case, just take the smallest element), zero-dimensional (=the topology has a clopen basis: here, write an open interval (β,γ) as the union of the clopen intervals (β,γ'+1)=[β+1,γ'] for γ'<γ). However, they are not extremally disconnected in general (there is an open set, namely ω, whose closure is not open).\n\nThe topological spaces ω1 and its successor ω1+1 are frequently used as text-book examples of non-countable topological spaces. For example, in the topological space ω1+1, the element ω1 is in the closure of the subset ω1 even though no sequence of elements in ω1 has the element ω1 as its limit. The space ω1 is first-countable, but not second-countable, and ω1+1 has neither of these two properties, despite being compact. It is also worthy of note that any continuous function from ω1 to R (the real line) is eventually constant: so the Stone-Čech compactification of ω1 is ω1+1, just as its one-point compactification (in sharp contrast to ω, whose Stone-Čech compactification is much larger than ω1).\n\n### Ordinal-indexed sequences\n\nIf α is a limit ordinal and X is a set, an α-indexed sequence of elements of X merely means a function from α to X. If X is a topological space, we say that an α-indexed sequence of elements of X converges to a limit x when it converges as a net, in other words, when given any neighborhood U of x there is an ordinal β<α such that xι is in U for all ι≥β. 
This coincides with the notion of limit defined above for increasing ordinal-indexed sequences in an ordinal.\n\nOrdinal-indexed sequences are more powerful than ordinary (ω-indexed) sequences to determine limits in topology: for example, ω1 is a limit point of ω1+1 (because it is a limit ordinal), and, indeed, it is the limit of the ω1-indexed sequence which maps any ordinal less than ω1 to itself: however, it is not the limit of any ordinary (ω-indexed) sequence in ω1, since any function from the natural numbers to ω1 is bounded. However, ordinal-indexed sequences are not powerful enough to replace nets (or filters) in general: for example, on the Tychonoff plank (the product space",
"(ω_1+1)×(ω+1)), the corner point (ω_1, ω) is a limit point (it is in the closure) of the open subset ω_1×ω, but it is not the limit of an ordinal-indexed sequence.",
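"For reference, the enumerations introduced in the sections above can be collected in display form (again only a restatement of the text): $$\\begin{gathered} \\omega\\cdot\\gamma = \\text{the }\\gamma\\text{-th ordinal which is zero or a limit}, \\\\ \\omega^{\\gamma} = \\text{the }\\gamma\\text{-th additively indecomposable ordinal}, \\\\ \\varepsilon_{\\gamma} = \\text{the }\\gamma\\text{-th solution of }\\omega^{\\alpha}=\\alpha, \\\\ \\omega_{\\alpha} = \\text{the }\\alpha\\text{-th infinite initial ordinal, of cardinality }\\aleph_{\\alpha}. \\end{gathered}$$"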
]
| [
null,
"https://static.wikia.nocookie.net/psychology/images/8/83/Omega_squared.png/revision/latest/scale-to-width-down/256",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/2bcb823aaa86a0945d7299fd3edf640b5f7bfe32",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/aa8c3fbbdc240872874a1be922e5150d930cfece",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/a223c880b0ce3da8f64ee33c4f0010beee400b1a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/a223c880b0ce3da8f64ee33c4f0010beee400b1a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/d887fccd6816244086df6badf860ba816504b035",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/7ed48a5e36207156fb792fa79d29925d2f7901e8",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/d887fccd6816244086df6badf860ba816504b035",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/a223c880b0ce3da8f64ee33c4f0010beee400b1a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e4435fb58e1feb747cd9ad1835b3b0b7b0748fa9",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/a223c880b0ce3da8f64ee33c4f0010beee400b1a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b2a6f74d7256597b05dfd2a0d8924df2e177db74",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/a223c880b0ce3da8f64ee33c4f0010beee400b1a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/858a06838aaa753a28eeb613da171e218f44704c",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/2137e811accd25f099a39cf1bc58cbcdd3cb188a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/545fd099af8541605f7ee55f08225526be88ce57",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/c5321cfa797202b3e1f8620663ff43c4660ea03a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/cef072d068e10c9668853feecb1f4728b36146d0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/c5321cfa797202b3e1f8620663ff43c4660ea03a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b3f9db07b272b7034961ad583b0d8264989318b7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e038c542d9770c4bf69b2fd8bedb00eee9bf89ee",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/523f6066e848ffd999022488bcad941f23bb30d5",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/15f097b57783e051c7f975bb4cf2c20830ae0c6e",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/9fd1f42a5a41f25dc3689817aa40dca0ad1649bd",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/721cd7f8c15a2e72ad162bdfa5baea8eef98aab1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/721cd7f8c15a2e72ad162bdfa5baea8eef98aab1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/721cd7f8c15a2e72ad162bdfa5baea8eef98aab1",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e4fe9c4b6406c42b6506e3048a520b2a8a7df3e6",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/f276be4f7f2f77195c377e4b3897085a8ece69be",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e20e29ac56d6cc52eaeb2f9c0bf79ef706428ddf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e20e29ac56d6cc52eaeb2f9c0bf79ef706428ddf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/7b914a8bfef5d1b9b106048afa0aab4a99251f38",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/78c211ce8badf4ffbf9417ecceb0ef7ab0a8caed",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/649576b619f8aad6c47a90ca6266c03c4accc481",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/57263f565851485af54c589561735513ac456858",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/57263f565851485af54c589561735513ac456858",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/c5321cfa797202b3e1f8620663ff43c4660ea03a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/c5321cfa797202b3e1f8620663ff43c4660ea03a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/b79333175c8b3f0840bfb4ec41b8072c83ea88d3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/649576b619f8aad6c47a90ca6266c03c4accc481",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/48eff443f9de7a985bb94ca3bde20813ea737be8",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/ff08a9e70810bc6e66e7a719451ef1d0f2f297a8",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/48eff443f9de7a985bb94ca3bde20813ea737be8",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e20e29ac56d6cc52eaeb2f9c0bf79ef706428ddf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/7b914a8bfef5d1b9b106048afa0aab4a99251f38",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/649576b619f8aad6c47a90ca6266c03c4accc481",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/858a06838aaa753a28eeb613da171e218f44704c",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/48eff443f9de7a985bb94ca3bde20813ea737be8",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/9aa5636cabcea62c44c8b91fd9095e06054a6fa4",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/f2e322088d40a99bd8b5540c3c0fdaffced1cc3e",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/bce48dd56254d0a7c33e987c7c8eeb44c963ac04",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/858a06838aaa753a28eeb613da171e218f44704c",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/5d6b1688fbdbcb542466ef547dc77f6bae8bda45",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/bce48dd56254d0a7c33e987c7c8eeb44c963ac04",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/7698ee54562612764c71a2b0da16e68c3122c032",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/0287c1265a806f47a4187ef41cec17113c53f204",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/e20e29ac56d6cc52eaeb2f9c0bf79ef706428ddf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/0287c1265a806f47a4187ef41cec17113c53f204",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/acb0a8377db20e42274444cb181d51b5532b5844",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/db4bdb9b368101fd9034e419f85f5f8167045d6b",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/f4639128ab29706a8342fbd90ea1ca32604ec9f5",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/png/14494c5f434cfc007c7e04edffd6f751b66fbc1c",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9310639,"math_prob":0.95467675,"size":30097,"snap":"2021-31-2021-39","text_gpt3_token_len":6901,"char_repetition_ratio":0.19659057,"word_repetition_ratio":0.023540856,"special_character_ratio":0.21443997,"punctuation_ratio":0.106212765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99578714,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154],"im_url_duplicate_count":[null,1,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,2,null,null,null,1,null,null,null,1,null,null,null,3,null,1,null,null,null,null,null,1,null,null,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,3,null,null,null,null,null,null,null,1,null,1,null,4,null,4,null,2,null,6,null,3,null,2,null,2,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,1,null,null,null,4,null,2,null,3,null,3,null,null,null,1,null,1,null,6,null,3,null,1,null,6,null,1,null,2,null,4,null,2,null,5,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-19T17:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:5e20a775-ac9c-4837-acab-1cbd06994068>\",\"Content-Length\":\"261003\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ad360e1-3da5-4cc3-9a0e-c44f819e7fce>\",\"WARC-Concurrent-To\":\"<urn:uuid:406bce9e-eef2-4f84-a2fd-cfb3da1b279a>\",\"WARC-IP-Address\":\"151.101.128.194\",\"WARC-Target-URI\":\"https://psychology.wikia.org/wiki/Ordinal_number\",\"WARC-Payload-Digest\":\"sha1:VYW4CZ7TJKNC4N3RHVIMO5PPVCHYFUCN\",\"WARC-Block-Digest\":\"sha1:76MH3MOTY453VQE7UOICRO7MMCEZISK4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056892.13_warc_CC-MAIN-20210919160038-20210919190038-00681.warc.gz\"}"} |
https://hopeithelps.dev/message/240435/ | [
"# Message from C, C++ discussions\n\nNovember 2019\n\n— Its when it collects that data and passes it to the next function where the program gets weird\n\n—\n\nChoose_Array_Member_Amount function , which is what called the intial question function in the first place now has the data it needed from the initial question function and stores it in a variable called Choice\n\n— Choice is passed it a switch statement\n\n— God I lost it\n\n— Because choice is an enumeration , a default case is not needed\n\n—\n\n`ArrayData choose_Array_Member_Amount(InitialQuestionP Choose){ int ArrayMemberTotal; StringorInt choice = Choose(); ArrayData Array_Data; Array_Data.chooseStringorInt = choice; clearScreen(); switch(choice){ case INTEGAR: puts(\"How many members do you need in your Integer Array?\\n\"); break; case aSTRING: puts(\"How many members do you need in your String Array?\\n\"); break; } if(scanf(\" %d\", &ArrayMemberTotal)){ Array_Data.ArrayMemberTotal = ArrayMemberTotal; return Array_Data; } else if(scanf(\" %d\", &ArrayMemberTotal) != 1){ not_An_Number(); switch(choice){ case INTEGAR: puts(\"How many members do you need in your Integer Array?\\n\"); break; case aSTRING: puts(\"How many members do you need in your String Array?\\n\"); break; } scanf(\" %d\", &ArrayMemberTotal); } }`\n\nMessage permanent page\n\n— This is the function where the issues happens\n\n— If the user types in the create data type it works fine\n\n— Choice is an enumeration called stringorInt that has two symbols, INTEGAR, or aSTRING, which we know at this point\n\nMessage permanent page\n\n— Depending on which case it will ask the user one of two question\n\n— But the same premise , how many members do you want in your array\n\n— If i input an int it works fine,"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.79600114,"math_prob":0.6945701,"size":1640,"snap":"2022-27-2022-33","text_gpt3_token_len":399,"char_repetition_ratio":0.1277506,"word_repetition_ratio":0.1882353,"special_character_ratio":0.24085365,"punctuation_ratio":0.13448276,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97318095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T17:08:13Z\",\"WARC-Record-ID\":\"<urn:uuid:80cc079f-031d-421f-944a-a81b4a9695b6>\",\"Content-Length\":\"10002\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aafb6a4a-2922-4bfc-be78-012f802fdc33>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc49717d-ba88-4685-8df2-38cc762b9c88>\",\"WARC-IP-Address\":\"54.73.26.109\",\"WARC-Target-URI\":\"https://hopeithelps.dev/message/240435/\",\"WARC-Payload-Digest\":\"sha1:TI5ATNNE4LVUF6DSFRGESNK7OP3H5MKI\",\"WARC-Block-Digest\":\"sha1:OSIV3D7THDEX2S5EKWGJF4BNQLH4FB5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571056.58_warc_CC-MAIN-20220809155137-20220809185137-00530.warc.gz\"}"} |
https://www.ias.ac.in/listing/bibliography/boms/Y_Waseda | [
"• Y Waseda\n\nArticles written in Bulletin of Materials Science\n\n• Thermodynamic properties of Pt5La, Pt5Ce, Pt5Pr, Pt5Tb and Pt5 Tm intermetallics\n\nThe Gibbs’ energies of formation of Pt5La, Pt5Ce, Pt5Pr, Pt5Tb and Pt5 Tm intermetallic compounds have been determined in the temperature range 870–1100 K using the solid state cell:$$Ta,M + MF_3 /CaF_2 /Pt_5 M + Pt + MF_3 ,Ta$$.\n\nThe reversible emf of the cell is directly related to the Gibbs’ energy of formation of the Pt5M compound. The results can be summarized by the equations:$$\\begin{gathered} \\Delta G_f^ \\circ \\left\\langle {Pt_5 La} \\right\\rangle = - 373,150 + 6 \\cdot 60 T\\left( { \\pm 300} \\right)J mol^{ - 1} \\hfill \\\\ \\Delta G_f^ \\circ \\left\\langle {Pt_5 Ce} \\right\\rangle = - 367,070 + 5 \\cdot 79 T\\left( { \\pm 300} \\right)J mol^{ - 1} \\hfill \\\\ \\Delta G_f^ \\circ \\left\\langle {Pt_5 Pr} \\right\\rangle = - 370,540 + 4 \\cdot 69 T\\left( { \\pm 300} \\right)J mol^{ - 1} \\hfill \\\\ \\Delta G_f^ \\circ \\left\\langle {Pt_5 Tb} \\right\\rangle = - 372,280 + 4 \\cdot 11 T\\left( { \\pm 300} \\right)J mol^{ - 1} \\hfill \\\\ \\Delta G_f^ \\circ \\left\\langle {Pt_5 Tm} \\right\\rangle = - 368,230 + 4 \\cdot 89 T\\left( { \\pm 300} \\right)J mol^{ - 1} \\hfill \\\\ \\end{gathered}$$ relative to the low temperature allotropic form of the lanthanide element and solid platinum as standard states The enthalpies of formation of all the Pt5M intermetallic compounds obtained in this study are in good agreement with Miedema’s model. The experimental values are more negative than those calculated using the model. The variation of the thermodynamic properties of Pt5M compounds with atomic number of the lanthanide element is discussed in relation to valence state and molar volume.\n\n• System Cu-Rh-O: Phase diagram and thermodynamic properties of ternary oxides CuRhO2 and CuRh2O4\n\nAn isothermal section of the phase diagram for the system Cu-Rh-O at 1273 K has been established by equilibration of samples representing eighteen different compositions, and phase identification after quenching by optical and scanning electron microscopy (SEM), X-ray diffraction (XRD), and energy dispersive analysis of X-rays (EDX). In addition to the binary oxides Cu2O, CuO, and Rh2O3, two ternary oxides CuRhO2 and CuRh2O4 were identified. Both the ternary oxides were in equilibrium with metallic Rh. There was no evidence of the oxide Cu2Rh2O5 reported in the literature. Solid alloys were found to be in equilibrium with Cu2O. Based on the phase relations, two solid-state cells were designed to measure the Gibbs energies of formation of the two ternary oxides. Yttria-stabilized zirconia was used as the solid electrolyte, and an equimolar mixture of Rh+Rh2O3 as the reference electrode. The reference electrode was selected to generate a small electromotive force (emf), and thus minimize polarization of the three-phase electrode. When the driving force for oxygen transport through the solid electrolyte is small, electrochemical flux of oxygen from the high oxygen potential electrode to the low potential electrode is negligible. The measurements were conducted in the temperature range from 900 to 1300 K. The thermodynamic data can be represented by the following equations: {fx741-1} where Δf(ox)Go is the standard Gibbs energy of formation of the interoxide compounds from their component binary oxides. 
Based on the thermodynamic information, chemical potential diagrams for the system Cu-Rh-O were developed.",
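"As noted above, the reversible emf E of each cell measures the Gibbs energy of formation directly. For these MF3/CaF2-based cells the usual relation is $$\\Delta G_f^{\\circ}\\left\\langle {Pt_5 M} \\right\\rangle = -zFE,$$ with F the Faraday constant and z the charge transferred per formula unit; z = 3 (corresponding to the M3+/MF3 couple) is the standard assumption for such cells, although the abstract itself does not state the value explicitly."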
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8243984,"math_prob":0.9707023,"size":3678,"snap":"2021-31-2021-39","text_gpt3_token_len":1033,"char_repetition_ratio":0.1017964,"word_repetition_ratio":0.104811,"special_character_ratio":0.27107123,"punctuation_ratio":0.075968996,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98504394,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-02T02:27:10Z\",\"WARC-Record-ID\":\"<urn:uuid:59dc3c63-bae0-4e73-9e77-e6b911b0f7db>\",\"Content-Length\":\"30634\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:183bafd6-d90b-43a3-8b9f-6a7bef175e10>\",\"WARC-Concurrent-To\":\"<urn:uuid:984157be-85f3-4920-8fe1-114cf15abbdd>\",\"WARC-IP-Address\":\"13.232.189.126\",\"WARC-Target-URI\":\"https://www.ias.ac.in/listing/bibliography/boms/Y_Waseda\",\"WARC-Payload-Digest\":\"sha1:V2656LDDRFQVUT2PYRWF3EGFD46E3MGI\",\"WARC-Block-Digest\":\"sha1:532WFPD3BA7AMW556W34NJOPBET3XTUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154302.46_warc_CC-MAIN-20210802012641-20210802042641-00529.warc.gz\"}"} |
https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019.html | [
"Geosci. Model Dev., 12, 4443–4467, 2019\nhttps://doi.org/10.5194/gmd-12-4443-2019\nGeosci. Model Dev., 12, 4443–4467, 2019\nhttps://doi.org/10.5194/gmd-12-4443-2019\n\nDevelopment and technical paper 24 Oct 2019\n\nDevelopment and technical paper | 24 Oct 2019",
null,
"# Improving permafrost physics in the coupled Canadian Land Surface Scheme (v.3.6.2) and Canadian Terrestrial Ecosystem Model (v.2.1) (CLASS-CTEM)\n\nImproving permafrost physics in the coupled Canadian Land Surface Scheme (v.3.6.2) and Canadian Terrestrial Ecosystem Model (v.2.1) (CLASS-CTEM)\nJoe R. Melton1, Diana L. Verseghy2,*, Reinel Sospedra-Alfonso3, and Stephan Gruber4 Joe R. Melton et al.\n• 1Climate Research Division, Environment and Climate Change Canada, Victoria, B.C., Canada\n• 2Formerly at Climate Research Division, Environment and Climate Change, Toronto, Canada\n• 3Canadian Centre for Climate Modelling and Analysis, Climate Research Division, Environment and Climate Change Canada, Victoria, B.C., Canada\n• 4Department of Geography and Environmental Studies, Carleton University, Ottawa, Canada\n• *retired\n\nAbstract\n\nThe Canadian Land Surface Scheme and Canadian Terrestrial Ecosystem Model (CLASS-CTEM) together form the land surface component of the Canadian Earth System Model (CanESM). Here, we investigate the impact of changes to CLASS-CTEM that are designed to improve the simulation of permafrost physics. Overall, 18 tests were performed, including changing the model configuration (number and depth of ground layers, different soil permeable depth datasets, adding a surface moss layer), and investigating alternative parameterizations of soil hydrology, soil thermal conductivity, and snow properties. To evaluate these changes, CLASS-CTEM outputs were compared to 1570 active layer thickness (ALT) measurements from 97 observation sites that are part of the Global Terrestrial Network for Permafrost (GTN-P), 105 106 monthly ground temperature observations from 132 GTN-P borehole sites, a blend of five observation-based snow water equivalent (SWE) datasets (Blended-5), remotely sensed albedo, and seasonal discharge for major rivers draining permafrost regions. From the tests performed, the final revised model configuration has more ground layers (increased from 3 to 20) extending to greater depth (from 4.1 to 61.4 m) and uses a new soil permeable depths dataset with a surface layer of moss added. The most beneficial change to the model parameterizations was incorporation of unfrozen water in frozen soils. These changes to CLASS-CTEM cause a small improvement in simulated SWE with little change in surface albedo but greatly improve the model performance at the GTN-P ALT and borehole sites. Compared to the GTN-P observations, the revised CLASS-CTEM ALTs have a weighted mean absolute error (wMAE) of 0.41–0.47 m (depending on configuration), improved from >2.5 m for the original model, while the borehole sites see a consistent improvement in wMAE for most seasons and depths considered, with seasonal wMAE values for the shallow surface layers of the revised model simulation of at most 3.7 C, which is 1.2 C more than the wMAE of the screen-level air temperature used to drive the model as compared to site-level observations (2.5 C). Subgrid heterogeneity estimates were derived from the standard deviation of ALT on the 1 km2 measurement grids at the GTN-P ALT sites, the spread in wMAE in grid cells with multiple GTN-P ALT sites, as well as from 35 boreholes measured within a 1200 km2 region as part of the Slave Province Surficial Materials and Permafrost Study. 
Given the size of the model grid cells (approximately 2.8), subgrid heterogeneity makes it likely difficult to appreciably reduce the wMAE of ALT or borehole temperatures much further.\n\nShare\n1 Introduction\n\nPermafrost underlies between 9 % and 14 % of the exposed land surface north of 60 S (13–18×106 km2Gruber2012). The presence of perennially frozen soil at depth has strong impacts on local hydrology, energy fluxes, plant communities, and carbon dynamics. Several factors influence ground temperature and therefore the presence of permafrost, including snow cover, vegetation structure and function, hydrology, and topography . Permafrost has been warming and active layers have thickened over the last three decades . This trend is expected to continue due to climate change making the carbon presently contained in frozen soils vulnerable to release to the atmosphere either as carbon dioxide or methane, depending on local conditions. Since the carbon stored in frozen soils becomes readily accessible to microbial respiration once soils thaw, accurately simulating the physics of the permafrost response to a changing climate is vital for reliable predictions of the permafrost carbon feedback to climate change.\n\nThe Canadian Land Surface Scheme (CLASS) is the land surface component of the Canadian Earth System Model (CanESM). CLASS has been tested for its cold region performance in several studies previously. evaluated CLASS (v.2.5) at a site on the Alaska North Slope. The principal conclusions of the study were that CLASS was most sensitive to ground column depth and soil composition with lesser sensitivity to variations in the radiative fluxes, specification of the overlying vegetation, and the initial soil moisture. tested CLASS at a fen wetland and a willow–birch forest in the northern Hudson Bay lowlands. They found the upper soil layer temperatures to be consistently overestimated using the model's default mineral soil parameterization, whereas using the organic soil parametrization of improved the simulated temperatures significantly. did some tests with a subarctic open woodland site in Churchill, Manitoba, using CLASS with the parameterization. Recommendations from their work included introducing a non-vascular plant functional type (PFT) and a sparse canopy representation, varying the minimum stomatal conductance according to PFT, and re-examination of the snowmelt algorithm. The snowmelt recommendations were subsequently investigated by and . More recently, used CLASS (v.3.5) in the Canadian Regional Climate Model version 5 (CRCM5) to look at the impact of snow and soil parameterizations on simulated permafrost and climate. Their simulations included offline tests using the ERA-Interim meteorological forcing over the pan-Arctic region. Paquin and Sushama tested several options that have previously been made available in CLASS but not yet implemented operationally, including (1) increasing the number and depth of soil layers (47 levels extending to 65 m), (2) using the parameterization for peatlands and assuming an organic surface soil layer for most other regions, and (3) changing the snow thermal conductivity parameterization from to . The formulation was subsequently adopted in CLASS v.3.6 (Verseghy2017). also used CLASS in CRCM5 to investigate cold region hydrological performance. They reported improvements by incorporating super-cooled soil water, fractional permeable area, and a changed hydraulic conductivity formulation for frozen soil. 
coupled CLASS v.3.6 to the Prairie Blowing Snow Model (PBSM) to simulate the influence of chinooks (Föhn winds) over the South Saskatchewan River Basin. investigated 15 alternative parameterizations relating to the model physics and concluded by recommending that four of those be considered for adoption in CLASS to improve the simulated snow water equivalent (SWE) and soil water. Three of the suggested parameterizations dealt with snow properties and the fourth was related to soil thermal conductivity .\n\nOur study evaluates the individual and combined effects of suggested enhancements to the Canadian Land Surface Scheme coupled to the Canadian Terrestrial Ecosystem Model (CLASS-CTEM) for simulating processes relevant to soils with permafrost or pronounced seasonal freezing. The model enhancements suggested above have previously been recommended in research studies but not been previously implemented into the CLASS-CTEM framework (unless otherwise noted). Here, we investigate the impact of these previously proposed model enhancements as well as several model configuration changes suggested in the literature. Based on this evaluation, a revised version of CLASS-CTEM containing several enhancements is described and also evaluated. To evaluate model behavior, we draw upon measurements of the thickness of annual thaw in perennially frozen soils (active layer thickness) and borehole temperature sites from the Global Terrestrial Network for Permafrost (GTN-P2016) along with other observation-based datasets for snow, surface albedo, and runoff.\n\nNumerous studies have investigated the permafrost physics performance of models (e.g., see review in Riseborough et al.2008) including other large-scale models used in Earth system model (ESM) applications, such as JULES , JS-BACH , and the Community Land Model (CLM, e.g., Alexeev et al.2007; Lawrence et al.2008; Lee et al.2014), allowing us to design our proposed experiments based on their conclusions. The performance of CLASS-CTEM permafrost physics will be evaluated through offline simulations where the model is forced with reanalysis meteorology to avoid biases found in the simulated climate of the coupled model as well as biases in the associated feedbacks. This study is focused on model performance at the large spatial scale of the CanESM, as our principle aim is to improve the simulated permafrost physics so that the carbon cycle processes in these regions is well bounded. It is therefore not aimed at shedding light on physical processes in permafrost zones or investigating model performance at individual point locations as the model performance at a single site does not directly translate to model performance over large regions.\n\nIn the remainder of the paper, Sect. 2 describes the CLASS-CTEM model, the study design as well as parameterizations tested, and the GTN-P sites used in model evaluation. Section 3 evaluates the model performance and discusses the influence of subgrid heterogeneity, while Sect. 4 gives overall conclusions and discusses limitations of our study and future directions for CLASS-CTEM development.\n\n(SoilGrids; Shangguan et al.2017)(Pel16; Pelletier et al.2016)\n\nTable 1List of experiments and the associated model theme they relate to. Experiments denoted with an asterisk were run with both the Climate Research Unit – National Centers for Environmental Prediction (CRUNCEP) and the Climate Research Unit – Japanese 55-year Reanalysis (CRUJRA55) meteorological forcing datasets.",
null,
"2 Experimental setup\n\n## 2.1 CLASS-CTEM\n\nCLASS (v.3.6.2; Verseghy2017) coupled with CTEM (v.2.1; Melton and Arora2016) forms the land surface component of the CanESM. CLASS performs the land surface energy and water balance calculations on a, typically, half-hourly time step. The model uses leaf area index (LAI), rooting depth, canopy mass, and vegetation height to evaluate the energy and water balance terms of the vegetation canopy and its interactions with the atmosphere. The number of soil layers can vary depending on the application but the standard model setup uses three soil layers of 0.1, 0.25, and 3.75 m thickness. The soil texture (sand, clay, organic matter) dataset used by CLASS-CTEM is the Global Soil Dataset for use in Earth system models (GSDE; Shangguan et al.2014). The soil permeable depth is from (hereafter Zobler86). CLASS v.3.6.2 adopts the soil albedo approach of with the incorporation of a soil color index geophysical field.\n\nCLASS prognostically determines the water content (liquid and frozen) and temperature of all soil layers at each time step. Also calculated at each time step, depending on ambient conditions, are the temperature, mass, albedo, and density of a single-layer snowpack, interception of rain and snow on the vegetation canopy, and amount of ponded water on the soil surface. Mineral soils are parameterized using the pedotransfer functions of and . Organic soils (organic matter >30 % by weight) are modeled as peat following . In the standard CLASS-CTEM framework, lateral transfers of heat or moisture between grid cells are neglected; the treatment of processes such as streamflow and blowing snow requires the inclusion of separate, specialized routines (e.g., Soulis et al.2000; Arora et al.2001; MacDonald2015). All simulations presented here have no geothermal heat flux at the bottom of the soil column.\n\nCTEM calculates the carbon and vegetation dynamics on a daily time step receiving from CLASS daily mean soil moisture, soil temperature, and net radiation. Photosynthesis and canopy conductance occur on the CLASS time step. CTEM simulates the respiratory costs and carbon uptake for nine PFTs which are subsets of the four CLASS PFTs. The CLASS PFTs (with corresponding CTEM PFTs in parentheses) are needleleaf tree (needleleaf deciduous and needleleaf evergreen), broadleaf tree (broadleaf cold deciduous, broadleaf drought/dry deciduous, and broadleaf evergreen), crop (photosynthetic pathway C3 and C4), and grass (C3 and C4). CTEM carries five carbon pools representing plant leaves, roots, and stems, along with two detrital pools for litter and soil C.\n\nFor global simulations, CLASS-CTEM is typically run at the CanESM atmosphere resolution, which is approximately 2.8 by 2.8, corresponding to a grid cell size of approximately 49 000 km2 at 45 latitude and about 33 500 km2 at 70. Various studies have used observation-based datasets to evaluate CLASS-CTEM at scales from site level to global (e.g., Peng et al.2014; Melton and Arora2014, 2016). While CLASS-CTEM is capable of running in a mosaic (multiple tiles per grid cell) configuration (e.g., Melton and Arora2014; Melton et al.2017), the simulations presented here are run with a single tile per grid cell.\n\n## 2.2 Study design\n\nOverall, 18 experiments were run to assess the impact of model geophysical fields (soil texture, soil permeable depth, and meteorological forcing), model setup (number of soil layers, addition of a moss layer), and model parameterizations (Table 1). 
The physical quantities used for model evaluation are presented in the next section. The initial model version (Exp. Base model) uses three ground layers of thicknesses 0.1, 0.25, and 3.75 m for a total depth of 4.1 m. The first seven experiments address model configuration and input geophysical fields. To test the sensitivity of simulated permafrost to meteorological forcing, CLASS-CTEM was forced with two different meteorological datasets, the Climate Research Unit – National Centers for Environmental Prediction (CRUNCEP v.8; Viovy2016) and the Climate Research Unit – Japanese 55-year Reanalysis (CRUJRA55 v.1.0.5; Harris et al.2014; Kobayashi et al.2015). CRUNCEP was used as the base forcing dataset with additional runs performed for some experiments with CRUJRA55 (see Table 1). While both of these meteorological datasets use the CRU TS dataset as the underlying monthly climatology, they differ in their meteorological models (NCEP or JRA55). Additionally, the spatial resolution of JRA55 is 0.5, while that of NCEP is 2.5. Thus, the two datasets differ in their spatial and high-frequency (sub-monthly) temporal variability. However, these differences will be somewhat lessened by their regridding to the CLASS-CTEM model resolution. The meteorological inputs (surface air temperature, surface pressure, specific humidity, wind speed, precipitation, and longwave and shortwave radiation) are disaggregated from 6-hourly to half-hourly time steps while the simulation runs following the methodology in . Both datasets are available over the extended periods necessary for permafrost simulation (CRUNCEP v.8: 1901–2016; CRUJRA55 v.1.0.5: 1901–2017).\n\nExp. 20 ground layers changes the number of ground layers from 3 to 20. The 20 layers have higher resolution near the surface with thicker layers at depth (see Table A1). If the permeable soil depth is shallower than the modeled ground column, layers below the soil permeable depth are treated like hydrologically inactive bedrock and are assigned thermal conductivity (2.5 W m−1 K−1) and heat capacity (2.13×106 J m−3 K−1) values characteristic of sand particles (Verseghy2017). If the transition from permeable soil to impermeable bedrock occurs within a soil layer, CLASS calculates the water fluxes only in the depth of permeable soil but simulates one soil temperature for the layer.\n\nThe influence of the soil permeable depth dataset is examined by replacing the soil permeable depths of Zobler86 with either the SoilGrids dataset (Exp. SoilGrids depth, Shangguan et al.2017) or that of (hereafter referred to as Pel16; Exp. Pel16 depth). The influence of a moss layer is examined in Exps. SoilGrids + Moss and Pel16 + Moss. In these experiments, the top soil layer is replaced with photosynthetically inactive moss with a higher porosity, hydraulic conductivity, and heat capacity than mineral soil following (described in Appendix A1).\n\nWhereas the first series of experiments just described investigated aspects of the model setup, the second series of experiments investigates alternative parameterizations and uses Exp. SoilGrids + Moss as a starting point (the same geophysical fields and model configuration). The alternative parameterizations are described in detail in Appendix Sects. A2 to A7. Briefly, these experiments fall into three main areas related to (1) heat transfer, (2) snow, and (3) hydrology. The heat transfer experiments replace CLASS-CTEM's default soil thermal conductivity parameterization with that of following the recommendations of (Exp. 
deVries thermal cond. results are discussed in the Supplement). As does not account for frozen water in soil, whereas the study of does, a further experiment uses a recently published parameterization that simplifies and extends to include both frozen and unfrozen water (Exp. Tian16 thermal cond.; see Sect. A2; Tian et al.2016). Four experiments were devoted to aspects of how snow is simulated in CLASS-CTEM. Exps. Snow cover:Yang97 and Snow cover: Brown03 replace CLASS-CTEM's default function to relate snow depth to grid cell fractional snow cover from a linear relationship (Verseghy2017) to a hyperbolic tangent (following Yang et al.1997) or an exponential function (following Brown et al.2003), respectively (Fig. S2). Another experiment (Exp. Fresh snow density) changed the calculation for the density of freshly fallen snow from one based solely on air temperature (Verseghy2017) to also considering wind speed following the CROCUS model . The final experiment concerned with aspects of the snow parameterization is Exp. Snow albedo decay. CLASS-CTEM uses an empirical exponential decay function to simulate the decrease in snow albedo as snow ages. In Exp. Snow albedo decay, the default parameterization is replaced by an efficient spectral method . The last series of experiments looked at hydrology. Water in soils can be, partially or completely, unfrozen at temperatures below 0 C due to the effects of interfacial curvature, adsorption forces, and solutes . Exp. Super-cooled water incorporated the unfrozen water in frozen soil parameterization of , and Exp. Modif. hydrology modifies the soil matric potential and saturated hydraulic conductivity to account for the influence of frozen water following .\n\nFor model spinup, the meteorological forcing years of 1901–1925 were cycled over repeatedly until the model reached active layer thickness (ALT) equilibrium (less than 0.05 m difference between average ALT and spinup cycles across all cells with permafrost within them). To run from 1851 to 2016 while atmospheric CO2 concentration and land cover evolved, the climate was cycled over twice from 1901 to 1925 for the years 1851–1900; then, the model climate was allowed to run freely from 1901 to 2016. For the simulations presented here, CLASS-CTEM was run with a prescribed, rather than prognostically determined, distribution of PFTs.\n\nActive layer thickness in CLASS-CTEM is determined by the temperature and water content of the ground layers. If a layer's temperature is 0 C, the frozen water fraction is used to estimate the thickness of freezing within the layer; i.e., if half of the water content in the layer is frozen, the ALT is assumed to be halfway through the layer. Permafrost area in the model domain was calculated by selecting grid cells with active layer thicknesses less than the model total ground column and multiplying by the grid cell area.\n\n## 2.3 Datasets used for model evaluation\n\n### 2.3.1 Active layer thickness sites from the Global Terrestrial Network for Permafrost (GTN-P)\n\nTo evaluate CLASS-CTEM, 97 open-access GTN-P ALT sites were chosen due to their locations in regions of continuous or discontinuous permafrost (last access: 11 May 2017; Table S1 and Fig. 1). No sites in areas of sporadic or isolated permafrost were used due to the difficulty in representing this type of permafrost within a large model grid. While we attempted to have as broad a spatial coverage of the GTN-P sites as possible, no open-access sites were available for eastern Canada and Fennoscandia. 
For comparison with CLASS-CTEM, at each observation time, the average of the sampling grid was determined at each GTN-P ALT site. Then for each site, the sampling grid averages were converted to monthly mean values. The closest grid cell was determined from the center of the model grid cells to the ALT sampling location and the modeled monthly average ALTs were compared to the observed values. This resulted in the 97 GTN-P sites, with 1570 ALT observations, being placed into 37 CLASS-CTEM grid cells. As multiple GTN-P sites can be co-located in one CLASS-CTEM grid cell, the weighted mean absolute error (wMAE) for a grid cell was found by averaging the MAE calculated at each site situated within one CLASS-CTEM grid cell.\n\n### 2.3.2 Borehole temperatures from the GTN-P\n\nBorehole data from the GTN-P were downloaded for 132 open-access sites found in the permafrost (including continuous, discontinuous, sporadic, and isolated) or permafrost-free domains (last access: 11 May 2017). Most of the boreholes are in Eurasia, with few in North America (Fig. 1; Table S2). Each site has its own unique time period of observations and number and depth of observations. At each site, the depths of borehole temperatures were selected to be within 0.05 and 3.0 m of the ground surface and the observations were averaged to monthly values. For each borehole and each observation depth, the CLASS-CTEM output was selected for the nearest grid cell and the same month as the observations. Linear interpolation was then used to determine the simulated soil temperature for the same soil depth as the observation. As with the ALT sites, several steps were needed to avoid biasing the comparison with CLASS-CTEM. First, borehole sites co-located in the same CLASS-CTEM grid cell were flagged. The 132 borehole sites are located in 73 unique CLASS-CTEM grid cells. Secondly, the number of observations varied by borehole site so when calculating the kernel density estimates (KDEs; presented later) within a model grid cell, each observation was weighted by the total number of observations per grid cell. Thus, grid cells with many GTN-P borehole sites will have each observation weighted less than sites with fewer observations so each grid cell contributes equally to the KDE and the calculation of wMAE.\n\n### 2.3.3 Snow, albedo, and runoff\n\nSWE from CLASS-CTEM is compared to the Blended-5 dataset for the period from January 1981 to December 2010. Blended-5 is a multi-dataset SWE product developed by that combines five observation-based SWE datasets. Our analysis is limited to regions northward of 45 N with climatological SWE>4 mm to avoid regions of ephemeral snow. Simulated land surface albedo is compared to the MODIS MCD43C3 white-sky albedo for the period spanning February 2000 to December 2013. Similar to SWE, we limit our analysis to regions northward of 45 N. We compared our simulated seasonal runoff to measured discharge rates for seven major river basins that drain permafrost regions for the period from 1965 to 1984 (Ob, Volga, Lena, Yenisei, Yukon, Mackenzie, and Amur rivers; UNESCO Press1993). This comparison is limited to seasonal discharge since the CLASS-CTEM runoff is not routed; thus, the timing of transport of the water from each grid cell to the river mouth is neglected. 
On a seasonal timescale, this should not cause serious errors, but the results must be interpreted with caution.\n\n### 2.3.4 Permafrost distributions from the literature\n\nBecause permafrost cannot easily be observed spatially and reliable data are sparse, global or continental-scale simulation results are often compared to estimates of permafrost distributions. Most prominently, this is the “circum-Arctic map of permafrost and ground-ice conditions” that distinguishes zones of permafrost extent at a scale of 1:10 000 000. These zones are based on expert assessment and manual delineation, often following isotherms of mean annual air temperature. Here, we use “permafrost extent” to refer to the fraction (0–1) of the surface that is underlain by permafrost within a pixel or a polygon and “permafrost area” to refer to the actual area (km2) underlain by permafrost, and “permafrost region” is used to denote the area (km2) where some proportion of the ground can be expected to contain permafrost. The permafrost region is commonly taken to include areas with a permafrost extent exceeding some threshold . These definitions are relevant because CLASS-CTEM produces a binary result; i.e., permafrost is present or absent in a cell, and the classes and the continuous index (Gruber2012) of permafrost extent that are used for comparison need to be interpreted appropriately. Neglecting aggregation effects , which arise when the average fine-scale behavior of a simulated environmental variable is not equal to the simulated coarse-scale behavior, a threshold of permafrost extent at 50 % provides a first estimate of the region that should be compared with a model producing a binary result. For example, environmental conditions that give rise to a permafrost extent of 60 % would likely be considered to have permafrost in the binary model and their area would be counted as having permafrost entirely (rather than only 60 % of it). Similarly, conditions that produce a permafrost extent of 40 % would likely result in not having permafrost in a binary model. As a consequence, we use the total area of all polygons or pixels with an expected permafrost extent larger than 50 % as the appropriate area to compare with the results from CLASS-CTEM, termed “region_50”. This includes continuous and extensive discontinuous permafrost in the map totalling 15 M km2 , and a similar number can be interpreted from a plot of permafrost zonation index and permafrost region (Gruber2012).",
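Since CLASS-CTEM produces a binary permafrost field, the bookkeeping described above reduces to thresholding the mapped permafrost extent at 50 % and summing pixel areas. A minimal sketch with hypothetical extent and area arrays, also contrasting region_50 with the extent-weighted permafrost area:

```python
import numpy as np

def permafrost_region_50(extent_frac, pixel_area_km2, threshold=0.5):
    """Area of all pixels whose expected permafrost extent exceeds the
    threshold ('region_50', Sect. 2.3.4) -- the quantity compared with a
    model that marks each cell as entirely permafrost or permafrost-free."""
    extent_frac = np.asarray(extent_frac)
    return float(np.sum(np.asarray(pixel_area_km2)[extent_frac > threshold]))

def permafrost_area_from_extent(extent_frac, pixel_area_km2):
    """Actual permafrost area: extent-weighted sum of pixel areas."""
    return float(np.sum(np.asarray(extent_frac) * np.asarray(pixel_area_km2)))

# A 60 % pixel counts entirely toward region_50 while a 40 % pixel contributes
# nothing, mirroring the binary behaviour discussed above.
extent = np.array([0.6, 0.4, 0.9])
area = np.array([1000.0, 1000.0, 1000.0])  # km2
print(permafrost_region_50(extent, area))         # 2000.0
print(permafrost_area_from_extent(extent, area))  # 1900.0
```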
"Figure 1Locations of the 97 GTN-P ALT sites (blue; Table S1), 132 GTN-P borehole observation sites (red; Table S2), and the Slave Province Surficial Materials and Permafrost Study (SPSMPS; green; Lac de Gras, Northwest Territories, Canada) used for model evaluation. Each site is classified according to its permafrost zone listed in the GTN-P. The site markers are semi-transparent; hence, regions with many closely located GTN-P sites will cause overlap, and darkening, of the markers.\n\n3 Results and discussion\n\n## 3.1 Comparison against GTN-P ALT sites: sites with no simulated permafrost\n\nA first simple test of permafrost performance for CLASS-CTEM is to check whether the GTN-P ALT sites are in fact simulated as containing permafrost. Given that CLASS-CTEM is being run on the CanESM grid (approximately 2.8), it is possible that site conditions such as meteorology, orography, or vegetation at the GTN-P ALT measurement sites could be quite dissimilar to those of the nearest grid cell, which covers many thousands of km2. In such cases, CLASS-CTEM could simulate no permafrost where some permafrost indeed exists. Per experiment, the number of sites with no permafrost simulated are listed in Table 2. These ALT sites were removed from further analysis as the ALT in sites without permafrost is not defined. Most experiments had between six and eight observation sites (corresponding to four to six grid cells) incorrectly simulated as permafrost-free (ISPF). Exp. Base model has significantly more sites ISPF at 15, corresponding to two or three additional grid cells. In general, for the same experiment, the CRUJRA55 meteorological forcing results in fewer grid cells ISPF than CRUNCEP. Small differences in the simulated presence of permafrost (or the number of sites ISPF) are to be expected given the possibility of errors in the meteorological forcing and local variations in site-level characteristics, but large differences can indicate problems with the model setup and parameterizations.\n\n## 3.2 Initial model performance\n\nExp. Base model simulates a permafrost area (PA) of 8.6 M km2 (north of 60 S; Table 2), with permafrost confined to northern Siberia, Alaska, and the northern edge of Canada (Fig. 2). This low PA is in line with that simulated by CLASS-CTEM when coupled within the CanESM, although the spatial distribution is different due to the different atmospheric forcing . Also plotted in Fig. 2 is the PE estimate of . The dataset gives permafrost spatial distribution in four classifications which are not directly comparable to ALTs but may be used to give a general indication of PA from an independent estimate. Due to the coarseness of the model grid, CLASS-CTEM is not able to simulate isolated or sporadic permafrost. For regions of discontinuous and continuous permafrost, comparing the estimated distribution of to the modeled ALT indicates poor agreement.\n\nWith such a small permafrost area, many of the GTN-P ALT sites were ISPF as mentioned above. Of the GTN-P ALT sites where CLASS-CTEM simulated permafrost, Base model simulations show overly shallow ALTs with an average mean absolute error (MAE; described in Sect. 2.3) of 0.410 m. Thus, it appears the modeled soil temperatures are too warm in the more southerly permafrost domain (PD), leading to no permafrost simulated, and too cool at the higher latitudes. 
However, it should be noted that the model configuration of three ground layers in this experiment makes an accurate estimation of the ALT difficult since the lowest model layer is quite thick (3.75 m).
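The screening described in Sect. 3.1 amounts to a nearest-grid-cell lookup followed by a presence check. A minimal sketch, in which the site coordinates, cell centres, distance metric, and permafrost mask are all hypothetical stand-ins for the operational matching:

```python
import numpy as np

def nearest_cell(site_lat, site_lon, cell_lats, cell_lons):
    """Index of the grid cell whose centre is closest to a site; a simple
    planar approximation with a cos(latitude) stretch suffices for a sketch."""
    dlat = cell_lats - site_lat
    dlon = np.cos(np.radians(site_lat)) * (cell_lons - site_lon)
    return int(np.argmin(dlat ** 2 + dlon ** 2))

def count_ispf(site_coords, cell_lats, cell_lons, cell_has_permafrost):
    """Count GTN-P ALT sites (and unique grid cells) incorrectly simulated
    as permafrost-free (ISPF; cf. Table 2)."""
    ispf_sites, ispf_cells = 0, set()
    for lat, lon in site_coords:
        i = nearest_cell(lat, lon, cell_lats, cell_lons)
        if not cell_has_permafrost[i]:
            ispf_sites += 1
            ispf_cells.add(i)
    return ispf_sites, len(ispf_cells)
```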
"Figure 2ALTs in meters for experiments listed in Table 1 alongside the permafrost map of (bottom right). Experiments with an asterisk prefixing their name use a model configuration based on the SoilGrids + Moss setup. All experiments shown here use CRUNCEP for the meteorological forcing.\n\nTable 2Permafrost area as simulated by CLASS-CTEM (average of 1996–2015) along with literature estimates for terrestrial permafrost north of 60 S. The number of GTN-P sites which CLASS-CTEM incorrectly simulated as permafrost-free (ISPF) is also listed along with the number of corresponding grid cells in square brackets. These GTN-P sites were removed from further analysis since ALT is not defined in locations with no permafrost. The numbers in parentheses indicate the values when CRUJRA55 was used as the meteorological forcing instead of CRUNCEP. See Sect. 2.3.4 for distinction between permafrost area and permafrost region.",
"## 3.3 Increasing the number of ground layers\n\nIncreasing the number of ground layers from 3 to 20 decreases the number of GTN-P ALT sites ISPF from 15 to 7 (Table 2). Figure 3 shows the difference between the simulated and observed ALT at each grid cell with GTN-P ALT sites for selected experiments. The average MAE computed against the GTN-P ALT observations for Exp. 20 ground layers is over 2.5 m with simulated ALTs strongly overestimated (Fig. 3). When the number and depth of ground layers is increased, but the soil permeable depth is left unchanged, CLASS-CTEM simulates the ground layers below the permeable soil depth as impermeable bedrock. The absence of water and therefore of heat consumption by melting ice in these lower ground layers causes the model soil column to be generally too warm. However, the total global PA increases from 8.6 M km2 simulated by Exp. Base model to 16.8 M km2 (Table 2), with an increase in permafrost area primarily in the southern fringes of eastern Siberia and Canada, along with a general deepening of ALT across the high latitudes (Fig. 2). This seeming incongruity of warmer soils with a larger permafrost area likely relates to moving the boundary of zero heat flux from 4.1 m, a depth where seasonal temperature variations can penetrate, to 61.4 m. The shallower modeled soil column in Exp. Base model inhibits the formation of permafrost because of the concentration of the annual heat flux oscillation in the upper few meters of the soil.\n\nThe wMAE calculated for each season from CLASS-CTEM's simulated ground temperatures compared to GTN-P borehole temperatures for three depth zones shows an improvement at all depths and seasons for Exp. 20 ground layers over Exp. Base model (Fig. 4). Generally, across all experiments, CLASS-CTEM performs better with increasing depth. Seasonally, winter is generally simulated best, with summer showing the highest wMAE values. These patterns indicate that the largest challenges to accurate ground temperature simulation are coming from the high variability in forcing at the land surface and from the difficulty in accurately simulating the summertime heat pulse into the ground column.\n\nTo look in closer detail at the model performance for the GTN-P borehole sites, Fig. 5 shows the Gaussian kernel density estimate (KDE) derived from differences between the simulated and observed borehole temperatures. For shallow soils, as the seasons progress from winter to fall, the proportion of instances with a strong cold bias decreases with a warm soil bias taking over in summer, especially in the shallowest depth band. This would indicate the modeled soil heat fluxes are somewhat exaggerated. The fall period generally has the least bias, potentially due to the loss of the warm summer bias but prior to the establishment of the cold winter bias.\n\n## 3.4 Increasing the soil permeable depths\n\nChanging the soil permeable depth dataset to SoilGrids (Exp. SoilGrids depth) from Zobler86 gives a general improvement over the Exp. 20 ground layers simulations with a drop in average MAE to 1.162 m at the GTN-P ALT sites (Fig. 3). There is also a shift to shallower ALTs (Fig. 2), with a slight decrease in PA to 15.7 M km2, which is within the range literature estimates (Table 2 and discussed further in Sect. 2.3.4). The greater permeable depths associated with SoilGrids lead to deeper penetration of water into the soil, resulting in more water being allocated to runoff than made available for plant transpiration or soil evaporation (Fig. 
S3). Simulations with the alternative soil permeable depth dataset (Exp. Pel16 depth) generally show similar patterns of latent heat flux, runoff, and LAI (not shown) to Exp. SoilGrids depth. The Exp. Pel16 depth simulations have better agreement with the GTN-P ALT observations, reducing the wMAE to 0.757 m (Fig. 3). Exp. SoilGrids also further improves the model's performance at all depths and seasons compared to the GTN-P borehole sites (Fig. 4).\n\nNumerous studies have pointed to the importance of increasing the simulated ground column depth and number of ground layers to better capture the decay with depth of the influence of multi-decadal variability . Of particular relevance to our study, used CLASS in CRCM5 and found shallow soil configurations (permeable depth < 1 m throughout much of the model domain) to lead to overly strong seasonal cycles with resulting overly deep ALTs, similar to the work of , and in line with our Base model simulation with its small estimated PA.\n\nThe availability of comprehensive global soil permeable depth datasets is relatively recent. Previous studies would often assume a constant permeable soil depth, either shallow or deep with the deeper layers hydrologically inactive. Comparing the three permeable depth datasets (Zobler86, SoilGrids, and Pel16; Fig. S1) shows Zobler86 to be by far the shallowest, while SoilGrids and Pel16 disagree on the spatial distribution of the permeable depths for the high-latitude regions. Pel16 shows deep soils in the Canadian boreal forest, Finland, and central southern Russia, with shallower soils in the Siberian plateau. SoilGrids has more very deep soils (>50 m) especially in the west Siberian region and the Ural. These differences in permeable depth have an impact on the simulated ALT, as the SoilGrids and Pel16 experiments perform quite differently at the GTN-P ALT sites (Fig. 3) due to the strong impact of freezing and thawing of water in the soil column.",
"Figure 3Differences between the ALTs from the experimental model runs and those of the Global Terrestrial Network for Permafrost ALT sites (Table S1). Each dot represents a grid cell with one or more GTN-P sites (see Sect. 2.3). In this representation (a “bee swarm”), displacement in the y direction is only to allow each data point to be visible. The background shading is a Gaussian kernel density estimate (KDE), with the quartiles of the distribution indicated by dashed vertical lines within the KDE plot. The mean absolute error (MAE) is produced by calculating the MAE at each grid cell and taking the average across all cells. As the number of sites ISPF differs between experiments (Table 2), the number of grid cells where CLASS-CTEM simulated permafrost is also listed. The total number of grid cells with GTN-P sites is 37. The two meteorological forcings are shown for the experiments where the CRUJRA55 forcing was also used. Experiments below the dashed red line use the model setup from Exp. SoilGrids + Moss as their starting point (Table 1).\n\n## 3.5 Adding an upper layer of organic matter/moss to the soil column\n\nCLASS-CTEM ALTs with both Pel16 and SoilGrids are generally biased deeper than observed at the GTN-P sites (Fig. 3), indicating that the ground surface is either overly insulated from the cold atmosphere during the winter or absorbing too much heat during the summer months. The principal modulating influences on ground heat fluxes in cold regions are hydrology, snow cover (both of which we deal with later), vegetation structure and function, and topography . Vegetation canopies shade the soil surface, attenuating radiation and reducing warming in the summer season. Additionally, dense forests capture snow in the canopy which prevents it from reaching the ground and insulating the soil surface further cooling soils. Another aspect of vegetation influence is the insulating effect of a surface layer of moss or organic matter. Mosses are generally more abundant at high latitudes and have been shown to decrease growing season surface soil temperatures . The effect of mosses on the ground heat flux has also been demonstrated through field experiments , and modeling studies have incorporated organic layers or bryophytes to improve permafrost dynamics. Exps. SoilGrids + Moss and Pel16 + Moss both incorporate a non-photosynthetic moss layer in place of the first layer of soil (see Sect. 2.2) and both simulate generally shallower ALTs than their parent simulations (Exps. SoilGrids depth and Pel16 depth, respectively; Fig. 2). The effect of moss introduction for Exp. SoilGrids + Moss is to reduce average MAE from 1.162 to 0.472 m for the GTN-P ALT sites (Fig. 3). The general cooling influence is evident by comparing to the GTN-P ALT sites (Fig. 3) and also through the increase in simulated PA from 15.7 to 17.9 M km2. A similar improvement is seen for Exp. Pel16+Moss. The high porosity of the moss layer causes less water to be available at the surface for evaporation, reducing the latent heat flux and making more water available for runoff, and its insulating effect keeps the soil surface cooler, which reduces plant growth and also the sensible heat flux (Fig. S4). The reduction in plant growth due to cooler soils also reduces water uptake for transpiration further increasing runoff.\n\nComparing simulated ground temperatures to observations at the GTN-P borehole sites shows a slight increase in wMAE at all depth ranges and seasons compared to the SoilGrids simulation (Fig. 4). 
Comparing the KDE plots of the bias distribution between modeled and observed borehole temperatures for the SoilGrids + Moss and the SoilGrids simulations shows an increased cool bias in the shallow soil which is especially evident in summer (Fig. 5). This bias extends deeper into the soil column, albeit weakening with depth. The cooling of soils due to the incorporation of a moss layer was also found by ; however, their simulations included a dynamic extent for moss cover. The creation of a cold bias due to the introduction of a moss layer is reasonable considering that the moss layer was applied to all areas uniformly. While this experiment was intended to understand the impact of moss on simulated ground temperatures, future work should attempt to place moss with a more realistic distribution, similar to .\n\nComparing the model experiment outputs to the GTN-P sites in Fig. 3, it is evident that increasing the number of ground layers and the soil permeable depth and incorporating a top layer of moss/organic matter improves the simulated ALTs. These changes have been suggested by other studies as mentioned above, and our results are in line with them. The next experiments use the model configuration from Exp. SoilGrids + Moss as a starting point. While Pel16 generally gave better average MAE values than SoilGrids for ALT compared to the GTN-P sites (Fig. 3), SoilGrids appears to be better validated (see Shangguan et al.2017, Figs. 9–11). Both datasets, however, suffer from sparse data in high latitudes (e.g., Shangguan et al.2017, Fig. 2). Additionally, while it appears that the addition of moss can introduce a summer cool bias in ground temperatures (as discussed above), given the extensive distribution of bryophytes (see the simulated distribution in Fig. 4b in Porada et al.2016), we chose to include moss in our further simulations.\n\n## 3.6 Testing alternate soil thermal conductivity formulations\n\nExp. Tian16 thermal cond. tests the formulation, which is based on but explicitly accounts for the influence of ice (see Sects. A2 and S1). The new formulation simulates a much larger PE than Exp. SoilGrids + Moss at 21.2 M km2 with generally shallower ALTs in most regions except for the western edge of simulated Siberian permafrost (Fig. S5). The average MAE at the GTN-P ALT sites is reduced to 0.314 m (Fig. 3); however, at the GTN-P borehole sites, the simulated ground temperatures are biased cold, primarily in summer and fall, and worsening with depth (Figs. 4 and 5).",
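For reference, CLASS-CTEM's default relative-conductivity formulation, which the Tian16 scheme replaces in this experiment, is written out in Appendix A2 (Eqs. A1–A6). The sketch below evaluates it directly from those equations; the solid, water, and ice conductivities passed in are typical handbook values and are assumptions of this illustration rather than CLASS-CTEM parameters.

```python
import numpy as np

# Empirical coefficients kappa from Appendix A2, keyed by (soil type, frozen).
KAPPA = {("coarse", False): 4.0, ("coarse", True): 1.2,
         ("fine", False): 1.9, ("fine", True): 0.85,
         ("organic", False): 0.6, ("organic", True): 0.25}

def soil_thermal_conductivity(s_r, theta_p, lambda_solid, soil="fine",
                              frozen=False, lambda_liq=0.57, lambda_ice=2.24):
    """Bulk soil thermal conductivity (W m-1 K-1) following Eqs. (A1)-(A6)."""
    kappa = KAPPA[(soil, frozen)]
    lam_r = kappa * s_r / (1.0 + (kappa - 1.0) * s_r)                 # Eq. (A2)
    if soil == "organic":
        lam_dry = 0.30 * np.exp(-2.0 * theta_p)                       # Eq. (A4)
    else:
        lam_dry = 0.75 * np.exp(-2.76 * theta_p)                      # Eq. (A3)
    lam_pore = lambda_ice if frozen else lambda_liq
    lam_sat = lam_pore * theta_p + lambda_solid * (1.0 - theta_p)     # Eqs. (A5)-(A6)
    return (lam_sat - lam_dry) * lam_r + lam_dry                      # Eq. (A1)

# Example: a half-saturated fine mineral soil, unfrozen vs. frozen.
print(soil_thermal_conductivity(0.5, 0.45, 2.5, soil="fine", frozen=False))
print(soil_thermal_conductivity(0.5, 0.45, 2.5, soil="fine", frozen=True))
```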
"Figure 4Weighted mean absolute error (wMAE, C) between the simulated ground temperatures and those of the GTN-P borehole temperature sites (Table S2) for three depths: 0.05–0.5, 0.5–1.5, and 1.5–3.0 m. The wMAE is produced by calculating the MAE for each depth range and season at each site within a grid cell and taking the average across all grid cells (see Sect. 2.3). The number of observations differs between depths and is listed along with the number of CLASS-CTEM grid cells with GTN-P borehole sites in square brackets. The color of the text annotations is purely for clarity. The wMAE of CRUNCEP surface air temperatures compared to air temperatures measured at the GTN-P sites is 2.17, 2.46, 2.53, and 2.40 C for DJF, MAM, JJA, and SON, respectively, over 25 337 monthly observations.",
"Figure 5Gaussian kernel density estimates for the difference between the simulated ground temperatures and those of the GTN-P borehole temperature sites (Table S2) for three depths: 0.05–0.5, 0.5–1.5, and 1.5–3.0 m, for each season and for selected experiments. The bandwidth was chosen using Scott's rule of thumb (Scott1992).\n\n## 3.7 Changing the relationship between snow depth and snow cover\n\nTwo experiments investigated different relationships between snow depth and the grid cell snow cover in CLASS-CTEM (Exps. Snow cover: Yang97 and Snow cover: Brown03). These modifications increased global PA (∼1.2 M km2), with a slightly higher PA estimated for Exp. Snow cover: Yang97 (Table 2). For the GTN-P ALT sites, both snow cover experiments increased average MAE from 0.472 m for Exp. SoilGrids + Moss to 0.579 and 0.622 m for Exps. Snow cover: Yang97 and Snow cover: Brown03, respectively. Comparing the simulated SWE from both Exps. Snow cover: Yang97 and Snow cover: Brown03 to Blended-5 (see Sect. 2.3) shows a slight improvement in model performance compared to both Exps. Base model and SoilGrids + Moss throughout the snow year, which tends to be more pronounced during fall and winter (Fig. S6), although there is little difference between the two snow cover experiments.\n\nChanges in snow cover can lead to large changes in albedo due to the significant brightness difference between snow and vegetation/bare ground. To investigate the impact of these experiments on albedo, we evaluated seasonal averages of simulated albedo against MODIS observations over latitudes northward of 45N for the period 2000 to 2013. We find the spring (AMJ) albedo from the various simulations is about the same (Fig. S7).\n\n## 3.8 Considering wind speed in the calculation of fresh snow density\n\nIn CLASS-CTEM, the density of freshly fallen snow depends on the ambient air temperature (Eq. A19). Exp. Fresh snow density tested a parameterization from the CROCUS model that also includes wind speed in this calculation (Eq. A20), which yielded an increase in PA to 18.9 M km2. Compared to the GTN-P ALT sites, the Exp. Fresh snow density results are similar to those of the snow cover experiments with no improvement in average MAE (0.581 m; Fig. 3) and no discernible impact upon modeled DJF SWE compared to Blended-5 or upon spring (AMJ) albedo compared to MODIS (Figs. S6 and S7).\n\nThe typical wind speed in the CRUNCEP meteorological forcing dataset when snow is falling is in the range of 1–5 m s−1 (Fig. 6). With Eq. (A20), the density of freshly fallen snow tends to be lower at very low wind speeds, then higher as wind speed increases for the same air temperature. The generally higher density of fresh snow with the CROCUS parameterization results in a snowpack with higher thermal conductivity and thus cooler soils as evident from the expansion in PA for the Exp. Fresh snow density (Fig. 2). Both the original CLASS-CTEM parameterization and that of the CROCUS model produce fresh snow densities within the range of observations. evaluated 1650 snowfall events from 28 continental US sites and found the density of freshly fallen snow to vary from 21.4 to 526.3 kg m−3 with a median value of 70.9 kg m−3 (for snowfall events where the wind speed was ≤9 m s−1).",
"Figure 6(a) Snow density as a function of air temperature for the original CLASS-CTEM formulation (Eq. A19) and for Exp. Fresh snow density, which includes consideration of wind speed (Eq. A20; purple lines indicate different wind speeds). (b) Histogram of wind speeds for the period 2011 to 2015 from the CRUNCEP meteorological dataset.\n\n## 3.9 Adopting an efficient spectral method for snow albedo decay\n\nChanging the snow albedo decay parameterization from an exponential form (Verseghy2017) to an efficient spectral parameterization (Exp. Snow albedo decay) slightly improves average MAE at the GTN-P ALT sites (Fig. 3), while decreasing PA (15.6 M km2), reflecting a near-uniform deepening of ALT with the exception of small areas on the western edge of the Siberian PD (Figs. 2, S5; Table 2). The efficient spectral method for albedo decay generally produces lower albedos than CLASS-CTEM's original exponential parameterization. The impact upon spring albedo and SWE leads to a notable decline in model performance compared to observation-based datasets (Figs. S6 and S7). The CRUJRA55-forced experiments, on the other hand, give slightly better spring albedo for all experiments forced with that meteorological dataset. This could be due to the sub-monthly variability difference of CRUJRA55 compared to CRUNCEP, as found one of the largest impacts of changing climate variability in model forcing to be snow depth. The lower albedo in Exp. Snow albedo decay leads to a smaller snowpack which melts earlier, resulting in reduced spring runoff, a longer growing season, and a higher LAI. The warmer land surface results in larger ALTs. At the GTN-P borehole sites, Exp. Snow albedo decay's warmer ground layers give a noticeable increase in wMAE values across all seasons and most depth bands.\n\n## 3.10 Allowing unfrozen water in frozen soils\n\nThe inclusion of unfrozen water in frozen soils (Exp. Super-cooled water) increased PA to 20.1 M km2 with a minor improvement at the GTN-P ALT sites (Fig. 3). The GTN-P borehole sites showed little change in the wMAE values (Fig. 4). The larger PA for this experiment could be reflecting the thermal conductivity differences between completely frozen soil and frozen soil with some residual liquid water. The differences in bulk thermal conductivity would slow heat transfer into the deeper ground layers for the Super-cooled water simulation during periods where the soil layer temperature is below 0 C. As a result, spring warming would be slower to reach deeper layers.\n\ninvestigated streamflow for 21 watersheds in eastern Canada using CLASS and the WATROUTE routing scheme. They report their modifications (super-cooled soil water, fractional permeable area, and modified hydrology due to ice; discussed in Sect. A7) improved streamflows particularly during the spring melt. The changes were attributed to reduced hydraulic conductivity of frozen soils causing more snowmelt runoff and less infiltration. We did a rudimentary comparison of our simulated seasonal runoff for seven major river basins that drain permafrost regions (Fig. 7; Sect. 2.3). As the CLASS-CTEM simulations did not include excess ground ice (e.g., slab ice such as ice wedges or lenses commonly found in regions affected by thermokarst processes), groundwater, or interflow, all of which could increase runoff (baseflow) in the summer and fall seasons, we limit our discussion to the spring and winter seasons. Exp. Super-cooled water has lower spring runoff than both Exps. 
Base model and SoilGrids + Moss but higher winter runoff, making it more in line with observed river discharge (Fig. 7).\n\nGiven that the Super-cooled water and Tian16 thermal cond. simulations had the lowest average MAE at the GTN-P ALT sites (Fig. 3), a simulation was run with both of these parameterizations included (Exp. Super-cooled+Tian16). This experiment further reduced the average ALT MAE but considerably worsened simulated ground temperatures at the GTN-P borehole sites (Fig. 4). This incongruity between model performance at the ALT and borehole sites could be reflecting biases due to the spatial distribution of the sites (see Fig. 1), the differing number of observations of ALT vs. borehole temperatures, or to biases in the observations themselves, which are discussed in Sect. 3.13.\n\n## 3.11 Modifying hydrology due to ice\n\nExp. Modif. hydrology modified soil matric potential and saturated hydraulic conductivity to account for the impact of frozen water following the work of and . These changes yielded a simulated PA of 19.5 M km2 (Table 2), with generally slightly deeper ALTs in much of the high-latitude PD compared to Exp. SoilGrids + Moss (Figs. 2 and S5) and poorer average MAE for the GTN-P ALT sites. Performance at the GTN-P borehole sites is similar to Exp. SoilGrids + Moss (Fig. 4). Since the modifications to soil matric potential and saturated hydraulic conductivity (Eqs. A35 and A36) generally decrease water mobility in soils with ice present, the Exp. Modif. hydrology soils are generally wetter, allowing higher annual latent heat flux and supporting higher LAI. Exp. Modif. hydrology has similar runoff to Exp. Base model with higher spring runoff than observed river discharge, while the winter runoff is reduced compared to Exp. SoilGrids + Moss and is also smaller than the observed river discharge (Fig. 7). To investigate synergistic effects between the two modifications (Exps. Modif. hydrology and Super-cooled water), a simulation was run with both modifications applied (similar to 's Exp. 3). This simulation gave slightly higher spring runoff but similar winter runoff compared to Exp. Modif. hydrology (not shown). Thus, it appears, with respect to runoff, the modifications to hydrology have a stronger influence than super-cooled soil water, in line with the conclusion of that the primary effect is to reduce hydraulic conductivity which decreases infiltration and increases snowmelt runoff.",
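Because the simulated runoff is not routed, the comparison with observed discharge (Fig. 7, below) reduces to aggregating grid-cell runoff over each basin into seasonal means. A minimal sketch of that aggregation; the basin mask, cell areas, and runoff fields are hypothetical inputs rather than the actual diagnostics.

```python
import numpy as np

def basin_seasonal_runoff(runoff_kg_m2_s, cell_area_m2, basin_mask,
                          month_of_step, season_months=(3, 4, 5)):
    """Mean seasonal runoff (m3 s-1) summed over the grid cells of one basin.

    runoff_kg_m2_s : (time, cell) total runoff flux from the model
    cell_area_m2   : (cell,) grid cell areas
    basin_mask     : (cell,) True where the cell drains to the basin
    month_of_step  : (time,) calendar month (1-12) of each time step
    season_months  : months defining the season, e.g. (3, 4, 5) for spring
    """
    in_season = np.isin(month_of_step, season_months)
    flux = runoff_kg_m2_s[np.ix_(in_season, basin_mask)]    # kg m-2 s-1
    volumetric = flux * cell_area_m2[basin_mask] / 1000.0   # m3 s-1 per cell
    return float(volumetric.sum(axis=1).mean())
```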
"Figure 7Mean 1965–1984 seasonal discharge of major rivers draining permafrost regions (Ob, Volga, Lena, Yenisei, Yukon, Mackenzie, and Amur; UNESCO Press1993) compared to total runoff from selected model runs for the same period. Each dot represents one river basin. The CLASS-CTEM simulated runoff is not routed thus only seasonal values are compared.\n\n## 3.12 Influence of subgrid heterogeneity\n\nThe CLASS-CTEM model grid used in our study is the same as that used in the CanESM. From the experiments conducted, the lowest average MAE at the GTN-P ALT sites we are able to achieve is about 0.4 m. With the size of our model grid cells, what is the best MAE we can reasonably expect given the subgrid heterogeneity at the observation sites? Many of the GTN-P ALT measurements are performed on an 11×11 sampling grid covering between 1 km2 and 1 ha, giving 121 data points at one point in time per site; the mean standard deviation of measured ALT over these sampling grids varies from 0.02 to 0.49 m (Table S1). However, 1 km2 is still small compared to model grids, ranging in size from hundreds to thousands of km2. One measure of the influence of subgrid heterogeneity can be obtained by considering the MAE per site in the grid cells where we have more than one GTN-P ALT site (Fig. 8). For these grid cells, the spread in MAE at each site ranges from 0.01 m (grid cell with two sites) to 0.59 m (12 sites). While it is not reasonable to directly compare the subgrid range of MAE to the model average MAE shown in Fig. 3, Fig. 8 demonstrates that subgrid heterogeneity is a significant source of variability in ALT within model grid cells and that variability will impose constraints on the lower limit of MAE that is attainable by the model.",
"Figure 8MAE for CLASS-CTEM grid cells with multiple GTN-P ALT sites for the SoilGrids + Moss simulation. The number of ALT sites is listed along with the range in MAE in each grid cell in parentheses.\n\nFor the GTN-P borehole sites, the wMAE in temperature bias for the model varies between approximately 1.5 and 3.7 C (Fig. 4), depending on depth and season. As with ALT, what is a reasonable wMAE for ground temperatures given the size of the model grid cells and the discrete nature of a borehole? To better understand the role of subgrid heterogeneity in borehole temperatures, we make use of the Slave Province Surficial Materials and Permafrost Study (SPSMPS; Gruber et al.2018). The SPSMPS collected air and ground temperature measurements for 15 m × 15 m plots with hourly borehole temperatures at 35 boreholes, all located within an approximately 1200 km2 area. The observed screen-level temperatures are generally reasonably close to those of CRUNCEP, but CRUNCEP has slightly cooler summer temperatures (Fig. S8). What is most striking about the borehole temperatures at Lac de Gras is the large spread in ground temperatures at all depths and in most seasons (Fig. 9). The temperature range is smallest in fall and spring when the soils are thawing or freezing and largest in winter with differences varying from 12 to over 20 C depending on the soil depth. This remarkable spread in temperature is due to variations in slope, aspect, soil moisture, soil texture, soil organic matter content, and vegetation type and distribution. The simulated ground temperatures from two experiments are plotted alongside the boreholes (Exps. SoilGrids and SoilGrids + Moss). As the model is driven by CRUNCEP and we have no precipitation information for the SPSMPS sites, it is difficult to determine the cause of any biases. Also, although the SPSMPS sampling area is considerably larger than the GTN-P sites, the same arguments apply concerning the mismatch of scales between the observational area and the model grid, and the variability introduced by subgrid heterogeneity.\n\nAn additional measure of how reasonable the model wMAE is at the borehole sites can be obtained by comparing the CRUNCEP screen-level temperature, which is used to force the model, and the observed screen-level temperature at each GTN-P site. The MAE for screen-level temperature is between 2.17 and 2.53 C across all seasons. Therefore, the model's wMAE range for shallow soil of approximately 3 to 3.7 C varies from approximately 0.8 to 1.2 C above that of the MAE for CRUNCEP's screen-level temperature (for the SoilGrids + Moss simulation). Given the large spread in borehole temperatures in a relatively small area at the SPSMPS sites, and the MAE of the model's forcing air temperature, it appears the model's wMAE can be considered reasonable.",
"Figure 9Borehole temperatures for 0.5, 1, and 2 m depths from SPSMPS (Lac de Gras region, NWT, Canada; ) along with CLASS-CTEM simulated ground temperatures for Exps. SoilGrids and SoilGrids + Moss. The 35 boreholes are each represented by a single line and are all located within an approximately 1200 km2 area. The model output is from the grid cell corresponding to the SPSMPS study area.\n\n## 3.13 Influence of bias due to ALT or borehole sampling locations\n\nTemperatures in individual boreholes and ALT at individual sites often differ from the grid cell they are compared with because of subgrid variability as discussed above. The underlying spatial variation of ground temperature, even at distances smaller than 1 km is well documented . If the locations of GTN-P sites were randomly sampled, subgrid effects would be expected to cancel out and, consequentially, a mean bias (see Fig. 3) close to zero would be indicative of good model performance. In reality, however, the choice of GTN-P measurement locations are likely biased and the nature and consequences of this bias are difficult to assess. For example, ALT sites are likely to be biased toward fine-grained and organic-rich soils and locations with small ALT where probing can be carried out. The choice of ALT and borehole sites in areas of sporadic permafrost is likely to be biased towards cold areas in the landscape. This is because ALT requires permafrost and because permafrost researchers are unlikely to drill, instrument, and operate boreholes in seasonally frozen ground. Finer-scale local studies have noted that observations are strongly biased towards permafrost existence . The melt of excess ice from the top of permafrost presents an additional source of bias that may result in ALT data showing values of seasonal thaw depth that underestimate the amount of ground ice that was melted due to frost-table probing without recording surface subsidence . In summary, it is likely that a slightly positive model bias, i.e., higher temperatures and greater ALT simulated than observed, would correspond to a model that best represents reality. Quantifying that effect, however, is beyond the present study.\n\n4 Conclusions\n\nThe performance of CLASS-CTEM in cold regions has been investigated in the past by numerous researchers who have suggested several modifications to improve the model's performance in these regions. Drawing from these recommendations and other studies, 18 experiments were carried out to investigate the influence of (1) the number of ground layers, (2) soil permeable depth datasets, (3) the addition of a moss layer, (4) changing the soil thermal conductivity formulation, (5) altering the derivation of snow cover based on snow depth, (6) adding the effect of wind speed to the calculation of fresh snow density, (7) changing the model's snow albedo decay calculation to an efficient spectral parameterization, and (8) modifications to frozen soil hydrology including allowing unfrozen water in frozen soils and an alteration to hydraulic conductivity and soil matric potential for the presence of ice. Two soil permeable depth datasets were tested ( and SoilGrids; ) along with two meteorological datasets (CRUNCEP v.8; and CRUJRA55 v.1.0.5; ). The simulated active layer thicknesses (ALTs) were compared to 1570 observations from 97 sites from the Global Terrestrial Network for Permafrost (GTN-P; Table S1, Fig. 
1), the simulated soil temperatures to 105 106 monthly observations at 132 GTN-P borehole temperature sites (Table S2), 35 borehole sites from SPSMPS , surface albedo to a remotely sensed dataset (MODIS MCD43C3), snow water equivalent (SWE) to a blend of five observation-based datasets (Blended5; Mudryk et al.2015), and seasonal runoff to river discharge for major rivers draining the Arctic as well as literature estimates of permafrost area (Table 2).\n\nThe original model version had an overly small simulated permafrost area of 8.6 M km2 which was almost doubled to 16.7 M km2 by increasing the number and depth of ground layers. Of the two soil permeable depth datasets, gave consistently lower average mean absolute errors (MAEs) at the GTN-P ALT sites compared to SoilGrids. However, SoilGrids was chosen for further simulations as this dataset appears to be better validated . For the two meteorological datasets used, the permafrost specific results depended on the model configuration and parameterizations tested. More consistently, spring albedo appeared to be better simulated using CRUJRA55, while winter SWE was slightly better with CRUNCEP. Changes to the model configuration by increasing soil permeable depths using the SoilGrids dataset, and adding a layer of moss reduced the average MAE at the GTN-P ALT sites from over 2.5 m (Exp. 20 ground layers) to 0.472 m (Exp. SoilGrids + Moss). While most alternate parameterizations either degraded model performance at the GTN-P ALT and borehole sites or degraded the performance of another model output such as albedo or SWE, incorporating unfrozen water in frozen soils following is being considered for inclusion in future versions of CLASS-CTEM. A simulation with the parameterization resulted in an average MAE of 0.414 m at the GTN-P ALT sites, relatively small impacts on wMAE at the GTN-P borehole sites, and a possible improvement in seasonal runoff. Further assessment of the improvements in runoff using a river routing scheme are needed before this parameterization will be fully adopted. Based on the tests performed here, the optimal model configuration will include more ground layers to a greater depth, soil permeable depths from the SoilGrids dataset, and moss in locations where it is appropriate. These changes give a simulated permafrost area of between 15.7 to 17.9 M km2 (Table 2) which is reasonably close to the expected 15 M km2 based on published estimates derived from mean annual air temperature (see discussion in Sect. 2.3.4) .\n\nThere are six main limitations of our study. First, thermokarst processes due to melt of excess ground ice (ice wedges or lenses) are not simulated. As maps of ground ice extent improve (e.g., O'Neill et al.2019) and become more suitable for use as a model geophysical field, parameterizations such as could be incorporated. Second, our treatment of mosses and their impact is simplistic. A more comprehensive approach such as the LiBry model would allow for dynamic moss extents and more bryophyte subtypes including lichens. Third, the plant functional types used here are not specific to the Arctic and do not include shrubs. Shrubs, in particular, are presently expanding and have complex impacts upon Arctic regions (e.g., Fig. 3 in Myers-Smith et al.2011). Fourth, orographic influences on permafrost such as slope and aspect were not resolved. Fifth, inland water bodies and their impact upon ground thermal regimes were not considered. 
Finally, the influence of subgrid heterogeneity was ignored as permafrost in the model grids is binary, thus excluding the simulation of discontinuous permafrost. With regard to the influence of subgrid heterogeneity, the standard deviation of ALT on the 1 km2–1 ha measurement grids at the GTN-P ALT sites, the spread in MAE in grid cells with multiple GTN-P ALT sites, and the SPSMPS collection of 35 boreholes over a 1200 km2 study area indicate that it is likely difficult to reduce the wMAE of ALT or borehole temperature much further, given the size of the model grid cells (approximately 2.8). Based on the model physics performance presented here, it appears that with the modifications described above, the land surface scheme in CLASS-CTEM is well suited to provide the physical conditions for simulating carbon fluxes in the permafrost domain.\n\nCode availability\n\nCLASS-CTEM is available as a tarball from https://doi.org/10.5281/zenodo.3369395 (Melton2019). The following code tags correspond to experiments in this paper (see Table 1), with the most strongly impacted subroutines in parentheses: (1) Base model: “archive/baseModelPermafrostPhysics”, (2) deVries thermal cond.: “archive/soilthermalcond” (TPREP), (3) Tian16 thermal cond.: “archive/Tian16SoilThermalCond” (TPREP), (4) Snow cover: Yang97/Brown03: “archive/snowcov_changes” (CLASSA), (5) Fresh snow density: “archive/snowdens” (CLASSI), (6) Snow albedo decay: “archive/snowalbedorefresh” (CLASSA, SNOALBA), (7) Super-cooled water: “archive/supercooledH2O” (CLASSB, TMCALC, TWCALC), and (8) Modif. hydrology: “archive/arman” (GRDRAN, GRDINFL). The model manual is located within the code repository (/documentation/html/index.html).\n\nAppendix A: Description of alternate parameterizations\n\n## A1 Moss parameterization of Wu et al. (2016)\n\nThe simple moss parameterization used here follows with the exception that our moss layer is non-photosynthesizing. The physical characteristics of the moss layer include a pore volume of 0.98 m3 m−3, liquid water retention capacity of 0.2 m3 m−3, the residual liquid water content after freezing or evaporation of 0.01 m3 m−3, the Clapp and Hornberger empirical b parameter set to 2.3, a soil moisture suction at saturation of 0.0103 m, a saturated hydraulic conductivity of $\\mathrm{1.83}×{\\mathrm{10}}^{-\\mathrm{3}}$ m s−1, a volumetric heat capacity of $\\mathrm{2.5}×{\\mathrm{10}}^{-\\mathrm{6}}$ J m−3 K−1, with the thermal conductivity of the moss set to that of organic matter (0.25 W m−1 K−1).\n\n## A2 Soil thermal conductivity\n\nCLASS-CTEM calculates the thermal conductivities of organic and mineral soils following . 
The soil thermal conductivity, λ (W m−1 K−1), is modeled via a relative thermal conductivity, λr, which varies between a value of 1 at saturation and 0 for dry soils:\n\n$\\begin{array}{}\\text{(A1)}& \\mathit{\\lambda }=\\left[{\\mathit{\\lambda }}_{\\mathrm{sat}}-{\\mathit{\\lambda }}_{\\mathrm{dry}}\\right]{\\mathit{\\lambda }}_{\\mathrm{r}}+{\\mathit{\\lambda }}_{\\mathrm{dry}}.\\end{array}$\n\nUsing the following generalized relationship, the relative thermal conductivity is obtained from the degree of saturation (the water content divided by the pore volume), Sr (unitless):\n\n$\\begin{array}{}\\text{(A2)}& {\\mathit{\\lambda }}_{\\mathrm{r}}=\\frac{\\mathit{\\kappa }{S}_{\\mathrm{r}}}{\\left[\\mathrm{1}+\\left(\\mathit{\\kappa }-\\mathrm{1}\\right){S}_{\\mathrm{r}}\\right]}.\\end{array}$\n\nBased on the soil characteristics and state, the empirical coefficient, κ (W m−1 K−1), takes the following values:\n\n1. Unfrozen coarse mineral soils: κ=4.0\n\n2. Frozen coarse mineral soils: κ=1.2\n\n3. Unfrozen fine mineral soils: κ=1.9\n\n4. Frozen fine mineral soils: κ=0.85\n\n5. Unfrozen organic soils: κ=0.6\n\n6. Frozen organic soils: κ=0.25\n\nThe dry thermal conductivity, λdry, is calculated via an empirical relationship using the pore volume, θp (m3 m−3), with different coefficients for organic and mineral soils:\n\n$\\begin{array}{}\\text{(A3)}& & {\\mathit{\\lambda }}_{\\mathrm{dry},\\mathrm{mineral}}=\\mathrm{0.75}{e}^{\\left(-\\mathrm{2.76}{\\mathit{\\theta }}_{\\mathrm{p}}\\right)}\\text{(A4)}& & {\\mathit{\\lambda }}_{\\mathrm{dry},\\mathrm{organic}}=\\mathrm{0.30}{e}^{\\left(-\\mathrm{2.0}{\\mathit{\\theta }}_{\\mathrm{p}}\\right)}.\\end{array}$\n\nWhile the saturated thermal conductivity, λsat, is calculated by as a geometric mean of the conductivities of the soil components, other studies (e.g., Zhang et al.2008) have found the linear averaging used by to be generally more accurate and this approach has been adopted by CLASS-CTEM:\n\n$\\begin{array}{}\\text{(A5)}& & {\\mathit{\\lambda }}_{\\mathrm{sat},\\mathrm{unfrozen}}={\\mathit{\\lambda }}_{\\mathrm{liq}}{\\mathit{\\theta }}_{\\mathrm{p}}+{\\mathit{\\lambda }}_{\\mathrm{s}}\\left(\\mathrm{1}-{\\mathit{\\theta }}_{\\mathrm{p}}\\right)\\text{(A6)}& & {\\mathit{\\lambda }}_{\\mathrm{sat},\\mathrm{frozen}}={\\mathit{\\lambda }}_{\\mathrm{ice}}{\\mathit{\\theta }}_{\\mathrm{p}}+{\\mathit{\\lambda }}_{\\mathrm{s}}\\left(\\mathrm{1}-{\\mathit{\\theta }}_{\\mathrm{p}}\\right),\\end{array}$\n\nwhere λice is the thermal conductivity of ice, λliq is that of liquid water, and λs is that of the soil solid particles.\n\nExp. deVries thermal cond. 
replaces the CLASS-CTEM default soil thermal conductivity parameterization with that of :\n\n$\\begin{array}{}\\text{(A7)}& \\mathit{\\lambda }=\\frac{{\\mathit{\\lambda }}_{\\mathrm{liq}}{\\mathit{\\theta }}_{\\mathrm{liq}}+{f}_{\\mathrm{a}}{\\mathit{\\lambda }}_{\\mathrm{a}}{\\mathit{\\theta }}_{\\mathrm{a}}+{f}_{\\mathrm{s}}{\\mathit{\\lambda }}_{\\mathrm{s}}{\\mathit{\\theta }}_{\\mathrm{s}}}{{\\mathit{\\theta }}_{\\mathrm{liq}}+{f}_{\\mathrm{a}}{\\mathit{\\theta }}_{\\mathrm{a}}+{f}_{\\mathrm{s}}{\\mathit{\\theta }}_{\\mathrm{s}}},\\end{array}$\n\nwhere the a subscript denotes the air component, θ is the volumetric fraction, and f is the “weighting” factor (unitless), which is given by\n\n$\\begin{array}{}\\text{(A8)}& & {f}_{\\mathrm{s}}=\\frac{\\mathrm{1}}{\\mathrm{3}}\\left[\\frac{\\mathrm{2}}{\\mathrm{1}+\\mathrm{0.125}\\left(\\frac{{\\mathit{\\lambda }}_{\\mathrm{s}}}{{\\mathit{\\lambda }}_{\\mathrm{liq}}}-\\mathrm{1}\\right)}+\\frac{\\mathrm{1}}{\\mathrm{1}+\\mathrm{0.75}\\left(\\frac{{\\mathit{\\lambda }}_{\\mathrm{s}}}{{\\mathit{\\lambda }}_{\\mathrm{liq}}}-\\mathrm{1}\\right)}\\right]\\text{(A9)}& & {f}_{\\mathrm{a}}=\\frac{\\mathrm{1}}{\\mathrm{3}}\\left[\\frac{\\mathrm{2}}{\\mathrm{1}+{g}_{\\mathrm{a}}\\left(\\frac{{\\mathit{\\lambda }}_{\\mathrm{a}}}{{\\mathit{\\lambda }}_{\\mathrm{liq}}}-\\mathrm{1}\\right)}+\\frac{\\mathrm{1}}{\\mathrm{1}+\\left(\\mathrm{1}-\\mathrm{2}{g}_{\\mathrm{a}}\\right)\\left(\\frac{{\\mathit{\\lambda }}_{\\mathrm{a}}}{{\\mathit{\\lambda }}_{\\mathrm{liq}}}-\\mathrm{1}\\right)}\\right],\\end{array}$\n\nwhere ga represents a unitless empirical air pore-shape factor:\n\n$\\begin{array}{}\\text{(A10)}& {g}_{\\mathrm{a}}=\\left\\{\\begin{array}{ll}\\mathrm{0.333}-\\left(\\mathrm{0.333}-\\mathrm{0.035}\\right)\\frac{{\\mathit{\\theta }}_{\\mathrm{a}}}{{\\mathit{\\theta }}_{\\mathrm{p}}},& {\\mathit{\\theta }}_{\\mathrm{liq}}>\\mathrm{0.09}\\\\ \\mathrm{0.013}+\\mathrm{0.944}{\\mathit{\\theta }}_{\\mathrm{liq}},& {\\mathit{\\theta }}_{\\mathrm{liq}}\\le \\mathrm{0.09}.\\end{array}\\right\\\\end{array}$\n\nAn alternate approach is tested in Exp. Tian16 thermal cond. The thermal conductivity parameterization is based upon the formulation but simplifies and extends it to both frozen and unfrozen soils. In their formulation, adapt Eq. 
(A7) to include ice and organic matter as\n\n$\\begin{array}{}\\text{(A11)}& \\mathit{\\lambda }=\\frac{{\\mathit{\\lambda }}_{\\mathrm{liq}}{\\mathit{\\theta }}_{\\mathrm{liq}}+{f}_{\\mathrm{ice}}{\\mathit{\\lambda }}_{\\mathrm{ice}}{\\mathit{\\theta }}_{\\mathrm{ice}}+{f}_{\\mathrm{a}}{\\mathit{\\lambda }}_{\\mathrm{a}}{\\mathit{\\theta }}_{\\mathrm{a}}+{f}_{\\mathrm{s}}{\\mathit{\\lambda }}_{\\mathrm{s}}{\\mathit{\\theta }}_{\\mathrm{s}}+{f}_{\\mathrm{organic}}{\\mathit{\\lambda }}_{\\mathrm{organic}}{\\mathit{\\theta }}_{\\mathrm{organic}}}{{\\mathit{\\theta }}_{\\mathrm{liq}}+{f}_{\\mathrm{ice}}{\\mathit{\\theta }}_{\\mathrm{ice}}+{f}_{\\mathrm{a}}{\\mathit{\\theta }}_{\\mathrm{a}}+{f}_{\\mathrm{s}}{\\mathit{\\theta }}_{\\mathrm{s}}+{f}_{\\mathrm{organic}}{\\mathit{\\theta }}_{\\mathrm{organic}}},\\end{array}$\n\nfor wet soil, whereas the thermal conductivity of completely dry soils is calculated by\n\n$\\begin{array}{}\\text{(A12)}& \\mathit{\\lambda }=\\mathrm{1.25}\\frac{{f}_{\\mathrm{a}}{\\mathit{\\lambda }}_{\\mathrm{a}}{\\mathit{\\theta }}_{\\mathrm{a}}+{f}_{\\mathrm{s}}{\\mathit{\\lambda }}_{\\mathrm{s}}{\\mathit{\\theta }}_{\\mathrm{s}}+{f}_{\\mathrm{organic}}{\\mathit{\\lambda }}_{\\mathrm{organic}}{\\mathit{\\theta }}_{\\mathrm{organic}}}{{f}_{\\mathrm{a}}{\\mathit{\\theta }}_{\\mathrm{a}}+{f}_{\\mathrm{s}}{\\mathit{\\theta }}_{\\mathrm{s}}+{f}_{\\mathrm{organic}}{\\mathit{\\theta }}_{\\mathrm{organic}}}.\\end{array}$\n\nThe formulation also modifies the pore-shape factor (Eq. A10) to be\n\n$\\begin{array}{}\\text{(A13)}& {g}_{\\mathrm{a}}=\\mathrm{0.333}-\\left(\\mathrm{1}-\\frac{{\\mathit{\\theta }}_{\\mathrm{a}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)\\end{array}$\n\nfor air and\n\n$\\begin{array}{}\\text{(A14)}& {g}_{\\mathrm{ice}}=\\mathrm{0.333}-\\left(\\mathrm{1}-\\frac{{\\mathit{\\theta }}_{\\mathrm{ice}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)\\end{array}$\n\nfor ice. introduce a shape factor for ellipsoidal soil particles, gm, as\n\n$\\begin{array}{}\\text{(A15)}& {g}_{\\mathrm{m}}={g}_{\\mathrm{sand}}{\\mathit{\\theta }}_{\\mathrm{sand}}+{g}_{\\mathrm{silt}}{\\mathit{\\theta }}_{\\mathrm{silt}}+{g}_{\\mathrm{clay}}{\\mathit{\\theta }}_{\\mathrm{clay}},\\end{array}$\n\nwhere gsand is 0.182, gsilt is 0.00775, and gclay is 0.0534. The shape factor for organic soils, gorganic, is set to 0.5. The same “weighting” factor is used for ice, air, and organic and mineral soil components and left unchanged from Eq. (A9).\n\n## A3 Snow cover fraction\n\nCLASS-CTEM relates snow depth (dsnow; m) to snow cover (fsnow; fraction) via a linear function (Fig. S2) (Verseghy2017):\n\n$\\begin{array}{}\\text{(A16)}& {f}_{\\mathrm{snow}}=\\mathrm{min}\\left[\\mathrm{1},\\left(\\frac{{d}_{\\mathrm{snow}}}{{d}_{\\mathrm{0}}}\\right)\\right],\\end{array}$\n\nwhere d0 is a limiting snow depth assigned a value of 0.1 m. Exp. Snow cover:Yang97 changes the CLASS-CTEM linear function to a hyperbolic tangent function :\n\n$\\begin{array}{}\\text{(A17)}& {f}_{\\mathrm{snow}}=\\mathrm{tanh}\\left(\\frac{{d}_{\\mathrm{snow}}}{{d}_{\\mathrm{0}}}\\right).\\end{array}$\n\nAnother alternative parameterization for snow cover from snow depth was proposed by , which was not evaluated in . This relation was developed based on analysis of a global gridded snow water equivalent product designed to evaluate general circulation models (GCMs). Exp. 
Snow cover:Brown03 tests the impact of that parameterization by changing the snow cover function to the proposed exponential form :\n\n$\\begin{array}{}\\text{(A18)}& {f}_{\\mathrm{snow}}=\\mathrm{1}-\\mathrm{0.01}\\left(\\mathrm{15}-\\mathrm{100}{d}_{\\mathrm{snow}}{\\right)}^{\\mathrm{1.7}}.\\end{array}$\n\n## A4 Fresh snow density\n\nThe density of freshly fallen snow is related to its ice-crystal structure and the volume of the ice crystal that is occupied by air. Generally, snow density is the result of (1) processes occurring in the cloud that affect the size and shape of the growing ice crystals, (2) processes that modify the crystal as it falls, and (3) compaction on the ground due to prevailing weather conditions and metamorphism in the snowpack .\n\nFresh snow density (ϱ; kg m−3) in CLASS-CTEM is calculated based on air temperature (Ta; K). For air temperatures below freezing, Tf, a relation from is used, while for temperatures at or above freezing CLASS-CTEM uses an equation from :\n\n$\\begin{array}{}\\text{(A19)}& \\mathit{\\varrho }=\\left\\{\\begin{array}{ll}\\mathrm{67.92}+\\mathrm{51.25}{e}^{\\left[\\frac{\\left({T}_{\\mathrm{a}}-{T}_{\\mathrm{f}}\\right)}{\\mathrm{2.59}}\\right]}& {T}_{\\mathrm{a}}<{T}_{\\mathrm{f}}\\\\ \\mathrm{119.17}+\\mathrm{20}\\left({T}_{\\mathrm{a}}-{T}_{\\mathrm{f}}\\right)& {T}_{\\mathrm{a}}\\ge {T}_{\\mathrm{f}}\\end{array}\\right\\.\\end{array}$\n\nIn Exp. Fresh snow density, the effect of wind speed (u, m s−1) is included following the approach used in the CROCUS model as detailed in with a minimum density of 50 kg m−3 following :\n\n$\\begin{array}{}\\text{(A20)}& \\mathit{\\varrho }=\\mathrm{max}\\left[\\mathrm{50},\\mathrm{109}+\\mathrm{6}\\left({T}_{\\mathrm{a}}-{T}_{\\mathrm{f}}\\right)+\\mathrm{26}{u}^{\\mathrm{1}/\\mathrm{2}}\\right].\\end{array}$\n\nWind speed may be considered important in determining fresh snow density as wind speeds greater than approximately 9 m s−1 can move ice crystals on the surface leading to crystal fractionation during saltation and surface compaction increasing the snow density (e.g., Gray and Male1981, p. 345–350).\n\n## A5 Snow albedo decay\n\nSnow albedo (αs; unitless) decreases as snow ages due to snow grain growth and deposition of soot and dirt . In CLASS-CTEM, this process is treated via empirical exponential decay functions (Verseghy2017). Freshly fallen snow is given a total albedo (αfs,total) value of 0.84, a visible (αfs,visible) value of 0.95 and a near-infrared (NIR; αfs,nir) value of 0.73 . It is assumed that the same decay function, calculated each time step (Δt; 1800 s) applies to all three albedo ranges:\n\n$\\begin{array}{}\\text{(A21)}& \\begin{array}{rl}{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total}}& \\left(t+\\mathrm{\\Delta }t\\right)={\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total},\\mathrm{old}}+\\left[{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total}}\\left(t\\right)-{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total},\\mathrm{old}}\\right]\\\\ & {e}^{\\left(-\\frac{\\mathrm{0.01}\\mathrm{\\Delta }t}{\\mathrm{3600}}\\right)}.\\end{array}\\end{array}$\n\nIf the snowpack temperature is greater than −0.01C or the melt rate at the top of the snowpack is not negligible, ${\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total},\\mathrm{old}}$ is set to a value characteristic of melting snow (0.50); otherwise, it is set a value representing old, dry snow (0.70). 
The total albedo at a given time step is converted to those of the visible and NIR ranges for dry snow via\n\n$\\begin{array}{}\\text{(A22)}& & {\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{visible}}=\\mathrm{0.7857}{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total}}+\\mathrm{0.29}\\text{(A23)}& & {\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{nir}}=\\mathrm{1.2142}{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total}}-\\mathrm{0.29}\\end{array}$\n\nand for melting snow,\n\n$\\begin{array}{}\\text{(A24)}& & {\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{visible}}=\\mathrm{0.9706}{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total}}+\\mathrm{0.1347}\\text{(A25)}& & {\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{nir}}=\\mathrm{1.0294}{\\mathit{\\alpha }}_{\\mathrm{s},\\mathrm{total}}-\\mathrm{0.1347}.\\end{array}$\n\nExp. Snow albedo decay replaces the CLASS-CTEM exponential decay function with a spectral method based on and adapted for efficiency by . This efficient spectral method first calculates the diffuse radiation albedo based on the albedo of fresh snow and the transformed snow age factor (Fage):\n\n$\\begin{array}{}\\text{(A26)}& & {\\mathit{\\alpha }}_{\\mathrm{dif},\\mathrm{visible}}=\\left(\\mathrm{1}-\\mathrm{0.2}{F}_{\\mathrm{age}}\\right){\\mathit{\\alpha }}_{\\mathrm{fs},\\mathrm{visible}}\\text{(A27)}& & {\\mathit{\\alpha }}_{\\mathrm{dif},\\mathrm{nir}}=\\left(\\mathrm{1}-\\mathrm{0.5}{F}_{\\mathrm{age}}\\right){\\mathit{\\alpha }}_{\\mathrm{fs},\\mathrm{nir}}\\text{(A28)}& & {F}_{\\mathrm{age}}=\\frac{{\\mathit{\\tau }}_{\\mathrm{s}}}{\\mathrm{1}+{\\mathit{\\tau }}_{\\mathrm{s}}},\\end{array}$\n\nwhere τs is a non-dimensional snow age at each time step found via\n\n$\\begin{array}{}\\text{(A29)}& {\\mathit{\\tau }}_{\\mathrm{s}}\\left(t+\\mathrm{\\Delta }t\\right)=\\left[{\\mathit{\\tau }}_{\\mathrm{s}}\\left(t\\right)+\\frac{\\left({r}_{\\mathrm{1}}+{r}_{\\mathrm{2}}+{r}_{\\mathrm{3}}\\right)\\mathrm{\\Delta }t}{{\\mathit{\\tau }}_{\\mathrm{0}}}\\right]\\left(\\mathrm{1}-\\frac{{S}_{\\mathrm{f}}\\mathrm{\\Delta }t}{\\mathrm{\\Delta }P}\\right),\\end{array}$\n\nwhere r1 represents the effects of grain growth due to vapor diffusion as\n\n$\\begin{array}{}\\text{(A30)}& {r}_{\\mathrm{1}}={e}^{\\left[\\mathrm{5000}\\left(\\frac{\\mathrm{1}}{{T}_{\\mathrm{f}}}-\\frac{\\mathrm{1}}{{T}_{\\mathrm{g},\\mathrm{1}}}\\right)\\right]},\\end{array}$\n\nand ${r}_{\\mathrm{2}}={r}_{\\mathrm{1}}^{\\mathrm{10}}$, representing the additional effects at or near the freezing of meltwater on grain growth. r3 represents the effects of soot and dirt and is set to 0.3. Tg,1 is the temperature of the top soil layer (K), τ0 is 106 s, Sf is the snowfall rate for that time step (kg m−2 s−1), and ΔP is the snowfall amount threshold (10 kg m−2). 
If, within a time step, the fresh snowfall amount exceeds ΔP, the snow age is set to that of new snow (${\\mathit{\\tau }}_{\\mathrm{s}}={F}_{\\mathrm{age}}=\\mathrm{0}$).\n\nThe direct radiation albedos are found by\n\n$\\begin{array}{}\\text{(A31)}& & {\\mathit{\\alpha }}_{\\mathrm{dir},\\mathrm{visible}}={\\mathit{\\alpha }}_{\\mathrm{dif},\\mathrm{visible}}+\\mathrm{0.4}f\\left(\\mathit{\\mu }\\right)\\left(\\mathrm{1}-{\\mathit{\\alpha }}_{\\mathrm{dif},\\mathrm{visible}}\\right)\\text{(A32)}& & {\\mathit{\\alpha }}_{\\mathrm{dir},\\mathrm{nir}}={\\mathit{\\alpha }}_{\\mathrm{dif},\\mathrm{nir}}+\\mathrm{0.4}f\\left(\\mathit{\\mu }\\right)\\left(\\mathrm{1}-{\\mathit{\\alpha }}_{\\mathrm{dif},\\mathrm{nir}}\\right),\\end{array}$\n\nwhere f(μ) is a factor that scales between 0 and 1 to give increased snow albedo due to solar zenith angles exceeding 60, calculated as\n\n$\\begin{array}{}\\text{(A33)}& f\\left(\\mathit{\\mu }\\right)=\\text{max}\\left[\\mathrm{0},\\frac{\\mathrm{1}-\\mathrm{2}\\text{cos}Z}{\\mathrm{1}+{b}_{\\mathit{\\mu }}}\\right],\\end{array}$\n\nwhere Z is the solar zenith angle and bμ is an adjustable parameter set to 2 following the BATS model .\n\n## A6 Super-cooled soil water\n\nIn Exp. Super-cooled water, unfrozen soil water in frozen soils is introduced into CLASS-CTEM following . Unfrozen water can exist in frozen soils through the capillary and absorptive forces exerted by soil particles on water in close proximity. The upper limit on the residual amount of water that can remain liquid under given soil temperature and texture conditions is parameterized by as\n\n$\\begin{array}{}\\text{(A34)}& {\\mathit{\\theta }}_{\\mathrm{liq},\\mathrm{max}}={\\mathit{\\theta }}_{\\mathrm{p}}{\\left(\\frac{-{L}_{\\mathrm{f}}\\left({T}_{\\mathrm{soil},\\mathrm{i}}-{T}_{\\mathrm{f}}\\right)}{g{\\mathit{\\psi }}_{\\mathrm{sat}}{T}_{\\mathrm{soil},i}}\\right)}^{-\\mathrm{1}/b},\\end{array}$\n\nwhere g is gravitational acceleration (m s−2), Lf is the latent heat of fusion (J kg−1), and Tsoil,i is the soil layer temperature (K). According to , unfrozen water content in moss is negligible, so θliq,max is set to zero for moss layers.\n\nTable A1Ground layer depths and thicknesses for the 20 ground layers configuration.",
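"As a rough illustration of how the fresh-snow density relations above behave, the following Python sketch re-implements Eqs. (A19) and (A20) using the constants quoted in the text. It is an illustrative sketch only, not CLASS-CTEM source code, and the function and variable names are our own:

```python
import math

T_FREEZE = 273.15  # freezing point T_f (K)

def fresh_snow_density_class(t_air):
    # Eq. (A19): fresh snow density (kg m-3) from air temperature (K)
    if t_air < T_FREEZE:
        return 67.92 + 51.25 * math.exp((t_air - T_FREEZE) / 2.59)
    return 119.17 + 20.0 * (t_air - T_FREEZE)

def fresh_snow_density_wind(t_air, wind_speed):
    # Eq. (A20): CROCUS-style density with a wind-speed term and a 50 kg m-3 floor
    return max(50.0, 109.0 + 6.0 * (t_air - T_FREEZE) + 26.0 * math.sqrt(wind_speed))

# Example: air at -5 degC, calm conditions versus a 10 m/s wind
print(fresh_snow_density_class(268.15))       # ~75 kg m-3
print(fresh_snow_density_wind(268.15, 0.0))   # 79 kg m-3
print(fresh_snow_density_wind(268.15, 10.0))  # ~161 kg m-3
```
",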
null,
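"Similarly, the snow-age bookkeeping and albedo relations of Eqs. (A26)–(A33) can be sketched in Python. The parameter values below are those quoted in the text; the function structure and names are illustrative assumptions rather than the CLASS-CTEM implementation:

```python
import math

TAU_0 = 1.0e6   # s, snow-age scaling constant (tau_0)
DELTA_P = 10.0  # kg m-2, snowfall amount threshold
B_MU = 2.0      # BATS zenith-angle parameter (b_mu)

def update_snow_age(tau_s, t_soil_top, t_freeze, snowfall_rate, dt, r3=0.3):
    # Eqs. (A29)-(A30): r1 = vapor-diffusion grain growth, r2 = r1**10 (melt effect),
    # r3 = soot and dirt. Heavy fresh snowfall within the time step resets the age.
    r1 = math.exp(5000.0 * (1.0 / t_freeze - 1.0 / t_soil_top))
    r2 = r1 ** 10
    tau_s = (tau_s + (r1 + r2 + r3) * dt / TAU_0) * (1.0 - snowfall_rate * dt / DELTA_P)
    if snowfall_rate * dt > DELTA_P:
        tau_s = 0.0
    return max(tau_s, 0.0)

def snow_albedo(tau_s, cos_zenith, alb_fresh_vis=0.95, alb_fresh_nir=0.73):
    # Eqs. (A26)-(A28) for the diffuse albedo and (A31)-(A33) for the direct-beam correction
    f_age = tau_s / (1.0 + tau_s)
    dif_vis = (1.0 - 0.2 * f_age) * alb_fresh_vis
    dif_nir = (1.0 - 0.5 * f_age) * alb_fresh_nir
    f_mu = max(0.0, (1.0 - 2.0 * cos_zenith) / (1.0 + B_MU))
    dir_vis = dif_vis + 0.4 * f_mu * (1.0 - dif_vis)
    dir_nir = dif_nir + 0.4 * f_mu * (1.0 - dif_nir)
    return dif_vis, dif_nir, dir_vis, dir_nir

# Example: aged snow (tau_s = 1.2) with the sun at a 70 degree zenith angle
print(snow_albedo(1.2, math.cos(math.radians(70.0))))
```
",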
"## A7 Modified hydrology\n\nIn , several changes were implemented in CLASS to address how the model deals with frozen soil water. First, super-cooled soil water was added following as described above. Secondly, fractional impermeable area was introduced, also following , but this has little impact upon our model simulations (discussed in Appendix B). Their final modification was to account for the impact of frozen water on the soil matric potential (ψ; m) after and by adding a new term [(1+Ckθice)2] to the existing CLASS functional relationship:\n\n$\\begin{array}{}\\text{(A35)}& \\mathit{\\psi }={\\mathit{\\psi }}_{\\mathrm{sat}}{\\left(\\frac{{\\mathit{\\theta }}_{\\mathrm{liq}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}^{-b}\\left(\\mathrm{1}+{C}_{k}{\\mathit{\\theta }}_{\\mathrm{ice}}{\\right)}^{\\mathrm{2}},\\end{array}$\n\nwhere Ck is a constant, set to 8, that accounts for the effect of an increase in specific surface area of soil minerals and liquid water as water freezes and ice forms (Kulik1978). ψsat is the soil matric potential at saturation (m) and b is the Clapp and Hornberger empirical b parameter (unitless) . The calculation of hydraulic conductivity k (m s−1) is also modified by multiplication with a similar term [$\\left(\\mathrm{1}+{C}_{k}{\\mathit{\\theta }}_{\\mathrm{ice}}{\\right)}^{-\\mathrm{4}}$]:\n\n$\\begin{array}{}\\text{(A36)}& k={k}_{\\mathrm{sat}}{\\left(\\frac{{\\mathit{\\theta }}_{\\mathrm{liq}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}^{\\mathrm{2}b+\\mathrm{3}}\\left(\\mathrm{1}+{C}_{k}{\\mathit{\\theta }}_{\\mathrm{ice}}{\\right)}^{-\\mathrm{4}},\\end{array}$\n\nwhere ksat is saturated hydraulic conductivity. The effect of these changes is to generally increase soil matric potential and decrease hydraulic conductivity when ice is present in the soil. These modifications are tested in Exp. Modif. hydrology.\n\nAppendix B: Fractional permeable areas in frozen soils\n\nCLASS-CTEM accounts for the impact of frozen soil water through an empirical correction factor (fice; unitless), according to :\n\n$\\begin{array}{}\\text{(B1)}& {f}_{\\mathrm{ice}}={\\left[\\mathrm{1}-\\mathrm{min}\\left(\\mathrm{1},\\frac{{\\mathit{\\theta }}_{\\mathrm{ice}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)\\right]}^{\\mathrm{2}}.\\end{array}$\n\nThis factor is used to correct the calculated soil hydraulic conductivity, k (m s−1), which is found via the equation:\n\n$\\begin{array}{}\\text{(B2)}& k={f}_{\\mathrm{ice}}{k}_{\\mathrm{sat}}{\\left(\\frac{{\\mathit{\\theta }}_{\\mathrm{liq}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}^{\\mathrm{2}b+\\mathrm{3}},\\end{array}$\n\nwhere ksat is the hydraulic conductivity at saturation and b is an empirical parameter. Soil moisture is related to soil matric potential (ψ; m) in CLASS-CTEM following :\n\n$\\begin{array}{}\\text{(B3)}& \\mathit{\\psi }={\\mathit{\\psi }}_{\\mathrm{sat}}{\\left(\\frac{{\\mathit{\\theta }}_{\\mathrm{liq}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}^{-b},\\end{array}$\n\nwhere ψsat is the saturated soil matric potential (m).\n\nparameterize fractional permeable areas in frozen soils. 
Following their formulation, within a grid cell, the permeable (perm) and impermeable (imp) patches affect the flux of water (q; m s−1) as\n\n$\\begin{array}{}\\text{(B4)}& q={F}_{\\mathrm{imp}}{q}_{\\mathrm{imp}}+\\left(\\mathrm{1}-{F}_{\\mathrm{imp}}\\right){q}_{\\mathrm{perm}},\\end{array}$\n\nwhere the impermeable grid cell fraction, Fimp, can be estimated as\n\n$\\begin{array}{}\\text{(B5)}& {F}_{\\mathrm{imp}}={e}^{-\\mathit{\\alpha }\\left(\\mathrm{1}-\\frac{{\\mathit{\\theta }}_{\\mathrm{ice}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}-{e}^{-\\mathit{\\alpha }},\\end{array}$\n\nand α is set to 3 following . Assuming qimp is set to zero, Niu and Yang parameterize the influence of the permeable areas on hydraulic conductivity as can be parameterized as\n\n$\\begin{array}{}\\text{(B6)}& k=\\left(\\mathrm{1}-{F}_{\\mathrm{imp}}\\right){k}_{\\mathrm{sat}}{\\left(\\frac{{\\mathit{\\theta }}_{\\mathrm{liq}}+{\\mathit{\\theta }}_{\\mathrm{ice}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}^{\\mathrm{2}b+\\mathrm{3}},\\end{array}$\n\nwhile the soil matric potential is calculated as\n\n$\\begin{array}{}\\text{(B7)}& \\mathit{\\psi }={\\mathit{\\psi }}_{\\mathrm{sat}}{\\left(\\frac{{\\mathit{\\theta }}_{\\mathrm{liq}}+{\\mathit{\\theta }}_{\\mathrm{ice}}}{{\\mathit{\\theta }}_{\\mathrm{p}}}\\right)}^{-b}.\\end{array}$\n\nThis formulation results in a soil matric potential that is insensitive to ice content within the soil (Fig. S9), which seems unreasonable (see, for example, Wen et al.2012). This fact is indeed noted by , who state that the soil matric potential as defined by is not appropriate for the case of frozen soil. The inclusion of θice in the numerator could be a typographical error. If it is removed, the hydraulic conductivity and soil matric potential behave quite similarly to the original CLASS relations, which make use of the factor fice in place of 1−Fimp (Fig. S10). Testing shows the model is relatively insensitive to the small changes visible in the plots (not shown).\n\nSupplement\n\nAuthor contributions\n\nJRM initiated the study, performed the model simulations and analysis, and wrote the paper. DLV led the development of the CLASS model, conducted initial research into the recommendations of MacDonald (2015), and was liaison to the Sushama group for the work of Arman Ganji. RSA performed the statistical analysis and plotting for SWE and MODIS albedo. SG provided the Lac de Gras data and participated in discussions around model evaluation. All authors contributed to the final version of the paper.\n\nCompeting interests\n\nThe authors declare that there is no conflict of interest.\n\nAcknowledgements\n\nWe thank the Global Terrestrial Network for Permafrost for generously sharing their data and for making them easily accessible online. We thank Vivek Arora for processing the CRUJRA55 meteorological data, Ed Chan for processing the MODIS data, and Christian Seiler and Paul Bartlett for providing comments on a pre-submission version of our manuscript. Sampling at the Khanovey site is supported by the RuNoCORE CPRU-2017/10015 https://www.siu.no/eng/content/view/full/81242 (last access: 8 April 2018); the SAMCoT WP6 https://www.ntnu.edu/web/samcot/home (last access: 26 September 2019) and Lomonosov Moscow State University, geology faculty, permafrost department.\n\nReview statement\n\nThis paper was edited by David Lawrence and reviewed by three anonymous referees.\n\nReferences\n\nAlexeev, V. A., Nicolsky, D. J., Romanovsky, V. E., and Lawrence, D. 
M.: An evaluation of deep soil configurations in the CLM3 for improved representation of permafrost, Geophys. Res. Lett., 34, L09502, https://doi.org/10.1029/2007GL029536, 2007. a, b\n\nArora, V., Seglenieks, F., Kouwen, N., and Soulis, E.: Scaling aspects of river flow routing, Hydrol. Process., 15, 461–477, https://doi.org/10.1002/hyp.161, 2001. a\n\nBartlett, P. A., MacKay, M. D., and Verseghy, D. L.: Modified snow algorithms in the Canadian land surface scheme: Model runs and sensitivity analysis at three boreal forest stands, Atmos.-Ocean, 44, 207–222, https://doi.org/10.3137/ao.440301, 2006. a\n\nBeer, C., Porada, P., Ekici, A., and Brakebusch, M.: Effects of short-term variability of meteorological variables on soil temperature in permafrost regions, The Cryosphere, 12, 741–757, https://doi.org/10.5194/tc-12-741-2018, 2018. a\n\nBellisario, L. M., Boudreau, L. D., Verseghy, D. L., Rouse, W. R., and Blanken, P. D.: Comparing the performance of the Canadian land surface scheme (CLASS) for two subarctic terrain types, Atmos.-Ocean, 38, 181–204, https://doi.org/10.1080/07055900.2000.9649645, 2000. a\n\nBiskaborn, B. K., Smith, S. L., Noetzli, J., Matthes, H., Vieira, G., Streletskiy, D. A., Schoeneich, P., Romanovsky, V. E., Lewkowicz, A. G., Abramov, A., Allard, M., Boike, J., Cable, W. L., Christiansen, H. H., Delaloye, R., Diekmann, B., Drozdov, D., Etzelmüller, B., Grosse, G., Guglielmin, M., Ingeman-Nielsen, T., Isaksen, K., Ishikawa, M., Johansson, M., Johannsson, H., Joo, A., Kaverin, D., Kholodov, A., Konstantinov, P., Kröger, T., Lambiel, C., Lanckman, J.-P., Luo, D., Malkova, G., Meiklejohn, I., Moskalenko, N., Oliva, M., Phillips, M., Ramos, M., Sannel, A. B. K., Sergeev, D., Seybold, C., Skryabin, P., Vasiliev, A., Wu, Q., Yoshikawa, K., Zheleznyak, M., and Lantuit, H.: Permafrost is warming at a global scale, Nat. Commun., 10, 264, https://doi.org/10.1038/s41467-018-08240-4, 2019. a\n\nBoeckli, L., Brenning, A., Gruber, S., and Noetzli, J.: Permafrost distribution in the European Alps: calculation and evaluation of an index map and summary statistics, The Cryosphere, 6, 807–820, https://doi.org/10.5194/tc-6-807-2012, 2012. a\n\nBrown, J., Ferrians Jr., O. J., Heginbottom, J. A., and Melnikov, E. S.: Circum-Arctic map of permafrost and ground-ice conditions, US Geological Survey Reston, 1997. a, b, c, d, e, f, g\n\nBrown, R., Bartlett, P., MacKay, M., and Verseghy, D.: Evaluation of snow cover in CLASS for SnowMIP, Atmos.-Ocean, 44, 223–238, https://doi.org/10.3137/ao.440302, 2006. a\n\nBrown, R. D., Brasnett, B., and Robinson, D.: Gridded North American monthly snow depth and snow water equivalent for GCM evaluation, Atmos.-Ocean, 41, 1–14, https://doi.org/10.3137/ao.410101, 2003. a, b, c, d\n\nChadburn, S., Burke, E., Essery, R., Boike, J., Langer, M., Heikenfeld, M., Cox, P., and Friedlingstein, P.: An improved representation of physical permafrost dynamics in the JULES land-surface model, Geosci. Model Dev., 8, 1493–1508, https://doi.org/10.5194/gmd-8-1493-2015, 2015a. a\n\nChadburn, S. E., Burke, E. J., Essery, R. L. H., Boike, J., Langer, M., Heikenfeld, M., Cox, P. M., and Friedlingstein, P.: Impact of model developments on present and future simulations of permafrost in a global land-surface model, The Cryosphere, 9, 1505–1521, https://doi.org/10.5194/tc-9-1505-2015, 2015b. a\n\nChadburn, S. E., Burke, E. J., Cox, P. M., Friedlingstein, P., Hugelius, G., and Westermann, S.: An observation-based constraint on permafrost loss as a function of global warming, Nat. Clim. 
Change, 7, 340, https://doi.org/10.1038/nclimate3262, 2017. a\n\nClapp, R. B. and Hornberger, G. M.: Empirical equations for some soil hydraulic properties, Water Resour. Res., 14, 601–604, https://doi.org/10.1029/WR014i004p00601, 1978. a, b, c, d\n\nCosby, B. J., Hornberger, G. M., Clapp, R. B., and Ginn, T. R.: A Statistical Exploration of the Relationships of Soil Moisture Characteristics to the Physical Properties of Soils, Water Resour. Res., 20, 682–690, https://doi.org/10.1029/WR020i006p00682, 1984. a\n\nCôté, J. and Konrad, J.-M.: A generalized thermal conductivity model for soils and construction materials, Can. Geotech. J., 42, 443–458, https://doi.org/10.1139/t04-106, 2005. a, b, c, d\n\nDall'Amico, M., Endrizzi, S., Gruber, S., and Rigon, R.: A robust and energy-conserving model of freezing variably-saturated soil, The Cryosphere, 5, 469–484, https://doi.org/10.5194/tc-5-469-2011, 2011. a\n\nDankers, R., Burke, E. J., and Price, J.: Simulation of permafrost and seasonal thaw depth in the JULES land surface scheme, The Cryosphere, 5, 773–790, https://doi.org/10.5194/tc-5-773-2011, 2011. a\n\nde Vries, D.: Thermal properties of soils, Phys. Plant Environ., 12, 33–46, 1963. a, b, c, d, e, f, g, h\n\nDickinson, R. E.: Land Surface Processes and Climate-Surface Albedos and Energy Balance, in: Advances in Geophysics, edited by: Saltzman, B., Vol. 25, pp. 305–353, Elsevier, https://doi.org/10.1016/S0065-2687(08)60176-4, 1983. a, b, c\n\nDickinson, R. E., Henderson-Sellers, A., and Kennedy, P.: Biosphere/Atmosphere Transfer Scheme (BATS) Version 1e as coupled to the NCAR Community Climate Model, Tech. rep., Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, Colorado, 1993. a\n\nEkici, A., Beer, C., Hagemann, S., Boike, J., Langer, M., and Hauck, C.: Simulating high-latitude permafrost regions by the JSBACH terrestrial ecosystem model, Geosci. Model Dev., 7, 631–647, https://doi.org/10.5194/gmd-7-631-2014, 2014. a\n\nEssery, R., Martin, E., Douville, H., Fernández, A., and Brun, E.: A comparison of four snow models using observations from an alpine site, Clim. Dynam., 15, 583–593, https://doi.org/10.1007/s003820050302, 1999. a, b, c\n\nFarouki, O. T.: The thermal properties of soils in cold regions, Cold Reg. Sci. Technol., 5, 67–75, https://doi.org/10.1016/0165-232X(81)90041-0, 1981. a, b\n\nGanji, A., Sushama, L., Verseghy, D., and Harvey, R.: On improving cold region hydrological processes in the Canadian Land Surface Scheme, Theor. Appl. Climatol., 127, 45–59, https://doi.org/10.1007/s00704-015-1618-4, 2015. a, b, c, d, e, f, g, h\n\nGiorgi, F. and Avissar, R.: Representation of heterogeneity effects in Earth system modeling: Experience from land surface modeling, Rev. Geophys., 35, 413–437, https://doi.org/10.1029/97RG01754, 1997. a\n\nGornall, J. L., Jónsdóttir, I. S., Woodin, S. J., and Van der Wal, R.: Arctic mosses govern below-ground environment and ecosystem processes, Oecologia, 153, 931–941, https://doi.org/10.1007/s00442-007-0785-0, 2007. a\n\nGray, D. M. and Male, D. H.: Handbook of Snow: Principles, Processes, Management and Use, Pergamon Press, 1981. 
a\n\nGruber, S.: Derivation and analysis of a high-resolution estimate of global permafrost zonation, The Cryosphere, 6, 221–233, https://doi.org/10.5194/tc-6-221-2012, 2012 a, b, c, d, e, f\n\nGruber, S., Brown, N., Stewart-Jones, E., Karunaratne, K., Riddick, J., Peart, C., Subedi, R., and Kokelj, S.: Air and ground temperature, air humidity and site characterization data from the Canadian Shield tundra near Lac de Gras, Northwest Territories, Canada, v. 1.0 (2015–2017), https://doi.org/10.5885/45561XD-2C7AB3DCF3D24AD8, 2018. a, b, c\n\nGTN-P: Global Terrestrial Network for Permafrost Database: Active Layer Thickness Data (CALM-Circumpolar Active Layer Monitoring), GTN-P 2016, Akureyri, Iceland, ISSN 2410-2385, 2016. a, b\n\nGubler, S., Fiddes, J., Keller, M., and Gruber, S.: Scale-dependent measurement and analysis of ground surface temperature variability in alpine terrain, The Cryosphere, 5, 431–443, https://doi.org/10.5194/tc-5-431-2011, 2011. a\n\nHarris, I., Jones, P. D., Osborn, T. J., and Lister, D. H.: Updated high-resolution grids of monthly climatic observations–the CRU TS3. 10 Dataset, Int. J. Climatol., 34, 623–642, 2014. a, b, c\n\nHedstrom, N. R. and Pomeroy, J. W.: Measurements and modelling of snow interception in the boreal forest, Hydrol. Process., 12, 1611–1625, 1998. a\n\nKobayashi, S., Ota, Y., Harada, Y., Ebita, A., Moriya, M., Onoda, H., Onogi, K., Kamahori, H., Kobayashi, C., Endo, H., Miyaoka, K., and Takahashi, K.: The JRA-55 Reanalysis: General Specifications and Basic Characteristics, J. Meteorol. Soc. JPN, 93, 5–48, https://doi.org/10.2151/jmsj.2015-001, 2015. a, b\n\nKoren, V., Schaake, J., Mitchell, K., Duan, Q.-Y., Chen, F., and Baker, J. M.: A parameterization of snowpack and frozen ground intended for NCEP weather and climate models, J. Geophys. Res., 104, 19569–19585, https://doi.org/10.1029/1999JD900232, 1999. a, b\n\nKoven, C. D., Riley, W. J., and Stern, A.: Analysis of Permafrost Thermal Dynamics and Response to Climate Change in the CMIP5 Earth System Models, J. Climate, 26, 1877–1900, https://doi.org/10.1175/JCLI-D-12-00228.1, 2013. a\n\nKulik, V. Y.: Water infiltration into soil, Gidrometeoizdat, Moscow, 1978 (in Russian). a\n\nLafleur, P. M., Skarupa, M. R., and Verseghy, D. L.: Validation of the Canadian land surface scheme (class) for a subarctic open woodland, Atmos.-Ocean, 38, 205–225, https://doi.org/10.1080/07055900.2000.9649646, 2000. a\n\nLawrence, D. M., Slater, A. G., Romanovsky, V. E., and Nicolsky, D. J.: Sensitivity of a model projection of near-surface permafrost degradation to soil column depth and representation of soil organic matter, J. Geophys. Res., 113, F02011, https://doi.org/10.1029/2007JF000883, 2008. a, b, c\n\nLawrence, P. J. and Chase, T. N.: Representing a new MODIS consistent land surface in the Community Land Model (CLM 3.0), J. Geophys. Res., 112, G01023, https://doi.org/10.1029/2006JG000168, 2007. a\n\nLee, H., Swenson, S. C., Slater, A. G., and Lawrence, D. M.: Effects of excess ground ice on projections of permafrost in a warming climate, Environ. Res. Lett., 9, 124006, https://doi.org/10.1088/1748-9326/9/12/124006, 2014. a, b\n\nLetts, M. G., Roulet, N. T., Comer, N. T., Skarupa, M. R., and Verseghy, D. L.: Parametrization of peatland hydraulic properties for the Canadian land surface scheme, Atmos.-Ocean, 38, 141–160, https://doi.org/10.1080/07055900.2000.9649643, 2000. a, b, c, d\n\nLoranty, M. M., Abbott, B. W., Blok, D., Douglas, T. A., Epstein, H. E., Forbes, B. C., Jones, B. M., Kholodov, A. 
L., Kropp, H., Malhotra, A., Mamet, S. D., Myers-Smith, I. H., Natali, S. M., O'Donnell, J. A., Phoenix, G. K., Rocha, A. V., Sonnentag, O., Tape, K. D., and Walker, D. A.: Reviews and syntheses: Changing ecosystem influences on soil thermal regimes in northern high-latitude permafrost regions, Biogeosciences, 15, 5287–5313, https://doi.org/10.5194/bg-15-5287-2018, 2018. a, b\n\nMacDonald, M. K.: The Hydrometeorological Response to Chinook Winds in the South Saskatchewan River Basin, Ph.D. thesis, University of Edinburgh, 2015. a, b, c, d, e, f, g, h\n\nMellor, M.: Engineering Properties of Snow, J. Glaciol., 19, 15–66, https://doi.org/10.3189/S002214300002921X, 1977. a\n\nMelton, J. R.: CLASS-CTEM code for “Improving permafrost physics in the coupled Canadian Land Surface Scheme (v.3.6.2) and Canadian Terrestrial Ecosystem Model (v.2.1) (CLASS-CTEM)”, https://doi.org/10.5281/zenodo.3369396, 2019. a\n\nMelton, J. R. and Arora, V. K.: Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink, Biogeosciences, 11, 1021–1036, https://doi.org/10.5194/bg-11-1021-2014, 2014. a, b\n\nMelton, J. R. and Arora, V. K.: Competition between plant functional types in the Canadian Terrestrial Ecosystem Model (CTEM) v.2.0, Geosci. Model Dev., 9, 323–361, https://doi.org/10.5194/gmd-9-323-2016, 2016. a, b, c\n\nMelton, J. R., Sospedra-Alfonso, R., and McCusker, K. E.: Tiling soil textures for terrestrial ecosystem modelling via clustering analysis: a case study with CLASS-CTEM (version 2.1), Geosci. Model Dev., 10, 2761–2783, https://doi.org/10.5194/gmd-10-2761-2017, 2017. a\n\nMODIS Adaptive Processing System, NASA: MODIS/Terra+Aqua Albedo 16-Day L3 Global 0.05Deg CMG V005, title of the publication associated with this dataset: MODIS/Terra+Aqua Albedo 16-Day L3 Global 0.05Deg CMG V005, 2016. a\n\nMorse, P. D., Burn, C. R., and Kokelj, S. V.: Influence of snow on near-surface ground temperatures in upland and alluvial environments of the outer Mackenzie Delta, Northwest Territories, Can. J. Earth Sci., 49, 895–913, 2012. a\n\nMudryk, L. R., Derksen, C., Kushner, P. J., and Brown, R.: Characterization of Northern Hemisphere snow water equivalent datasets, 1981–2010, J. Climate, 28, 8037–8051, 2015. a, b\n\nMyers-Smith, I. H., Forbes, B. C., Wilmking, M., Hallinger, M., Lantz, T., Blok, D., Tape, K. D., Macias-Fauria, M., Sass-Klaassen, U., Lévesque, E., Boudreau, S., Ropars, P., Hermanutz, L., Trant, A., Collier, L. S., Weijers, S., Rozema, J., Rayback, S. A., Schmidt, N. M., Schaepman-Strub, G., Wipf, S., Rixen, C., Ménard, C. B., Venn, S., Goetz, S., Andreu-Hayles, L., Elmendorf, S., Ravolainen, V., Welker, J., Grogan, P., Epstein, H. E., and Hik, D. S.: Shrub expansion in tundra ecosystems: dynamics, impacts and research priorities, Environ. Res. Lett., 6, 045509, https://doi.org/10.1088/1748-9326/6/4/045509, 2011. a\n\nNicolsky, D. J., Romanovsky, V. E., Alexeev, V. A., and Lawrence, D. M.: Improved modeling of permafrost dynamics in a GCM land-surface scheme, Geophys. Res. Lett., 34, L08501, https://doi.org/10.1029/2007GL029525, 2007. a\n\nNiu, G.-Y. and Yang, Z.-L.: Effects of Frozen Soil on Snowmelt Runoff and Soil Water Storage at a Continental Scale, J. Hydrometeorol., 7, 937–952, https://doi.org/10.1175/JHM538.1, 2006. a, b, c, d, e, f, g, h, i, j, k\n\nO'Neill, H. B., Wolfe, S. 
A., and Duchesne, C.: New ground ice maps for Canada using a paleogeographic modelling approach, The Cryosphere, 13, 753–773, https://doi.org/10.5194/tc-13-753-2019, 2019. a\n\nPaquin, J.-P. and Sushama, L.: On the Arctic near-surface permafrost and climate sensitivities to soil and snow model formulations in climate models, Clim. Dynam., 44, 203–228, https://doi.org/10.1007/s00382-014-2185-6, 2014. a, b, c, d\n\nPelletier, J. D., Broxton, P. D., Hazenberg, P., Zeng, X., Troch, P. A., Niu, G., Williams, Z. C., Brunke, M. A., and Gochis, D.: Global 1-km Gridded Thickness of Soil, Regolith, and Sedimentary Deposit Layers, ORNL DAAC, https://doi.org/10.3334/ORNLDAAC/1304, 2016. a, b, c, d\n\nPeng, Y., Arora, V. K., Kurz, W. A., Hember, R. A., Hawkins, B. J., Fyfe, J. C., and Werner, A. T.: Climate and atmospheric drivers of historical terrestrial carbon uptake in the province of British Columbia, Canada, Biogeosciences, 11, 635–649, https://doi.org/10.5194/bg-11-635-2014, 2014. a\n\nPomeroy, J. W. and Gray, D. M.: Snowcover Accumulation, Relocation and Management, National Hydrology Research Institute, Saskatchewan, Canada, ISBN 0-660-15816-7, 1995. a\n\nPorada, P., Ekici, A., and Beer, C.: Effects of bryophyte and lichen cover on permafrost soil temperature at large scale, The Cryosphere, 10, 2291–2315, https://doi.org/10.5194/tc-10-2291-2016, 2016. a, b, c, d, e\n\nRiseborough, D., Shiklomanov, N., Etzelmüller, B., Gruber, S., and Marchenko, S.: Recent advances in permafrost modelling, Permafr. Perigl. Process., 19, 137–156, https://doi.org/10.1002/ppp.615, 2008. a\n\nRoebber, P. J., Bruening, S. L., Schultz, D. M., and Cortinas, J. V.: Improving Snowfall Forecasting by Diagnosing Snow Density, Weather Forecast., 18, 264–287, https://doi.org/10.1175/1520-0434(2003)018<0264:ISFBDS>2.0.CO;2, 2003. a, b\n\nRomanovsky, V. E. and Osterkamp, T. E.: Effects of unfrozen water on heat and mass transport processes in the active layer and permafrost, Permafr. Perigl. Process., 11, 219–239, 2000. a\n\nScott, D. W.: Multivariate Density Estimation: Theory, Practice, and Visualization, John Wiley & Sons, 1992. a\n\nShangguan, W., Dai, Y., Duan, Q., Liu, B., and Yuan, H.: A global soil data set for earth system modeling, J. Adv. Model. Earth Syst., 6, 249–263, https://doi.org/10.1002/2013MS000293, 2014. a\n\nShangguan, W., Hengl, T., Mendes de Jesus, J., Yuan, H., and Dai, Y.: Mapping the global depth to bedrock for land surface modeling, J. Adv. Model. Earth Syst., 9, 65–88, https://doi.org/10.1002/2016MS000686, 2017. a, b, c, d, e, f\n\nShiklomanov, N. I., Streletskiy, D. A., Little, J. D., and Nelson, F. E.: Isotropic thaw subsidence in undisturbed permafrost landscapes, Geophys. Res. Lett., 40, 6356–6361, https://doi.org/10.1002/2013GL058295, 2013. a\n\nSmerdon, J. E. and Stieglitz, M.: Simulating heat transport of harmonic temperature signals in the Earth's shallow subsurface: Lower-boundary sensitivities, Geophys. Res. Lett., 33, L14402, https://doi.org/10.1029/2006GL026816, 2006. a, b\n\nSmith, M. W.: Microclimatic Influences on Ground Temperatures and Permafrost Distribution, Mackenzie Delta, Northwest Territories, Can. J. Earth Sci., 12, 1421–1438, https://doi.org/10.1139/e75-129, 1975. a\n\nSoulis, E. D., Snelgrove, K. R., Kouwen, N., Seglenieks, F., and Verseghy, D. 
L.: Towards closing the vertical water balance in Canadian atmospheric models: Coupling of the land surface scheme class with the distributed hydrological model watflood, Atmos.-Ocean, 38, 251–269, https://doi.org/10.1080/07055900.2000.9649648, 2000. a\n\nSturm, M., Holmgren, J., König, M., and Morris, K.: The thermal conductivity of seasonal snow, J. Glaciol., 43, 26–41, https://doi.org/10.1017/S0022143000002781, 1997. a, b, c\n\nTian, Z., Lu, Y., Horton, R., and Ren, T.: A simplified de Vries-based model to estimate thermal conductivity of unfrozen and frozen soil, Eur. J. Soil Sci., 67, 564–572, https://doi.org/10.1111/ejss.12366, 2016. a, b, c, d, e, f, g\n\nTilley, J. S., Chapman, W. L., and Wu, W.: Sensitivity tests of the Canadian Land Surface Scheme (CLASS) for Arctic tundra, Ann. Glaciol., 25, 46–50, https://doi.org/10.1017/s0260305500013781, 1997. a\n\nTuretsky, M. R., Bond-Lamberty, B., Euskirchen, E., Talbot, J., Frolking, S., McGuire, A. D., and Tuittila, E.-S.: The resilience and functional role of moss in boreal and arctic ecosystems, New Phytol., 196, 49–67, https://doi.org/10.1111/j.1469-8137.2012.04254.x, 2012. a\n\nUNESCO Press: Discharge of selected rivers of the world, Vol. 2, part 2, Mean Monthly and Extreme Discharges, 1965–1984, Tech. rep., UNESCO, Paris, 1993. a, b, c\n\nVan Der Wal, R. and Brooker, R. W.: Mosses mediate grazer impacts on grass abundance in arctic ecosystems, Funct. Ecol., 18, 77–86, https://doi.org/10.1111/j.1365-2435.2004.00820.x, 2004. a\n\nVaughan, D. G., Comiso, J. C., Allison, I., Carrasco, J., Kaser, G., Kwok, R., Mote, P., Murray, T., Paul, F., Ren, J., Rignot, E., Solomina, O., Steffen, K., and Zhang, T.: Observations: Cryosphere, in: Climate Change 2013 – The Physical Science Basis, edited by: Stocker, T. F., Qin, D., K. Plattner, G., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, pp. 317–382, Cambridge University Press, Cambridge, https://doi.org/10.1017/CBO9781107415324.012, 2013. a\n\nVerseghy, D.: CLASS – The Canadian land surface scheme (v.3.6.2), Climate Research Division, Science and Technology Branch, Environment Canada, 2017. a, b, c, d, e, f, g, h\n\nViovy, N.: CRU-NCEP Version 8, title of the publication associated with this dataset: CRU-NCEP version 8, 2016. a, b\n\nWatanabe, K. and Mizoguchi, M.: Amount of unfrozen water in frozen porous media saturated with solution, Cold Reg. Sci. Technol., 34, 103–110, https://doi.org/10.1016/S0165-232X(01)00063-5, 2002. a\n\nWen, Z., Ma, W., Feng, W., Deng, Y., Wang, D., Fan, Z., and Zhou, C.: Experimental study on unfrozen water content and soil matric potential of Qinghai-Tibetan silty clay, Environ. Earth Sci., 66, 1467–1476, https://doi.org/10.1007/s12665-011-1386-0, 2012. a\n\nWiscombe, W. J. and Warren, S. G.: A Model for the Spectral Albedo of Snow. I: Pure Snow, J. Atmos. Sci., 37, 2712–2733, https://doi.org/10.1175/1520-0469(1980)037<2712:AMFTSA>2.0.CO;2, 1980. a, b, c\n\nWu, Y., Verseghy, D. L., and Melton, J. R.: Integrating peatlands into the coupled Canadian Land Surface Scheme (CLASS) v3.6 and the Canadian Terrestrial Ecosystem Model (CTEM) v2.0, Geosci. Model Dev., 9, 2639–2663, https://doi.org/10.5194/gmd-9-2639-2016, 2016. a, b, c\n\nYang, Z.-L., Dickinson, R. E., Robock, A., and Vinnikov, K. 
Y.: Validation of the snow submodel of the Biosphere–Atmosphere Transfer Scheme with Russian snow cover and meteorological observational data, J. Climate, 10, 353–373, 1997. a, b, c, d\n\nZhang, T., Barry, R. G., Knowles, K., Heginbottom, J. A., and Brown, J.: Statistics and characteristics of permafrost and ground‐ice distribution in the Northern Hemisphere, Polar Geogr., 23, 132–154, https://doi.org/10.1080/10889379909377670, 1999. a, b\n\nZhang, T., Heginbottom, J. A., Barry, R. G., and Brown, J.: Further statistics on the distribution of permafrost and ground ice in the Northern Hemisphere, Polar Geogr., 24, 126–131, https://doi.org/10.1080/10889370009377692, 2000. a, b, c\n\nZhang, Y., Carey, S. K., and Quinton, W. L.: Evaluation of the algorithms and parameterizations for ground thawing and freezing simulation in permafrost regions, J. Geophys. Res., 113, D17116, https://doi.org/10.1029/2007JD009343, 2008. a\n\nZhao, L. and Gray, D. M.: A parametric expression for estimating infiltration into frozen soils, Hydrol. Process., 11, 1761–1775, https://doi.org/10.1002/(SICI)1099-1085(19971030)11:13<1761::AID-HYP604>3.0.CO;2-O, 1997. a\n\nZobler, L.: A world soil file for global climate modelling, title of the publication associated with this dataset: NASA Technical Memorandum 87802, 1986. a"
]
| [
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-avatar-thumb150.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-t01-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f01-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f02-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-t02-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f03-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f04-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f05-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f06-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f07-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f08-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-f09-thumb.png",
null,
"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019-t03-thumb.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8626282,"math_prob":0.93351537,"size":97561,"snap":"2022-27-2022-33","text_gpt3_token_len":25453,"char_repetition_ratio":0.1534795,"word_repetition_ratio":0.035258114,"special_character_ratio":0.2602372,"punctuation_ratio":0.20585495,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96457916,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T05:52:54Z\",\"WARC-Record-ID\":\"<urn:uuid:52277af6-cc2e-4cdc-91a1-d28642f1bc17>\",\"Content-Length\":\"479705\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a23ebce7-8ff0-4db7-b343-015a91cf6485>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba8ce577-e76f-4fe1-85bc-94d2d49165eb>\",\"WARC-IP-Address\":\"81.3.21.103\",\"WARC-Target-URI\":\"https://gmd.copernicus.org/articles/12/4443/2019/gmd-12-4443-2019.html\",\"WARC-Payload-Digest\":\"sha1:7EKPKKT4LTRYIXFHZGTF67J3QMTNMEKA\",\"WARC-Block-Digest\":\"sha1:AZMGPDVMLRDPLGZBL2RHAARBF46WUGKV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103355949.26_warc_CC-MAIN-20220628050721-20220628080721-00105.warc.gz\"}"} |
https://boazcommunitycorp.org/773-trigonometric-or-polar-shape.html | [
"# Trigonometric or polar shape\n\nConsider the complex number z = a + bi, of module",
null,
"and argument",
null,
".",
null,
"We have to:",
null,
"",
null,
"Replacing in z = a + biwe have:",
null,
"",
null,
"This expression is called trigonometric form or polar of the complex z.\n\nExample 1\n\nWrite in complex form the complex number z = 1 + i:\n\nResolution",
null,
"Module:",
null,
"Argument:",
null,
"Therefore, z can be written in trigonometric form:",
null,
"Example 2\n\nWrite in complex form the complex number z = 8i:\n\nResolution",
null,
"Module:",
null,
"Argument:",
null,
"Therefore, z can be written in trigonometric form:",
null,
"Example 3\n\nWrite in algebraic form the complex number",
null,
":\nResolution\n\nThis transformation is immediate because it is enough to replace",
null,
"and",
null,
"by their values:",
null,
"Next: Multiplication and Division in the Trigonometric Form"
]
| [
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-2.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-3.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-4.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-5.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-6.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-7.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-8.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-9.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-10.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-11.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-12.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-13.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-14.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-15.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-16.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-17.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-18.gif",
null,
"https://boazcommunitycorp.org/img/forma-trigonom-trica-ou-polar-19.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.792002,"math_prob":0.9848624,"size":673,"snap":"2020-24-2020-29","text_gpt3_token_len":157,"char_repetition_ratio":0.15246637,"word_repetition_ratio":0.23684211,"special_character_ratio":0.21842496,"punctuation_ratio":0.14173229,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9874149,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T12:00:32Z\",\"WARC-Record-ID\":\"<urn:uuid:5371d378-0b0c-4fc1-8ec8-324ca931c1c4>\",\"Content-Length\":\"55052\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d1465d6-aa3a-49bf-a34e-e4e48c4d5fd8>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9d8408f-6aa6-4d4a-bbae-3d7489705ff4>\",\"WARC-IP-Address\":\"104.18.56.143\",\"WARC-Target-URI\":\"https://boazcommunitycorp.org/773-trigonometric-or-polar-shape.html\",\"WARC-Payload-Digest\":\"sha1:W2AMRWOFMBOZWCIRF5TUFG27Y7NVIW5D\",\"WARC-Block-Digest\":\"sha1:DZG2QCJ4EODPPDFPARDGH5GCPF7EDNCC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657143365.88_warc_CC-MAIN-20200713100145-20200713130145-00538.warc.gz\"}"} |
https://schemer.in/slogan/docs/book/cdt.html | [
"# Composite Data Types\n\nThe preceding chapter discussed basic data types that serve as building blocks or atoms of large Slogan programs. This chapter is about composite data types, which are molecules created by combining the basic data types in various ways. Some of the data types we discuss here are pairs, lists, arrays, hash tables, sets and records. Pairs, lists and arrays are also known as aggregate types because they are concatenations of other values.1\n\n## 5.1 Pairs\n\nSuppose you are writing a program that deals with GPS coordinates. A GPS coordinate consists of two real numbers – one for latitude and the other for longitude. You can always use two separate variables for these, but your program will lack the ability to express the concept of location as a concrete entity. Without this ability, functions in your program will have to accept and \"interpret\" two distinct numbers as representing a single location.\n\nLet us tackle the problem of giving the GPS coordinate a concrete representation. We need a mechanism to glue together two real numbers into a single value. Slogan's pair data type is ideal for this purpose. A pair can be constructed either by calling the `pair` function or by using the colon (`:`) operator2. Here are the coordinates of two cities of the world, encoded as decimal degrees:\n\n``````\nlet new_york_coords = pair(40.7166638, -74.0)\nlet bangalore_coords = 12.97160:77.59456\n\nnew_york_coords\n// 40.7166638:-74.0\nbangalore_coords\n// 12.9716:77.59456\n``````\n\nThe values in a pair can be accessed in two ways. The first method is to use the `head` and `tail` functions.\n\n``````\n// 12.9716\ntail(bangalore_coords)\n// 77.59456\n``````\n\nIf you want to bind both members in a pair to new variables, it is more convenient to use the data destructuring mechanism built-into the `let` statement. The variable part of a `let` can be a \"pattern\". If the internal structure of the value on the right-hand side of the assignment matches this pattern, values inside the structure will be bound to variables in the pattern.\n\n``````\nlet ny_lat:ny_long = new_york_coords\nny_lat\n// 40.7166638\nny_long\n// -74.0\n``````\n\nIf you want to bind only certain elements in the structure, just replace the variable name with an underscore (`_`) in the pattern:\n\n``````\nlet _:bng_long = bangalore_coords\nbng_long\n// 77.59456\n``````\n\nNow that we have a suitable representation for GPS coordinates, it is a good idea to hide the details of this representation under suitable functions. This will enable the rest of the code to deal with location or coordinates as an abstract idea. Basically we need three functions – a constructor to build coordinates out of two real numbers and two selectors for accessing the individual parts of the location. If we ever decide to use a different internal representation, only these three functions need to change. This is because the rest of the code create and access coordinates through these functions and does not make any assumptions about how they are really represented in memory. 
This way we add a thin layer of data abstraction to our program.\n\n``````\nfunction make_coords(lat, long) lat:long\nfunction latitude(coord) head(coord)\nfunction longitude(coord) tail(coord)\n``````\n\nAs the constructors and selectors are not doing much on their own, the following definitions are also possible, in which they become just aliases for `pair`, `head` and `tail`:\n\n``````\nlet make_coords = pair\nlet latitude = head\nlet longitude = tail\n``````\n\nLet us try using the new abstractions:\n\n``````\nlet new_york_coords = make_coords(40.7166638, -74.0)\nlet bangalore_coords = make_coords(12.97160, 77.59456)\nlatitude(new_york_coords)\n// 40.7166638\nlongitude(bangalore_coords)\n// 77.59456\n``````\n\nAny new functions that we write need not be bothered about the fact that we represent coordinates with pairs. As an example, we will write a function that prints the DMS value of a coordinate object.\n\n``````\nfunction show_dms(coord)\n{ show_dms_component(latitude(coord))\nshow(\" \")\nshow_dms_component(longitude(coord))\nnewline() }\n\nfunction show_dms_component(dd_component)\nshow(floor(dd_component), \"d \",\nfloor(mod(dd_component * 60, 60)), \"m \",\nmod(abs(dd_component) * 3600, 60), \"s \")\n\n// Usage:\nshow_dms(bangalore_coords)\n//> 12.0d 58.0m 17.760000000002037s 77.0d 35.0m 40.41600000002654s\nshow_dms(new_york_coords)\n//> 40.0d 42.0m 59.98967999999877s -74.0d 0.0m 0.0s\n``````\n\n## 5.2 Lists\n\nThe values glued together by a pair need not be primitives. They can be other pairs, for instance. This enables us to build hierarchical data structures like the one shown below:\n\n``````\nlet tree = (1:(2:3:(4:5)))\nhead(tree)\n// 1\nhead(tail(tree))\n// 2\ntail(tail(tree))\n// 3:4:5\n``````\n\nThe basic sequence obtained by chaining together pairs is known as a list. A proper list will be terminated by a value that represents the empty sequence. In Slogan this value is represented by `[]`. There are various ways you could build a proper list. Some of these methods are illustrated here:\n\n``````\n// a chain of pairs terminated by an empty list ([]) is a proper list.\n1:2:3:4:5:[]\n// [1, 2, 3, 4, 5]\n\n// a list literal can also be constructed by enclosing comma-separated values in []:\n[\\a, \\e, \\i, \\o, \\u]\n// [\\a, \\e, \\i, \\o, \\u]\n\n// another way to build a proper list is to call the `list` function:\nlist(1, 2, \"hello\", 3)\n// [1, 2, hello, 3]\n``````\n\nA list of pairs can be used as a table to look up information. The lookup function can treat the head of each pair as the `key` to find the associated data.\n\n``````\nlet price_list = ['orange:80, 'apple:120, 'grapes:72]\n\nassq('apple, price_list)\n// apple:120\nassq('mango, price_list);\n// false\n\nfunction calculate_price(fruit, kg)\n{ let entry = assq(fruit, price_list)\nwhen (entry) tail(entry) * kg }\n\ncalculate_price('apple, 2)\n// 240\ncalculate_price('grapes, 5)\n// 360\ncalculate_price('mango, 5)\n// false\n``````\n\nIf the lookup keys are of a complex type like a string, a list or a large integer, `assq` won't work. We need a function that inspects the structure of the key for equality. The function `assoc` is defined for this purpose. 
`Assoc` uses the `is_equal` predicate, which is mapped to the `==` operator, to do the equality check.\n\n``````\nlet person = [\"name\": \"nemo\", \"age\": 1]\nassoc(\"name\", person)\n// name:nemo\nassoc(\"age\", person)\n// age:1\nassq(\"age\", person)\n// false\n``````\n\nThe next program demonstrates some useful functions that can be applied on lists:\n\n``````\nlet xs = [10, 3, 45, 8, 9]\nlength(xs)\n// 5\nlength(xs) == count(xs)\n// true\nat(xs, 2)\n// 45\nxs[2]\n// 45\nxs[1:3]\n// [3, 45]\nxs[:3]\n// [10, 3, 45]\nreverse(xs)\n// [9, 8, 45, 3, 10]\nsort(xs)\n// [3, 8, 9, 10, 45]\n\n// comparisons\n[1, 2, 3] == [1, 2, 3]\n// true\n[1, 2, 3] <> [4, 5, 6]\n// true\n[1, 2, 3] < [4, 5, 6]\n// true\n[1, 2, 3] > [4, 5, 6]\n// false\n[1, 2, 3] >= [1, 2, 3]\n// true\n\n// membership checks\nmemq(3, xs)\n// [3, 45, 8, 9]\nmemq(10, xs)\n// [10, 3, 45, 8, 9]\nmemq(1, xs)\n// false\n\nlet ys = [\"a\", \"list\", \"of\", \"strings\", [\"and\", \"lists\"]]\n// memq won't work because it uses `is_eq`\nmemq(\"of\", ys)\n// false\n\nmember(\"of\", ys)\n// [of, strings, [and, lists]]\nmember([\"and\", \"lists\"], ys)\n// [[and, lists]]\nmember([\"and\", \"list\"], ys)\n// false\n\nfunction is_vowel(c)\nmemq(c, [\\a, \\e, \\i, \\o, \\u])\n\nis_vowel(\\a)\n// [\\a, \\e, \\i, \\o, \\u]\nis_vowel(\\o)\n// [\\o, \\u]\nis_vowel(\\k)\n// false\n``````\n\n### 5.2.1 List Comprehensions\n\nA list comprehension is a notational convenience for constructing lists from other lists. It has the following general syntax:\n\n``````\n[out_expr | var_expr <- input_list where filter_expr, ...]\n``````\n\n`Out_expr` constructs each element in the output list. `Var_expr` assigns values to variables used in `out_expr`. Each value is \"extracted\" from an input list. The `where` filter expression is optional and is used to filter values extracted from the input list.\n\nSome examples of using list comprehensions are shown below:\n\n``````\n[x * x | x <- [1, 2, 3, 4, 5]]\n// [1, 4, 9, 16, 25]\n\n[i : j | i <- range(1, 5), j <- range(i, 5) where is_even(i)]\n// [2:2, 2:3, 2:4, 2:5, 4:4, 4:5]\n\n{ let elems = range(1, n);\n[[x, y, z] | x <- elems,\ny <- elems,\nz <- elems where x * x + y * y == z * z] }\n\n// [[3, 4, 5], [4, 3, 5], [5, 12, 13], [6, 8, 10],\n// [8, 6, 10], [9, 12, 15], [12, 5, 13], [12, 9, 15]]\n\nfunction concat(xss) [x | xs <- xss, x <- xs]\nconcat([[1, 2, 3], [4, 5, 6]])\n// [1, 2, 3, 4, 5, 6]\n``````\n\n## 5.3 Arrays\n\nArrays are fixed-length sequences that provide constant-time, position-based lookup for elements. If fast lookups are required, you should always prefer an array over a list because a list can only provide sequential access to its members. Just like for lists, there are multiple ways to create and initialize arrays:\n\n``````\n#[1, 2, 3]\n// #[1, 2, 3]\narray(\"hello\", \"world\")\n// #[hello, world]\nlet xs = make_array(5)\nxs\n// #[false, false, false, false, false]\narray_set(xs, 0, 120)\narray_set(xs, 2, \"hi\")\nxs\n// #[120, false, hi, false, false]\nxs[0]\n// 120\nlet ys = #[1, 2, 3, 4, 5]\nys[2:4]\n// #[3, 4]\n``````\n\n### 5.3.1 Type specific arrays\n\nSlogan provides arrays for storing and accessing specific numeric types. For example, the byte-array is optimized for bytes. There are also arrays for 16bit/32bit/64bit signed/unsigned integers and 32bit/64bit floating-point numbers. The type of an array literal is specified by an identifier after the `#` sign. 
For example, `#u8` means an array of unsigned bytes and `#s32` means an array of signed 32bit integers.\n\n``````\n#u8[1, 34, 250][1]\n// 34\n\n// precision of elements may vary based on architecture:\n#f32[1.0, 34.114, 250.12][1]\n// 34.11399841308594\n``````\n\nAnother useful type-specific array is the bit_array. Bit arrays are designed to efficiently store and retrieve bit-encoded information.\n\n``````\nlet flags = #b[1, 0, 1, 1]\nflags[0]\n// true\nflags[1]\n// false\nbit_array_clear(flags, 2)\nflags\n// #b[1, 0, 0, 1]\n``````\n\nExercise 5.1. Read about the 16 bit color encoding scheme, where the red and blue components are encoded using 5 bits and the green component is encoded in 6 bits. Implement a function, `make_color`, that takes the red, green and blue components as arguments and returns the encoded color value as a bit-array. Also write selectors for decoding the color object into individual red, green and blue values.\n\n#### Bloom Filter\n\nA bloom filter is a data structure that can quickly test if an element is a member of a set. A bloom filter is basically a large bit-array. An element is added to the bloom filter by first converting it into a bunch of integers called hashes and then using those as indices to be turned on in the bit-array. Membership check also happens similarly - if all the bits at the hashes of the element are on, it is a member of the set. Bloom filters are space efficient because elements are reduced to a few bits and stored. They are ideal when fast lookups against a huge set are required and a false positive result is not catastrophic. An example application is in the domain of crawling and indexing web pages. A crawler has to retrieve and index millions or even billions of pages. When it encounters a new URL, it has to quickly figure out if that URL was already crawled or not. A bloom filter is an ideal data structure here because it is space efficient and fast, and occasionally re-crawling a page (on a false positive) is not a big deal.\n\nLet us go straight into the implementation of the bloom filter. Note that we make use of the hash functions built into Slogan. A production quality bloom filter will require better hashing techniques.\n\n``````\nfunction make_bloom_filter(size)\nmake_bit_array(size)\n\n/* Return two hashes for the string `entry`.\nThe first is generated using the built-in `string_hash`\nfunction. 
The second hash is generated by converting\n`entry` into a list of integers and then hashing that list.\n*/\nfunction hash_entry(entry, size)\n{ let h1 = string_hash(entry)\n// we haven't talked about `map` yet, but we will soon!\nlet h2 = equal_hash(map(char_to_integer, string_to_list(entry)))\nremainder(h1, size):remainder(h2, size) }\n\n/* Add an entry to the bloom filter.\nThe bits at the positions identified by the hashes\nare turned on.\n*/\nfunction bloom_filter_set(b, entry)\n{ let h1:h2 = hash_entry(entry, bit_array_length(b))\nbit_array_set(b, h1)\nbit_array_set(b, h2) }\n\n/* Return true if `entry` is a member of the bloom filter.\nBoth bits identified by the hashes must be on.\n*/\nfunction bloom_filter_test(b, entry)\n{ let h1:h2 = hash_entry(entry, bit_array_length(b))\nb[h1] && b[h2] }\n``````\n\nHere is our tiny bloom filter in action:\n\n``````\nlet b = make_bloom_filter(1000)\nbloom_filter_set(b, \"hello\")\nbloom_filter_set(b, \"helLO\")\nbloom_filter_set(b, \"hello world\")\n\nbloom_filter_test(b, \"hello\")\n// true\nbloom_filter_test(b, \"helLO\")\n// true\nbloom_filter_test(b, \"hello world\")\n// true\nbloom_filter_test(b, \"hello, world\")\n// false\nbloom_filter_test(b, \"HelLO\")\n// false\n``````\n\n## 5.4 Hash Tables\n\nThe hash table is one of the most ingenious and versatile of all data structures. It is an unordered collection of key/value pairs in which all the keys are distinct, and the value associated with a given key can be retrieved, updated, or removed using a constant number of key comparisons on the average, no matter how large the hash table.\n\nThe simplest way to create a hash table is to write down it as pairs enclosed in `#{}`. The head of a pair is treated as key and the tail becomes the associated value.\n\n``````\nlet ages = #{\"alice\":10, \"bob\":8, \"eve\":12}\nages[\"alice\"]\n// 10\nages[\"eve\"] = ages[\"eve\"] + 2\nages[\"eve\"]\n// 14\nages[\"olivia\"]\n// false\n\n// return a default value for a missing key\nhashtable_at(ages, \"olivia\", 7)\n// 7\nhashtable_keys(ages)\n// #[alice, eve, bob]\nhashtable_values(ages)\n// #[10, 14, 8]\n``````\n\n## 5.5 Sets\n\nA set stores an unordered sequence of objects without duplicates. It is an implementation of the mathematical concept of finite sets. Unlike most other collection types, rather than retrieving a specific element from a set, one typically tests a value for membership in a set. A set literal is written by enclosing the objects in `#()`.\n\n``````\nlet s1 = #(1, 2, 3, 4)\nlet s2 = #(3, 4, 5, 6)\nis_set_member(s1, 2)\n// true\nis_set_member(s1, 5)\n// false\nset_difference(s1, s2)\n// #(1, 2)\nset_difference(s2, s1)\n// #(5, 6)\nset_intersection(s1, s2)\n// #(3, 4)\nset_union(s1, s2)\n// #(1, 2, 3, 4, 5, 6)\nis_subset(#(1, 2), #(1, 2, 3, 4))\n// true\nis_superset(#(1, 2), #(1, 2, 3, 4))\n// false\nis_superset(#(1, 2, 3, 4), #(1, 2))\n// true\n``````\n\n## 5.6 Records\n\nRecords are a means for defining new, distinct types. The `record` statement is used to introduce a new custom type. 
Its general syntax is shown below:\n\n``````\nrecord <name> (<member01> where <pre-condition>, <member02> ...)\n``````\n\nFor a new record type, Slogan automatically generates a constructor and selector/modifier functions for accessing and updating its members.\n\nThe following program shows how a simple record can be defined and used.\n\n``````\nrecord employee(name, salary, dept)\nlet e1 = employee(name = \"alice\", salary = 3400, dept = \"ENG\")\nlet e2 = employee(name = \"bob\", salary = 4500, dept = \"FIN\")\n\ne1\n// #<employee #4 name: \"alice\" salary: 3400 dept: \"ENG\">\ne2\n// #<employee #5 name: \"bob\" salary: 4500 dept: \"FIN\">\n\nemployee_name(e1)\n// alice\nemployee_dept(e2)\n// FIN\nemployee_set_salary(e1, 3600)\ne1\n// #<employee #4 name: \"alice\" salary: 3600 dept: \"ENG\">\n``````\n\nOne problem with the auto-generated constructor is that it won't do any data integrity checks. For instance, you are allowed to create an employee with an invalid salary:\n\n``````\nemployee(name = \"nemo\", salary = \"#@@@#@@\\$\", dept = \"ENG\")\n#<employee #6 name: \"nemo\" salary: \"#@@@#@@\\$\" dept: \"ENG\">\n``````\n\nThe optional `where` clause allows us to specify data validation rules for record values. Let us redefine the `employee` record with some condition checks.\n\n``````\nrecord employee(name where is_string(name),\nsalary where is_integer(salary)\n&& salary > 1500\n&& salary < 10000,\ndept where is_string(dept))\n\nemployee(name = \"nemo\", salary = \"#@@@#@@\\$\", dept = \"ENG\")\n//> error: precondition_failed, #@@@#@@\\$\nemployee(name = \"nemo\", salary = 230, dept = \"ENG\")\n//> error: precondition_failed, 230\nemployee(name = \"nemo\", salary = 2300, dept = \"ENG\")\n// #<employee #7 name: \"nemo\" salary: 2300 dept: \"ENG\">\n``````\n\n## 5.7 Patterns of Data\n\nSlogan has the ability to take apart data structures and do pattern matching on them. A pattern match expression has the following general form:\n\n``````\nmatch (value)\npattern_1 -> result_1\n| pattern_2 -> result_2\n| ...\n``````\n\nIf `value` does not match any of the listed patterns, a `no_match_found` error is raised.\n\nLet us begin our exploration of pattern matching with the help of a few simple examples. Later we will see how this facility can lead to the clean and concise specification of a non-trivial algorithm.\n\nOur first example re-implements a function from Slogan `core` - the `length` function that return the number of elements in a list.\n\n``````\nfunction length(xs)\nmatch(xs)\n[] -> 0\n| h:t -> 1 + length(t)\n``````\n\nOur definition of `length` does a pattern destructuring on its argument. If the pattern matches an empty list, `0` is returned. If the pattern matches a head and a tail pair, the length is `1` added to the length of tail.\n\nWe can write this function more compactly, by eliminating the explicit declaration of `match`. We also do not need to bind the `h` variable because we don't use it. This can be replaced by the `_` wildcard character.\n\n``````\nfunction length(xs)\n| [] -> 0\n| _:t -> 1 + length(t)\n``````\n\nPattern matching can be done on any data type with a literal representation - numbers, strings, lists, arrays, hash tables, sets and so on. 
A few more examples follows:\n\n``````\nfunction factorial(n)\n| 0 -> 1\n| _ -> n * factorial(n-1)\n\nfactorial(10)\n// 3628800\nfactorial(3)\n//6\n\n// Evaluate arithmetic expressions in the\n// format #{operation: [expr1, expr2]}, where `operation`\n// is one of the four symbols - 'add, 'sub, 'mul and 'div.\n// An expression can also be a numeric literal.\nfunction calculate(expr)\n| #{'add: [e1, e2]} -> calculate(e1) + calculate(e2)\n| #{'sub: [e1, e2]} -> calculate(e1) - calculate(e2)\n| #{'mul: [e1, e2]} -> calculate(e1) * calculate(e2)\n| #{'div: [e1, e2]} -> calculate(e1) / calculate(e2)\n| e where is_number(e) -> e\n\n// 700\ncalculate(#{'sub: [100, 20]})\n// 80\ncalculate(#{'sub: [100, \"ok\"]})\n//> error: no_match_found``````\n\nThe `calculate` function makes use of the `where` guard in the last pattern to make sure that the value that gets bound to the variable is a number.\n\nIn addition to the built-in data structures, user defined records can also be destructured:3\n\n``````\nrecord rectangle(width, length)\n\nfunction area(shape)\n| rectangle(width, length) -> width * length\n\n// 335.88497980399995\narea(rectangle(width=20, length=52.78))\n// 1055.6\n``````\n\nA record pattern match can refer members by position. This is achieved by prefixing the member name by the `@` character. (This means record members are not allowed to start with the `@` character). Let us rewrite the `area` function by destructuring the record members by position:\n\n``````\nfunction area(shape)\n| circle(@r) -> 3.14159 * (@r * @r)\n| rectangle(@w, @l) -> @w * @l\n``````\n\nIf a function with an implicit match takes more than one parameter, all the arguments should be packaged into a list and passed to the pattern matcher:\n\n``````\nfunction f(a, b)\n| [1, b] -> b * 10\n| [2, b] -> b * 100\n\nf(1, 2)\n// 20\nf(2, 2)\n// 200\n``````\n\nSlogan support `or-patterns`, which is a feature that allows us to collapse multiple clauses with identical right-hand sides into a single clause:\n\n``````\nfunction f(xs)\n| [a, b]\n| #[a, b]\n| #(a, b) -> a * b\n\nf([10, 20])\n// 200\n\nf(#[10, 20])\n// 200\n\nf(#(10, 20))\n// 200``````\n\nRepetition of the same pattern can be avoided by using the special pattern variable `%`, which always refer to the previous pattern checked.\n\n``````\nmatch ([1, 2, 3])\n[_, b, _] where b >= 10 -> 'hi\n| % where b >= 1 -> 'hello\n// hello\n``````\n\n### 5.7.1 A self-balancing search tree\n\nNow that we have covered the basics, let us write some code that exploits the true expressive power of pattern matching! We are going to implement a data structure that is often flagged as \"advanced\" in text books. This is the red-black tree - one of the most popular of all balanced binary trees.\n\nIn a red-black tree every node is colored either red or black and it satisfies the following two balance invariants:\n\n1. No red node has a red child.\n2. Every path from the root to an empty node contains the same number of black nodes.\n\nThese invariants guarantee that the longest possible path in the tree is not longer than the shortest possible path times two. (The longest path has alternating red and black nodes and the shortest path has only black nodes.)\n\nThese invariants are enforced while inserting a new node, using a `balance` function. This function re-configures all possible black-red-red paths into a red-black-black path. The black-red-red paths can occur in four configurations, depending on whether the red node is a left or right child. 
The rewrite required is the same in all cases. Pattern matching makes it possible to write the `balance` function in a compact, declarative style:\n\n``````\nfunction balance(color, t, z, d)\n| ['b, ['r,['r,a,x,b],y,c], z, d]\n| ['b, ['r,a,x,['r,b,y,c]], z, d]\n| ['b, a, x, ['r,['r,b,y,c],z,d]]\n| ['b, a, x, ['r,b,y,['r,c,z,d]]]\n-> ['r,['b,a,x,b],y,['b,c,z,d]]\n| _ -> [color, t, z, d]\n``````\n\nThe complete code for the red-black tree is available for download. For a detailed description of a red-black tree structure similar to the one presented here, please see the book \"Purely Functional Data Structures\" by Chris Okasaki.\n\n1By that definition, a string is also an aggregate data type because it is essentially an array of characters. We treated it as a basic data type because of its importance in performing many useful tasks, like searching for patterns in textual data.\n\n2Pairs can be made out of any expressions. For example, the expression `1 + 2:100` will create the pair `3:100`. Keep in mind that function call expressions tend to bind tightly to the right. So if you want to glue the two expressions `1 + inc(1)` and `100` into a pair, the first expression must be enclosed in parentheses, as in, `(1 + inc(1)):100`. This will result in the pair `3:100` as expected.\n\n3Pattern based destructuring can be performed on user-defined composite types as well. Slogan allows you to define your own composites that behave like lists and hash tables. This will be the subject of a later chapter."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7548619,"math_prob":0.96419966,"size":21679,"snap":"2019-51-2020-05","text_gpt3_token_len":6108,"char_repetition_ratio":0.109250285,"word_repetition_ratio":0.017728532,"special_character_ratio":0.31763458,"punctuation_ratio":0.16570903,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9845281,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T05:45:11Z\",\"WARC-Record-ID\":\"<urn:uuid:c60c7f34-de17-4f6d-8716-09cb8caddf72>\",\"Content-Length\":\"38733\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1fb06ec1-1d22-4f3f-8b1b-9b60360bd5d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ce07cb6-61fa-4e0a-9a7a-f57d8be34e4d>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://schemer.in/slogan/docs/book/cdt.html\",\"WARC-Payload-Digest\":\"sha1:KZ5P3GMNXRRODU7LIHOGXCBUVRXDPJ7V\",\"WARC-Block-Digest\":\"sha1:Y547GF54F3EKPJYTGJAPZPVKUIMWFT3M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251788528.85_warc_CC-MAIN-20200129041149-20200129071149-00528.warc.gz\"}"} |
https://feed.nuget.org/packages/pi.science.api/1.2.6 | [
"# pi.science.api 1.2.6\n\nScientific library.\n\nThere is a newer version of this package available.\nSee the version list below for details.\n`Install-Package pi.science.api -Version 1.2.6`\n`dotnet add package pi.science.api --version 1.2.6`\n`<PackageReference Include=\"pi.science.api\" Version=\"1.2.6\" />`\nFor projects that support PackageReference, copy this XML node into the project file to reference the package.\n`paket add pi.science.api --version 1.2.6`\n\n## PI Science API\n\nScientific library for .NET (standard 2.0).\n\n#### Supported areas:\n\n• Statistics (descriptive statistics, statistics classes) .\n• Math (matrices, Cramer`s rule, Gamma function, Beta function, Error function,\nGamma incomplete function, numerical integration-Trapezoidal rule, numerical integration-Rectangle rule,\nnumerical integration-Simpsons rule).\n• Discrete math (primes, prime factorization, prime factorization-Fermat).\n• Probability (factorial, combination, Catalan number).\n• Regression (linear, polynomial, exponential, exponential modified, power, Gompertz, logistic).\n• Smoothing (moving average, median smoothing, simple exponential smoothing, double exponential smoothing).\n• Probability distribution (normal distribution, chi-square distribution, students distribution, f distribution, log-normal distribution, exponential distribution,\nPoisson distribution, Erlang distribution, Weibull distribution, Rayleigh distribution, Pareto distribution).\n• Hypothesis testing (Shapiro-Wilk(original), Shapiro-Wilk(expanded), Skewness normality test, Kurtosis normality test",
null,
", D`Agostino-Pearson normality test",
null,
", Jarque-Bera normality test",
null,
").\n\n#### Examples\n\nExamples for every area you can find here.",
null,
"#### Scheduled areas for next development:\n\nIntegrals, correlations, interpolations, hypothesis testing, fractions support, neural networks, graph algorithms, cluster analysis...\n\n## PI Science API\n\nScientific library for .NET (standard 2.0).\n\n#### Supported areas:\n\n• Statistics (descriptive statistics, statistics classes) .\n• Math (matrices, Cramer`s rule, Gamma function, Beta function, Error function,\nGamma incomplete function, numerical integration-Trapezoidal rule, numerical integration-Rectangle rule,\nnumerical integration-Simpsons rule).\n• Discrete math (primes, prime factorization, prime factorization-Fermat).\n• Probability (factorial, combination, Catalan number).\n• Regression (linear, polynomial, exponential, exponential modified, power, Gompertz, logistic).\n• Smoothing (moving average, median smoothing, simple exponential smoothing, double exponential smoothing).\n• Probability distribution (normal distribution, chi-square distribution, students distribution, f distribution, log-normal distribution, exponential distribution,\nPoisson distribution, Erlang distribution, Weibull distribution, Rayleigh distribution, Pareto distribution).\n• Hypothesis testing (Shapiro-Wilk(original), Shapiro-Wilk(expanded), Skewness normality test, Kurtosis normality test",
null,
", D`Agostino-Pearson normality test",
null,
", Jarque-Bera normality test",
null,
").\n\n#### Examples\n\nExamples for every area you can find here.",
null,
"#### Scheduled areas for next development:\n\nIntegrals, correlations, interpolations, hypothesis testing, fractions support, neural networks, graph algorithms, cluster analysis...\n\n## Release Notes\n\n+ Added new test class pi.science.hypothesistesting.test.PIKurtosisTestTest.\n+ Added new method pi.statistics.api.PIVariable.GetSampleKurtosis() - (Excel version).\n+ Added new test class pi.science.hypothesistesting.test.PIDAgostinoPearsonTest.\n+ Added new test class pi.science.hypothesistesting.test.PIJarqueBeraTest.\n\n## Dependencies\n\n• #### .NETStandard 2.0\n\n• No dependencies.\n\n## Used By\n\n### NuGet packages\n\nThis package is not used by any NuGet packages.\n\n### GitHub repositories\n\nThis package is not used by any popular GitHub repositories."
]
| [
null,
"https://www.josefpirkl.com/images/new.jpg",
null,
"https://www.josefpirkl.com/images/new.jpg",
null,
"https://www.josefpirkl.com/images/new.jpg",
null,
"https://www.josefpirkl.com/software/pi_science_api/images/header1.png",
null,
"https://www.josefpirkl.com/images/new.jpg",
null,
"https://www.josefpirkl.com/images/new.jpg",
null,
"https://www.josefpirkl.com/images/new.jpg",
null,
"https://www.josefpirkl.com/software/pi_science_api/images/header1.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.63931066,"math_prob":0.4458989,"size":4167,"snap":"2020-34-2020-40","text_gpt3_token_len":936,"char_repetition_ratio":0.14700937,"word_repetition_ratio":0.72540045,"special_character_ratio":0.19030477,"punctuation_ratio":0.29036826,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978838,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-24T20:49:32Z\",\"WARC-Record-ID\":\"<urn:uuid:ebbc42fc-4473-461e-a4a1-3a4b0437f017>\",\"Content-Length\":\"53421\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc1ddbb1-6853-413f-84fe-fe356e952780>\",\"WARC-Concurrent-To\":\"<urn:uuid:b4d71477-59bb-4706-849e-0fa95db8719a>\",\"WARC-IP-Address\":\"52.237.135.91\",\"WARC-Target-URI\":\"https://feed.nuget.org/packages/pi.science.api/1.2.6\",\"WARC-Payload-Digest\":\"sha1:TYJ57JUH2V7U4O2RQGXWMFILSQFZKJ4K\",\"WARC-Block-Digest\":\"sha1:HCDLCRFVJN63ZE3LQZCPJLBY2ZZLFF6T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400220495.39_warc_CC-MAIN-20200924194925-20200924224925-00641.warc.gz\"}"} |
https://www.spiedigitallibrary.org/ebooks/PM/Electro-Optical-System-Analysis-and-Design-A-Radiometry-Perspective/6/Sensors/10.1117/3.1001964.ch6?SSO=1 | [
"Translator Disclaimer\nChapter 6:\nSensors",
null,
"Abstract\nThis chapter provides an introductory overview of sensors. The analysis is limited to small-angle (paraxial) optics. The purpose is to equip the reader to do a first-order design in the system context of this book. Detailed sensor design is beyond the scope of this chapter and indeed not required for this text. Fundamental to the sensor concept is the geometry of solid angles and how these are effected in the sensor. The second important element is the conversion of optical energy into electrical energy, including the effect of noise. The path by which a ray propagates through an optical system can be mathematically calculated. The sine, cosine, and tangent functions are used in this calculation. These functions can be written as infinite Taylor series, i.e., sin(x) = x − x3/3! + x5/5! − x7/7! +· · ·. The paraxial approximation only uses the first term in the sum sin(x) ≈ tan(x) ≈ x, cos(x) ≈ 1. The paraxial approximation is valid only for rays at small angles with and near the optical axis. The paraxial approximation can be effectively used for first-order design and system layout despite the small-angle limitations. The coverage of detectors and noise in this chapter is, similarly, only a brief introduction: sufficient detail is given to support first-order design and modeling.",
null,
""
]
| [
null,
"https://www.spiedigitallibrary.org/Images/eBooks/VolumeCovers/PM/PM236-245.jpg",
null,
"https://www.spiedigitallibrary.org/Content/themes/SPIEImages/Share_white_icon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88092077,"math_prob":0.90857154,"size":1378,"snap":"2020-24-2020-29","text_gpt3_token_len":295,"char_repetition_ratio":0.11208151,"word_repetition_ratio":0.0,"special_character_ratio":0.20972423,"punctuation_ratio":0.108949415,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9891586,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T08:46:26Z\",\"WARC-Record-ID\":\"<urn:uuid:88503985-449e-40f8-a855-5036a993286b>\",\"Content-Length\":\"106214\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f0f142d-0e75-408b-ade3-bf82bf576c9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:94de6015-283c-4383-9c4d-34357a38fe7b>\",\"WARC-IP-Address\":\"107.154.251.12\",\"WARC-Target-URI\":\"https://www.spiedigitallibrary.org/ebooks/PM/Electro-Optical-System-Analysis-and-Design-A-Radiometry-Perspective/6/Sensors/10.1117/3.1001964.ch6?SSO=1\",\"WARC-Payload-Digest\":\"sha1:VRRXZUVPK4VUPU7B5S5VGRD2XF25CSCJ\",\"WARC-Block-Digest\":\"sha1:2H4JYWAVBOIPJFJXT2E3AXXSTBODRAB3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347423915.42_warc_CC-MAIN-20200602064854-20200602094854-00360.warc.gz\"}"} |
https://mathoverflow.net/questions/207517/understanding-of-rough-path | [
"# understanding of rough path\n\nA rough path is defined as an ordered pair $(X, \\mathbb X)$, where $X$ is a path mapping from $[0,T]$ to some Banach space $V$ and $\\mathbb X:[0,T]^2 \\mapsto V^2$ is another mapping for additional information on the curve $X$.\n\nI am not quite into their motivation, although there are some discussions online. In particular, I find the following remark in the first paragraph of chapter 9 of the book (link) `Multidimensional Stochastic Processes as Rough Paths: Theory and Applications' by Friz and Victoir: Consider $X$ of finite $p$-variation with $p\\ge 2$. ... the knowledge of higher indefinite iterated integrals up to order $N = [p]$ must be an apriori information, i.e. assumed to be known.\n\nIntuitively, I thought, as long as the curve $X$ is given, all the attached information (like $\\mathbb X$) shall not be apriori, i.e. one can obtain (may be hard) $\\mathbb X$ from the given $X$, which is contrary to the above.\n\nThere are many discussions on the rough path theory. However, is there any explanation on the above statement in a easier way, which can be understood to a person who has knowledge of Ito stochastic analysis but none of rough path theory?\n\nSome of the confusion may be caused by the use of the word \"information\". You are right that in a probabilistic context, one would typically like to build $\\mathbb{X}$ as a measurable function of $X$, so in this sense $X$ would contain all the information required to build $\\mathbb{X}$. The point they are making is that there is no canonical way to do this, so different constructions may produce different choices of $\\mathbb{X}$. In this sense, $\\mathbb{X}$ does encode some information not contained in $X$, since it indirectly tells you something about which construction you've used to produce it. For example, if $X$ is a Brownian motion, you can construct $\\mathbb{X}$ either by Itô integration or by Stratonovich integration (or in some other way), and inspecting $\\mathbb{X}$ would reveal information about your choice of integration.\nGoing back to the motivation, the aim is to use $X$ to solve differential equations of the type $$\\dot Y = F_0(Y) + \\sum_{i=1}^m F_i(Y) \\dot X_i\\;,$$ or, if you prefer, $$dY = F_0(Y)\\,dt + \\sum_{i=1}^m F_i(Y) \\,dX_i(t)\\;.$$ It turns out that the solution map $S\\colon X \\mapsto Y$ is simply not continuous in the $p$-variation topology as soon as $p \\ge 2$. Similarly, it is not continuous in the $\\alpha$-Hölder topology as soon as $\\alpha < 1/2$. What this means is that for any given $X$ (even a smooth one), you can find sequences of smooth functions $X^{(n)}$ and $\\bar X^{(n)}$ that both converge to $X$ in, say, the ${1\\over 2}$-Hölder topology, but such that the solutions $S(X^{(n)})$ and $S(\\bar X^{(n)})$ either converge to different limits, or fail to converge at all. In this sense, $X$ itself contains \"not enough information\", because different ways of approximating it may lead to different outcomes.\nThe point of rough path theory is to figure out what additional information should be added to $X$ so that continuity is restored, and this is precisely what $\\mathbb{X}$ encodes. Note also that this is quite standard procedure when weak forms of convergence are involved. Think of Young measures or of varifolds: in both cases they encode some \"additional information\" required to apply some nonlinear transformation to an object obtained as a weak limit."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91796833,"math_prob":0.9986158,"size":1164,"snap":"2019-43-2019-47","text_gpt3_token_len":297,"char_repetition_ratio":0.09827586,"word_repetition_ratio":0.0,"special_character_ratio":0.24914089,"punctuation_ratio":0.1308017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996793,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T08:07:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a21378cb-fd02-4993-bc1c-8193c1fa5a0a>\",\"Content-Length\":\"114683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5eca9c6-590b-4e5c-b3f3-36f8fc9141db>\",\"WARC-Concurrent-To\":\"<urn:uuid:a56833cb-348f-4263-a19b-7f914ca07e03>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/207517/understanding-of-rough-path\",\"WARC-Payload-Digest\":\"sha1:SJZVAMFBEBR7UQ6TOCCP4SOL54YUSEOI\",\"WARC-Block-Digest\":\"sha1:C76DDELDWV4VJMBEPYOGYBYBC6IB2KCS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670743.44_warc_CC-MAIN-20191121074016-20191121102016-00113.warc.gz\"}"} |
http://www.rwgrayprojects.com/coffetables/table01/details01.html | [
"# Coffee Table #1\n\nThe first coffee table is to have a regular Dodecahedron outer shape and an Icosahedron within. Here are some illustrations. The regular Dodecahedorn is shown with blue struts and golden/orange spherical vertices/hubs. The Icosahedron is shown with red struts and small golden/orange spherical vertices/hubs. Very thin green struts join the inner Icosahedron to the outer Dodecahedron.",
null,
"",
null,
"",
null,
"## Basic Geometry Information\n\nA regular Dodecahedron has 20 vertices, 12 pentagon faces and 30 edges/struts. There are 3 struts converging at each hub.",
null,
"An Icosahedron has 12 vertices, 20 triangular faces and 30 edges/struts. There are 5 struts converging at each hub.",
null,
"If you extend the edges/struts of the Icosahedron beyond the Icosahedron's vertices, you find that they merge in 20 groups of 3. These new merged/intersection points define the vertices of a regular Icosahedron.\n\n## Construction Details\n\nNOTE: All length dimensions are in inches. All angle dimensions are in degrees.\n\n### Size\n\nHeight (without glass): 16.5 inches = 15 inches Dodecahedron + 2x0.75 inches radius of hubs.\n\nGlass thickness: 0.25 inches.\n\nTotal Height (including glass top): 16.75 inches.\n\nThe Dodecahedron's hubs are to be made out of 1.5 inch diameter spherical wooden spheres.\n\nThe Icosahedron's hubs are to be made out of 1.0 inch diameter spherical wooden spheres.\n\nDodecahedron Struts (need 30): 0.5 inches diameter.\n\nVertex to Vertex length (i.e., center of hub to center of hub): 6.735 inches.\n\n0.183 inches are removed from each end of the struts because without removing this amount, two struts will intersect each other within the hub.\n\nPhysical strut length: 6.735 – 2 x 0.183 inches = 6.369 inches.\n\n3 feet x 12 inches = 36 inch dowel length.\n36 inches / 6.369 inches = 5.6524 implies we can make 5 struts per 3 foot length.\nTotal 3 foot length dowels (0.5 inches diameter) = 6 (6 x 5 = 30).",
null,
"Icosahedron Vertex Spheres (need 12): 1.0 inch diameter.\n\nIcosahedron Struts (need 30): 0.125 inches diameter.\n\nVertex to Vertex length: 4.163 inches.\n\n36 inches / 4.163 = 8.65 => 8 struts per 3 foot length.\nTotal 3 foot length dowels (0.25 inches diameter) = 4 (4 x 8 = 32).\n\nInterPolyhedron Struts (need 60): 0.125 inches diameter.\n\nVertex to Vertex length: 6.735 inches.\n\n(NOTE: These struts are to be cut “in place” and not pre-cut. This is because accumulation of error and other factors introduce unpredictable inaccuracies. These struts are surface glued to the polyhedra hubs, i.e. not recessed into the hubs.)",
null,
"## Regular Dodecahedron Hub Construction\n\nWe need to construct a jig to mark the locations on the sphere hubs where the 3 struts of the Dodecahedron are to be inserted.\n\nThe Dodecahedron's hubs are designed as follows.",
null,
"Dodecahedron hub angles.",
null,
"Dodecahedron hub angles.",
null,
"Dodecahedron hub design strut insertion lengths.\n\nMake a jig to hold the 1.5 inch diameter spheres. The jig can be so constructed as to allow the marking of 3 points on the equator as well as the \"north pole\" of the sphere.\n\nThe 3 points on the equator are 108/2 = 54 degrees apart from each other. Number these points 1, 2, 3.",
null,
"Dodecahedron hub design.\n\nMake sure the north pole point is marked.\n\nNow imagine an arc drawn from the equator at point 2, which is half way between points 1 and 3 on the equator, to the north pole. We can continue this arc through the north pole and down to the equator on the opposite side of the sphere hub from point 2.\n\nWe now need a jig to be able to mark a point (labeled 4) on this arc. The point is 121.7° from point 2 through and beyond the north pole. This point 4 is 31.7° beyond the north pole point.",
null,
"Dodecahedron hub design.\n\nWith these points marked on the 1.5 inch diameter sphere hub, we can drill a 0.5 inch diameter hole at points 1, 3, and 4 only. The hole depth need only be to the center of the sphere.\n\nWhen a dowel is inserted into the hub, it must only be inserted to a depth of 0.524 inches. This can be marked on the dowel before insertion.\n\n## Icosahedron Hub Construction\n\nA jig is made to mark 5 points on the sphere hubs indicating where holes are to be drilled for the 5 struts to be inserted into the hub.\n\nFirst, we need to understand the hub.",
null,
"Icosahedron hub sphere accommodates 5 struts.\n\nLooking down on a hub.",
null,
"Icosahedron hub sphere accommodates 5 struts.\n\nHere is a side view.",
null,
"Icosahedron hub sphere accommodates 5 struts.\n\nNote that \"C1\" defines a (blue) circle on the surface of the hub sphere. The 5 struts intersect this circle. (\"C.O.V.\" = Center Of Volume point.)\n\nThe blue circle has the following dimension and angles with respect to the vertex at the hub sphere center.",
null,
"Icosahedron hub sphere accommodates 5 struts.\n\nIts the angular dimension we are after. We will make a jig based on this angle which will work for any size hub.",
null,
"Icosahedron hub jig design.\n\nIn the above figure, the circle is the sphere hub. Within the circle is a triangle with a small 31.7° angle, as we found from the above figures.\n\nSo, if we make 5 large triangle stands, with the appropiate angle, we will be able to arrange them so all we have to do is to put a sphere between the 5 stands. Where the sphere touches the 5 stand triangles will be the points where 5 holes need to be drilled for the 5 struts entering the hub. This works independent of the sphere hub radius.\n\nThe arrangement of the 5 stand triangles is shown in the next figure.",
null,
"Icosahedron hub jig design."
]
| [
null,
"http://www.rwgrayprojects.com/coffetables/table01/table101.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/table102.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/table103.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/dodeca01.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/icosa01.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/dim01.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/dim02.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/dim03.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/dim04.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/hub01.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/hub03.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/hub04.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/ihub01.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/ihub02.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/ihub03.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/ihub06.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/ihub07.jpg",
null,
"http://www.rwgrayprojects.com/coffetables/table01/ihub08.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8753586,"math_prob":0.95688593,"size":5375,"snap":"2022-05-2022-21","text_gpt3_token_len":1471,"char_repetition_ratio":0.16533235,"word_repetition_ratio":0.059811123,"special_character_ratio":0.25693023,"punctuation_ratio":0.1346831,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9912853,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T11:59:24Z\",\"WARC-Record-ID\":\"<urn:uuid:76c57d2a-bc6c-4435-8434-f0719ebe623e>\",\"Content-Length\":\"8916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abc6a139-de85-4042-9b83-3786284fcbca>\",\"WARC-Concurrent-To\":\"<urn:uuid:580727e8-4a08-4038-8580-b90b5f37ade1>\",\"WARC-IP-Address\":\"173.236.225.222\",\"WARC-Target-URI\":\"http://www.rwgrayprojects.com/coffetables/table01/details01.html\",\"WARC-Payload-Digest\":\"sha1:LYHPDT2HEN2WBWKQMDQGJZIATHIPTOTK\",\"WARC-Block-Digest\":\"sha1:B5NXZD2PA7JR7XULIQM5DTPODHKBKDTI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662644142.66_warc_CC-MAIN-20220529103854-20220529133854-00537.warc.gz\"}"} |
https://becheler.github.io/software/quetzal-CoalTL/tutorials/niche | [
"*\n\n# Linking environments to species ecology\n\nBy niche functions, we mean here any quantity of an ecological model that is linked to environmental quantities.\n\nFor example the following function is what we would call a niche function: the growth rate is a function of the temperature.",
null,
"Usually in demogenetic models, the “true” niche functions are not precisely known, so their forms have to be inferred. In the previous picture, we would typically try to estimate the parameters $T_{opt}$, $T_{min}$ and $T_{max}$.\n\nIf you are reading this tutorial, it means that you are probably interested in simulation-based inference: in such frameworks a huge amount of simulations is needed to explore the parameters space. Quetzal allows to explore the parameter space of the niche functions with a priori better efficiency than achieved by previous simulation resources.\n\nWhy ?\n\nTypically, an ABC analysis would require:\n\n• the raw geographic dataset to be read and tranformed using an external software with a given set of parameters\n• The tranformed data to be written in memory\n• The transformed data to be read by the demogenetic program to run a simulation.\n\nAnd this read-write-read cycle would repeat millions of times as the parameters are resampled.\n\nWe advocate that it is a costly way to compute things. Instead, we prefer to integrate the model choice into the demogenetic simulation program, so the data transformations are computed on the fly rather than written in memory. Plus, it fosters scientific reproducibility.\n\nAs there are a open-ended number of possible models, and that their relevancy is very specific to the question at hand, we choose to leave the user free to define its own niche functions, and we give him the right tools to do so.\n\n## An example: the logistic growth\n\nFor example, let’s consider the typical logistic function that is used in the literature to represent the local growth process, and let’s couple the growth rate and the carrying capacity to the environmental heterogeneity.\n\nThe following picture illustrate a one-deme growing population size following a logistic growth, with carrying capacity $K=500$, for different values of $r$:",
null,
"### Mathematical description\n\nThe number of descendants $$\\tilde{N}_{x}^{t}$$ in each deme can be sampled in a distribution conditionally to a function of the the local density of parents, for example\n\n$$\\tilde{N}_{x}^{t} \\sim Poisson(g(x,t))$$,\n\nwhere $g$ can be for example a discrete version of the logistic growth:\n\n[\\begin{array}{cc|ccc} g & : & \\mathbb{X}\\times \\mathbb{N} & \\mapsto & \\mathbb{R}^{+}\n& & (x,t) & \\mapsto & \\frac{N_{x}^{t}\\times(1+r(x,t))}{1+\\frac{r(x,t)\\times N_{x}^{t}}{K(x,t)}} ~.\n\\end{array}]\n\nThe $r$, respectively $k$, term is the growth rate, respectively the carrying capacity, defined as a function of the environmental quantities with parameter $\\theta$:\n\n[\\begin{array}{ccccl} K & : & \\mathbb{X}\\times \\mathbb{N} & \\mapsto & \\mathbb{R}^{+}\n& & (x,t) & \\mapsto & f_{K}^{\\theta}(E(x,t))~,\n\\end{array}]\n\n[\\begin{array}{ccccl} r & : & \\mathbb{X} & \\mapsto & \\mathbb{R}\n& & (x,t) & \\mapsto & f_{r}^{\\theta}(E(x,t)) ~.\n\\end{array}]\n\nWe will show how to implement this model with toy functions.\n\n### Step-by-step implementation\n\n#### About the need to build callable expressions\n\nSo you learned in the geography tutorial how to retrieve the environmental functions:\n\nauto f = env[\"rain\"];\nauto g = env[\"temperature\"];\n\n\nNote: Remember that you can call f and g with space and time arguments by writing f(x,t) and g(x,t).\n\nAs the demographic expansion loop over space and time lays in the core of complex simulation objects, you do not want to pass each of these values one-by-one across the multiple layers of these objects: that would be very inefficient.\n\nInstead, it is better to give to the simulator the expression that it will call.\n\nYou just need to code a function simulating the number of children in deme $x$ at time $t$. Any expression would work: the core algorithm will deal with it if it has the right signature.\n\nIn the demography tutorial, you already learned how to build a very simple version of such expressions: it was simply twice the number of parents. Let’s remember this simple example code:\n\n// access to the demographic history database\nauto N = std::cref(history.pop_sizes());\n// capture N in a lambda expression\nauto growth = [N](auto& gen, coord_type x, time_type t){ return 2*N(x,t) ; };\n\n\nHere we will just learn how to define more complicated expressions that are mathematical compositions of the environmental functions.\n\n### Composing functions of space and time\n\nWhat you want to do is to build an expression that is the result of composing other functions, and you expect this to be easy. You would actually expect to be able to write something like:\n\nauto f = env[\"rain\"];\nauto g = env[\"temperature\"];\nauto h = f + g; // compilation error, undefined operator +\n\n\nThis code would not compile, as C++ does not natively know what adding $f$ and $g$ mean.\n\nTo enable the composition, you need to use the expressive module. This module is actually a library written by Ambre Marques, that allows to compose expressions at compile-time:\n\n#include \"my_path/quetzal/expressive.h\"\n\n// ... some code to build the environment object env\n\nauto f = env[\"rain\"];\nauto g = env[\"temperature\"];\n\nusing quetzal::expressive::use;\nauto h = use(f) + use(g); // expressive automatically define the operator +\n\n\nIn this code, h is a new object. Its type is automatically built by expressive and is unknown by the user: that is actually a good thing, as it can be very complicated. 
More importantly, as the h object is cheap to copy, it can be passed around the simulation context to the appropriate function where it will be called with spatio-temporal coordinates:\n\n// file main.cpp\nauto h = use(f) + use(g);\n\ncoord_type x;\ntime_type t;\nstd::cout << h(x,t) << std::endl;\n\n\n### Composing constant functions\n\nIn the same way, you cannot expect the following line to work:\n\nauto e = h - 4; // compilation error, undefined operator -\n\n\nThis is expected, as C++ does not natively know how to add an integer to a function. So first you have to transform the number 4 into a constant function of space and time:\n\nusing quetzal::expressive::literal_factory;\n// a small object able to produce callables:\nliteral_factory<coord_type, time_type> lit;\nauto e = h - lit(4); // now it works\n\n\nSee? You can actually freely compose any user-defined function.\n\n## Coupling environment, logistic growth model and stochastic sampling\n\nHere are some code lines implementing a possible variant of the previously described mathematical model, where the number of children is a function of the number of parents and of a constant growth rate, and where the carrying capacity is the mean of the environmental variables:\n\n// ... build the environment before this\n\nusing quetzal::expressive::literal_factory;\nusing quetzal::expressive::use;\nliteral_factory<coord_type, time_type> lit;\n\n// constant growth rate\nauto r = lit(10);\n\n// carrying capacity averaging over rain and temperature\nauto K = ( use(f) + use(g) ) / lit(2) ;\n\n// retrieving the population size history\nauto N_cref = std::cref(history.pop_sizes());\n\n// Enabling its use with expressive:\nauto N_expr = use([N_cref](coord_type x, time_type t){return N_cref(x,t);});\n\n// Making the logistic growth expression:\nauto g = N_expr*(lit(1)+r)/ (lit(1)+((r * N_expr)/K));\n\n// capturing g to build a sampling distribution\nauto sim_N_tilde = [g](generator_type& gen, coord_type x, time_type t){\nstd::poisson_distribution<history_type::N_type::value_type> poisson(g(x,t));\nreturn poisson(gen);\n};\n\n\n\nThen you can pass the sim_N_tilde expression to a demographic simulator. Remarkably, even if you change some lines you will always be able to pass this expression to the demographic simulator as long as you don't modify its signature (generator_type& gen, coord_type x, time_type t).\n\n# Conclusion\n\nOf course, this niche model is not meant to be realistic: it's a toy model. The main point is that it is very easy for the user to modify it.\n\nBut what if $r$ is unknown and you want to estimate it?\n\nThe ABC tutorial will give you insights on how to manipulate the parameters of the niche functions in an ABC framework."
]
| [
null,
"https://Becheler.github.io/pictures/niche.png",
null,
"https://Becheler.github.io/pictures/logistic.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82058334,"math_prob":0.98405874,"size":8127,"snap":"2022-40-2023-06","text_gpt3_token_len":1940,"char_repetition_ratio":0.11301243,"word_repetition_ratio":0.04075235,"special_character_ratio":0.24363233,"punctuation_ratio":0.13212435,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99771446,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T10:37:09Z\",\"WARC-Record-ID\":\"<urn:uuid:defb371e-a994-4406-b700-29b5458bafe1>\",\"Content-Length\":\"28922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11801bf5-71a8-4d51-887a-2b297ef26868>\",\"WARC-Concurrent-To\":\"<urn:uuid:63ebcfea-5812-402f-80f6-f649d6922a4e>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://becheler.github.io/software/quetzal-CoalTL/tutorials/niche\",\"WARC-Payload-Digest\":\"sha1:RSDTWBH4267FE5HGQD5O36S2EEWIXXMR\",\"WARC-Block-Digest\":\"sha1:IMGKGHZWGVA7MKZEYWJERGJGSELRJKPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335350.36_warc_CC-MAIN-20220929100506-20220929130506-00782.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/70797/is-there-a-concept-of-embedding-degree-for-non-pairing-based-elliptic-curves/70819 | [
"# Is there a concept of embedding degree for non-pairing based elliptic curves?\n\nFrom this post, I learned the concept of embedding degree. Intuitively, if embedding degree of an elliptic curve $$E(F_p)$$ is $$k$$, it means there is a way to transform points in $$E(F_p)$$ to $$F_{p^k}$$. Is the concept of embedding degree only valid for pairing-based elliptic curves, or does the same hold even for non-pairing based elliptic curves?\n\n• Every elliptic curve admits a pairing, and hence the embedding degree makes sense. \"Pairing-friendly\" means that the embedding degree is particularly small and therefore the pairing can be computed efficiently, but the Weil pairing itself exists for all curves. – yyyyyyy May 24 '19 at 14:08\n• @yyyyyyy I heard that there are 3 types of pairings: weil, tate and ate pairings. So, do all 3 pairings exist on every elliptic curve? – satya May 24 '19 at 14:44\n\nAs pointed out by @yyyyyyy, every curve does have an embedding degree, i.e., there is some $$k$$ for which $$p^k - 1$$ is a multiple of $$r$$, the order one of the subgroups of a curve defined over $$\\mathbb{F}_p$$.\nThere is a relevant result from Koblitz and Balasubramanian that establishes that the probability that the embedding degree of a random $$n$$-bit curve of prime order is \"small\" is vanishingly low: $$\\mathbf{Pr}[l \\mid p^k - 1 \\text{ and } k \\le (\\log p)^2] \\le c_3 \\frac{(\\log 2^n)^9(\\log \\log 2^n)^2}{2^n} \\,.$$ As such, only \"special\" curves that are explicitly designed to have small embedding degree $$k$$, i.e., pairing-friendly curves, are effectively computable; but the pairing does exist for all of them.\n• What does it mean to say n-bit curve? If the elliptic curve is of the form $y^2 = x^3 + ax + b$, do you mean $a, b$ are randomly sampled $n$-bit values? – satya May 26 '19 at 1:02\n• I mean that $p$ is an $n$-bit value, and $a$ and $b$ are sampled from $[0, p-1]$. – Samuel Neves May 26 '19 at 1:31"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.888431,"math_prob":0.9985976,"size":1931,"snap":"2020-24-2020-29","text_gpt3_token_len":568,"char_repetition_ratio":0.1530877,"word_repetition_ratio":0.0,"special_character_ratio":0.29777318,"punctuation_ratio":0.113065325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99983686,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T04:54:29Z\",\"WARC-Record-ID\":\"<urn:uuid:3d0f6933-cdee-40c9-a6cd-b77396926381>\",\"Content-Length\":\"149681\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:198427a0-f7c3-4bd8-b3b7-259bdd941c50>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e62f841-225c-425b-9c21-f4ac446d8df8>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/70797/is-there-a-concept-of-embedding-degree-for-non-pairing-based-elliptic-curves/70819\",\"WARC-Payload-Digest\":\"sha1:KYJJCPT2WOYUOEFIFVORAUMJA4NIMWQI\",\"WARC-Block-Digest\":\"sha1:VHRL4CLQ6BK3X7UWVYOHRKWYQDYBGBUY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655898347.42_warc_CC-MAIN-20200709034306-20200709064306-00077.warc.gz\"}"} |
https://www.geeksforgeeks.org/10s-compliment-of-a-decimal-number/?ref=lbp | [
"Skip to content\nRelated Articles\n10’s Complement of a decimal number\n• Last Updated : 30 Apr, 2021\n\nGiven a decimal number N. The task is to find 10’s complement of the number N.\nExample:\n\n```Input : 25\nOutput : 10's complement is : 75\n\nInput : 456\nOutput : 10's complement is : 544```\n\n10’s complement of a decimal number can be found by adding 1 to the 9’s complement of that decimal number. It is just like 2s compliment in binary number representation.\nMathematically,\n\n10’s complement = 9’s complement + 1\n\nFor example, let us take a decimal number 456, 9’s complement of this number will be 999-456 which will be 543. Now 10s compliment will be 543+1=544.\nTherefore,\n\n10’s complement = 10len – num\n\n`Where, len = total number of digits in num.`\n\nBelow is the program to find 10’s complement of a given number:\n\n## C++\n\n `// C++ program to find 10's complement` `#include``#include` `using` `namespace` `std;` `// Function to find 10's complement``int` `complement(``int` `num)``{`` ``int` `i,len=0,temp,comp;`` ` ` ``// Calculating total digits`` ``// in num`` ``temp = num;`` ``while``(1)`` ``{`` ``len++;`` ``num=num/10;`` ``if``(``abs``(num)==0)`` ``break``; `` ``}`` ` ` ``// restore num`` ``num = temp;`` ` ` ``// calculate 10's complement`` ``comp = ``pow``(10,len) - num;`` ` ` ``return` `comp;``}` `// Driver code``int` `main()``{`` ``cout<\n\n## Java\n\n `// Java program to find 10's complement``import` `java.io.*;` `class` `GFG``{``// Function to find 10's complement``static` `int` `complement(``int` `num)``{`` ``int` `i, len = ``0``, temp, comp;`` ` ` ``// Calculating total`` ``// digits in num`` ``temp = num;`` ``while``(``true``)`` ``{`` ``len++;`` ``num = num / ``10``;`` ``if``(Math.abs(num) == ``0``)`` ``break``;`` ``}`` ` ` ``// restore num`` ``num = temp;`` ` ` ``// calculate 10's complement`` ``comp = (``int``)Math.pow(``10``,len) - num;`` ` ` ``return` `comp;``}` `// Driver code``public` `static` `void` `main (String[] args)``{`` ``System.out.println(complement(``25``));`` ` ` ``System.out.println(complement(``456``));``}``}` `// This code is contributed``// by chandan_jnu.`\n\n## Python3\n\n `# Python3 program to find``# 10's complement``import` `math` `# Function to find 10's complement``def` `complement(num):`` ``i ``=` `0``;`` ``len` `=` `0``;`` ``comp ``=` `0``;`` ` ` ``# Calculating total`` ``# digits in num`` ``temp ``=` `num;`` ``while``(``1``):`` ``len` `+``=` `1``;`` ``num ``=` `int``(num ``/` `10``);`` ``if``(``abs``(num) ``=``=` `0``):`` ``break``;`` ` ` ``# restore num`` ``num ``=` `temp;`` ` ` ``# calculate 10's complement`` ``comp ``=` `math.``pow``(``10``, ``len``) ``-` `num;`` ` ` ``return` `int``(comp);` `# Driver code``print``(complement(``25``));``print``(complement(``456``));` `# This code is contributed by mits`\n\n## C#\n\n `// C# program to find``// 10's complement``using` `System;` `class` `GFG``{``// Function to find 10's complement``static` `int` `complement(``int` `num)``{`` ``int` `len = 0, temp, comp;`` ` ` ``// Calculating total`` ``// digits in num`` ``temp = num;`` ``while``(``true``)`` ``{`` ``len++;`` ``num = num / 10;`` ``if``(Math.Abs(num) == 0)`` ``break``;`` ``}`` ` ` ``// restore num`` ``num = temp;`` ` ` ``// calculate 10's complement`` ``comp = (``int``)Math.Pow(10, len) - num;`` ` ` ``return` `comp;``}` `// Driver code``public` `static` `void` `Main ()``{`` ``Console.WriteLine(complement(25));`` ` ` ``Console.WriteLine(complement(456));``}``}` `// This code is contributed``// by chandan_jnu.`\n\n## PHP\n\n 
``\n\n## Javascript\n\n ``\nOutput:\n```75\n544```\n\nAttention reader! Don’t stop learning now. Get hold of all the important mathematical concepts for competitive programming with the Essential Maths for CP Course at a student-friendly price. To complete your preparation from learning a language to DS Algo and many more, please refer Complete Interview Preparation Course.\n\nMy Personal Notes arrow_drop_up"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.54796815,"math_prob":0.9844321,"size":4193,"snap":"2021-21-2021-25","text_gpt3_token_len":1277,"char_repetition_ratio":0.18954404,"word_repetition_ratio":0.2814136,"special_character_ratio":0.35320774,"punctuation_ratio":0.17419355,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999395,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-18T20:36:47Z\",\"WARC-Record-ID\":\"<urn:uuid:913742eb-f44c-4ec6-a82b-9cc1593d6b8e>\",\"Content-Length\":\"142511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7496300-ee24-4be3-bea3-31c32298ac7e>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c759f44-1a07-4340-ab53-0c4a52296571>\",\"WARC-IP-Address\":\"23.205.105.180\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/10s-compliment-of-a-decimal-number/?ref=lbp\",\"WARC-Payload-Digest\":\"sha1:EY5BM7O66PG6M67WLWWE2YOP3DGZUF5K\",\"WARC-Block-Digest\":\"sha1:XMT4DPXGLN2WGYZYAYE44KSOP3HSK2DS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487641593.43_warc_CC-MAIN-20210618200114-20210618230114-00600.warc.gz\"}"} |
http://swmath.org/software/28257 | [
"# UGM\n\nUGM: Matlab code for undirected graphical models. UGM is a set of Matlab functions implementing various tasks in probabilistic undirected graphical models of discrete data with pairwise (and unary) potentials. Specifically, it implements a variety of methods for the following four tasks: Decoding: Computing the most likely configuration. Inference: Computing the partition function and marginal probabilities. Sampling: Generating samples from the distribution. Training: Fitting a model to a given dataset. The first three tasks are implemented for arbitrary discrete undirected graphical models with pairwise potentials. The last task focuses on Markov random fields and conditional random fields with log-linear potentials. The code is written entirely in Matlab, although more efficient mex versions of many parts of the code are also available.\n\n##",
null,
"Keywords for this software\n\nAnything in here will be replaced on browsers that support the canvas element\n\n## References in zbMATH (referenced in 1 article )\n\nShowing result 1 of 1.\nSorted by year (citations)"
]
| [
null,
"http://swmath.org/media/img/minus.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87356013,"math_prob":0.6453334,"size":1139,"snap":"2021-04-2021-17","text_gpt3_token_len":226,"char_repetition_ratio":0.109251104,"word_repetition_ratio":0.0,"special_character_ratio":0.17822652,"punctuation_ratio":0.13297872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97560084,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-12T16:11:58Z\",\"WARC-Record-ID\":\"<urn:uuid:748c4004-7bc9-4664-ba87-1c5e9f313723>\",\"Content-Length\":\"19516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99d9c00d-11ba-465f-85a4-f273aedcce37>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e2d1e3a-a122-4bc5-8890-dff3c1089fd9>\",\"WARC-IP-Address\":\"141.66.193.30\",\"WARC-Target-URI\":\"http://swmath.org/software/28257\",\"WARC-Payload-Digest\":\"sha1:NO4R6VS5OCB6XDXANB5EM4DP2WS6MCVB\",\"WARC-Block-Digest\":\"sha1:MSG4T7C65L6TCHVLKQZXJQ73WOTM3BYU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038067870.12_warc_CC-MAIN-20210412144351-20210412174351-00354.warc.gz\"}"} |
https://sport-net.org/can-a-ray-have-3-points/ | [
"",
null,
"Encyclopedia and sports reference site, we share sports news and information on a daily basis. Quality articles, guides and questions-answers.\n\n# Can a ray have 3 points?\n\nC\n\nPossible Answers: Ray has two end points. A line segments connects to itself forming a shape, a ray does not. A line segment has two end points, a ray only has one.\n\nIn addition, Can a line have 3 points?\n\nThese three points all lie on the same line. This line could be called ‘Line AB’, ‘Line BA’, ‘Line AC’, ‘Line CA’, ‘Line BC’, or ‘LineCB’ .\n\nFurthermore, Can a ray be called SR and RS?\n\nRay SR can not be called RS because rays only go in one direction is the ray were RS it would be going the opposite direction or ray SR.\n\nAlso, What do you call the points lying on the same line? Points that lie on the same line are called collinear points. If there is no line on which all of the points lie, then they are noncollinear points.\n\nWhat are three points on a line called?\nThree or more points that lie on the same line are collinear points . Example : The points A , B and C lie on the line m .\n\n## What do you call a line with 3 points?\n\nThree or more points , , , …, are said to be collinear if they lie on a single straight line. . A line on which points lie, especially if it is related to a geometric figure such as a triangle, is sometimes called an axis. Two points are trivially collinear since two points determine a line.\n\n## What three points are collinear?\n\nThree or more points are said to be collinear if they all lie on the same straight line. If A, B and C are collinear then. If you want to show that three points are collinear, choose two line segments, for example.\n\n## What is a ray Sr?\n\nIn ray SR, S is the initial point and R is the terminal point. In ray RS, these points are swapped. That is, the initial point is R while the terminal point is S. So, ray RS is oppositely oriented to SR.\n\n## Can you reverse the name of a Ray?\n\nThe ray AB consists of the endpoint A and all points on line AB that lie on the same side of A as B. The letters can not be reversed or you are referring to a different ray.\n\n## What is the endpoint of Ray Sr?\n\nThe endpoint of ray SR is S. 8. A line segment has definite length. … If ray YX and YZ have common endpoint, then they are opposite rays.\n\n## What is XY and Z called?\n\nThere are no standard names for the coordinates in the three axes (however, the terms abscissa, ordinate and applicate are sometimes used). The coordinates are often denoted by the letters X, Y, and Z, or x, y, and z. The axes may then be referred to as the X-axis, Y-axis, and Z-axis, respectively.\n\n## Are points on the same line?\n\nCollinear Points: points that lie on the same line. Coplanar Points: points that lie in the same plane. Opposite Rays: 2 rays that lie on the same line, with a common endpoint and no other points in common.\n\n## What do you call the points that do not lie on the same line?\n\nA set of points which do not lie on the same line are called as non collinear points.\n\n## What is the formula of collinear points?\n\nSol: If the A, B and C are three collinear points then AB + BC = AC or AB = AC – BC or BC = AC – AB. 
If the area of triangle is zero then the points are called collinear points.\n\n## What do you call a set of collinear points?\n\nExplanation: line is a set of collinear points that extends indefinitely into two opposite direction.\n\n## Are there three points that will not be contained in one line?\n\nFor any two points, there is exactly one line containing them. … Any three points lie in at least one plane, and any three points not on the same line lie in exactly one plane. If two planes intersect, their intersection is a line.\n\n## What are three non collinear points?\n\nPoints B, E, C and F do not lie on that line. Hence, these points A, B, C, D, E, F are called non – collinear points. If we join three non – collinear points L, M and N lie on the plane of paper, then we will get a closed figure bounded by three line segments LM, MN and NL.\n\n## Which figure is formed by three noncollinear points?\n\nA triangle is a figure formed by three segments joining three noncollinear points. Each of the three points joining the sides of a triangle is a vertex.\n\nAlso read How many draft picks do the Broncos have in 2021?\n\n## How does a ray look like?\n\nLike a sunray, a ray is part of a line that has a fixed starting point but does not have an endpoint. A ray can extend infinitely in one direction, meaning that a ray can go on forever in one direction.\n\n## What figure has one endpoint?\n\nA ray has one endpoint and continues forever in one direction. A ray has one endpoint and continues forever in one direction.\n\n## What is formed when two rays are joined with a common endpoint?\n\nAngle. An angle is formed by two rays with a common endpoint. … The common endpoint is called the vertex of the angle.\n\n## Are opposite rays equal?\n\nA pair of opposite rays are two rays that have the ‘same endpoint and extend in opposite directions. So, together a pair of opposite rays always forms a straight line. … So, when you name opposite rays, the first letter in the name of both rays must be the same.\n\n## What is the real number that corresponds to a point?\n\nA point is chosen on the number line as representing the real number 0. This point is called the origin on the number line. Points on the number line to the left of 0 represent negative real numbers.\n\n## What is the opposite of a Ray?\n\nOpposite rays are two rays that both start from a common point and go off in exactly opposite directions. Because of this the two rays (QA and QB in the figure above) form a single straight line through the common endpoint Q. When the two rays are opposite, the points A,Q and B are collinear.",
null,
"Answred by. Andrew Brost",
null,
"Encyclopedia and sports reference site, we share sports news and information on a daily basis. Quality articles, guides and questions-answers."
]
| [
null,
"https://sport-net.org/wp-content/uploads/2021/01/logo-sport-net-official.png",
null,
"https://i0.wp.com/sport-net.org/wp-content/uploads/2021/01/cropped-logo-square.png",
null,
"https://sport-net.org/wp-content/uploads/2021/01/logo-sport-net-official.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94827485,"math_prob":0.98269856,"size":5414,"snap":"2023-14-2023-23","text_gpt3_token_len":1269,"char_repetition_ratio":0.19463956,"word_repetition_ratio":0.05178908,"special_character_ratio":0.23162173,"punctuation_ratio":0.12520868,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9893727,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T15:37:21Z\",\"WARC-Record-ID\":\"<urn:uuid:48225c3f-2036-40c7-af06-58061446e319>\",\"Content-Length\":\"133660\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a49cab0f-191e-4161-9e09-2570adc4185b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf8b6ab1-e7ee-4901-912c-ee63db86b371>\",\"WARC-IP-Address\":\"149.56.14.162\",\"WARC-Target-URI\":\"https://sport-net.org/can-a-ray-have-3-points/\",\"WARC-Payload-Digest\":\"sha1:5FKZO3WXBLMAZH6632HMKMEJJIHJJECK\",\"WARC-Block-Digest\":\"sha1:T3KMBB6IQXVT4XNX5BPQKCQ7K3LIMP3X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657720.82_warc_CC-MAIN-20230610131939-20230610161939-00478.warc.gz\"}"} |
https://trumpexcel.com/unique-items-from-a-list-in-excel/ | [
"# How to Get Unique Items from a List in Excel Using Formulas\n\nIn this blog post, I will show you a formula to get a list unique items from a list in excel that has repetitions. While this can be done using Advanced Filter or Conditional Formatting, the benefit of using a formula is that it makes your unique list dynamic. This means that you continue to get a unique list even when you add more data to the original list.",
null,
"##### Get Unique Items from a List in Excel Using Formulas\n\nSuppose you have a list as shown above (which has repetitions) and you want to get unique items as shown on the right.\n\nHere is a combination of INDEX, MATCH and COUNTIF formulas that can get this done:\n\n`=IFERROR(INDEX(\\$A\\$2:\\$A\\$11,MATCH(0,COUNTIF(\\$C\\$1:C1,\\$A\\$2:\\$A\\$11),0)),\"\")`\n##### How it works",
null,
"When there are no more unique items, the formula displays an error. To handle it, I have used the Excel IFERROR function to replace the error message with a blank.\n\nSince this is an array formula, use Control + Shift + Enter instead of Enter.\n\nThis is a smart way to exploit the fact that MATCH() will always return the first matching value from a range of values. For example, in this case, MATCH returns the position of the first 0, which represents the first non-matching item.\n\nI also came up with another formula that can do the same thing (its longer but uses a smart MATCH formula trick)\n\n`=IFERROR(INDEX(\\$A\\$2:\\$A\\$11,SMALL(MATCH(\\$A\\$2:\\$A\\$11,\\$A\\$2:\\$A\\$11,0), SUM((COUNTIF(\\$A\\$2:\\$A\\$11,\\$C\\$1:C1)))+1)),\"\")`\n\nI will leave it for you to decode. This is again an array formula, so use Control + Shift + Enter instead of Enter.\n\nIn case you come up with a better formula or a smart trick, do share it with me.\n\n##### Related Tutorials:",
null,
"FREE EXCEL BOOK\n\n## Get 51 Excel Tips Ebook to skyrocket your productivity and get work done faster\n\n### 24 thoughts on “How to Get Unique Items from a List in Excel Using Formulas”\n\n1. How can I do this without using Array Formulas? It is now slowing down my data sheet with only over 180 rows\n\n2. This Formular returns always 0. What do I do wrong? Where do I use CTRL+Shift+Enter? Why?\n\n3. Great post. How would the formula look if you were wanting to do the same thing (Return a list of unique values) but with the data sourced from across 2 or more columns????\n\n4. This formula is not working for me, I always get ‘0’, I have dragged the formula to columns by double clicking the + sign\n\n5. Hai sumit, This formula is ok for small data sets but if we need to work with a huge data set i want to know how to use the unique values generated from a pivot table in excel forumlas and functions.\n\n6. But isn’t Pivot Table can do the same job easier\n\n• It can, but Pivot table is not dynamic so you need to refresh it every time there is a change in the back-end data.\n\n7. In which cell this formula will go?\n\n• In this example, the formula is in C2:C11\n\n8. In which cell this formula will go?\n\n9. Hi Sumit i’M looking for this formula quite a long time.Great work!!. Also Created a table version of this formula using structured references and working fine and would like to share this\n\nWhere Unique is the Table name and Original List ,Unique Value are its column.\n\n• Hi Karthik.. Thanks for sharing the formula. It is almost always a good idea to convert data range into a table.\n\n• May i Get the example excel file link for this.\n\n10. Hi! I have fixed the error and now the formula works well. Thanks a lot! It’s a very useful formula. But now I have two columns of data, do you know how can I combine multiple columns of data and remove the duplicates?\n\n• Hi Wang.. Glad it was helpful. In your query, do you have numbers in the 2 columns, or there is a mix of numbers and alphabets.\n\nAlso, can you create a helper (additional) column, copy paste this data to get it in one column and use this formula, or you looking for a dynamic formula?\n\n• Hi! Thanks for your reply! Yes I have numbers in 2 columns. Right now I’m doing with a helper (additional) column, but is there a dynamic formula that can do this without copy and paste?\n\n• Try this: (Assuming you have data in A2:B18)\n\n=IFERROR(IF(ROWS(\\$C\\$2:C2)<=COUNT(\\$A\\$2:\\$B\\$18),INDEX(\\$A\\$2:\\$B\\$18,IF(ROWS(\\$C\\$2:C2)<=COUNT(\\$A\\$2:\\$A\\$18),ROWS(\\$C\\$2:C2),ROWS(\\$C\\$2:C2)-COUNT(\\$A\\$2:\\$A\\$18)),IF(ROWS(\\$C\\$2:C2)<=COUNT(\\$A\\$2:\\$A\\$18),1,2)),\"\"),\"\")\n\nThis would give you a single column list with data from both the columns (and this is dynamic)\n\nI am sure there could be a shorter way, but if this works for you, nothing like that.\n\n• Thanks! I think your formula would work. But actually the data are not in two adjacent columns, meaning they are in \\$A\\$1:\\$A\\$10 and \\$E\\$1:\\$E\\$10. How should I change your formula to make it work with these two column?\n\n• Try this:\n\n=IFERROR(IF(ROWS(\\$C\\$2:C2)<=COUNT(\\$A\\$2:\\$A\\$18,\\$E\\$2:\\$E\\$18),INDEX(\\$A\\$2:\\$E\\$18,IF(ROWS(\\$C\\$2:C2)<=COUNT(\\$A\\$2:\\$A\\$18),ROWS(\\$C\\$2:C2),ROWS(\\$C\\$2:C2)-COUNT(\\$A\\$2:\\$A\\$18)),IF(ROWS(\\$C\\$2:C2)<=COUNT(\\$A\\$2:\\$A\\$18),1,5)),\"\"),\"\")\n\n• Thanks! I’ll try it out.\n\n11. Hi! This formula is exactly what I’m looking for, but I got an error with it. I attach the screenshot here. 
Sorry that I’m kind of new to excel..hope you can help me solve this problem..\n\n• Hello, I got the same exception. How can I fix it?\n\n12. Very usefull……….\n\n• Thanks Ankur. Glad you found this useful"
]
| [
null,
"data:image/svg+xml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgNDkyIDM5MSIgd2lkdGg9IjQ5MiIgaGVpZ2h0PSIzOTEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PC9zdmc+",
null,
"data:image/svg+xml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgMTQ2NSA0MzgiIHdpZHRoPSIxNDY1IiBoZWlnaHQ9IjQzOCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48L3N2Zz4=",
null,
"data:image/svg+xml;base64,PHN2ZyB2aWV3Qm94PSIwIDAgMjUwIDMwMCIgd2lkdGg9IjI1MCIgaGVpZ2h0PSIzMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PC9zdmc+",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8721197,"math_prob":0.9072759,"size":5165,"snap":"2021-43-2021-49","text_gpt3_token_len":1407,"char_repetition_ratio":0.13912033,"word_repetition_ratio":0.03084223,"special_character_ratio":0.27686352,"punctuation_ratio":0.14510779,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99054325,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T22:24:45Z\",\"WARC-Record-ID\":\"<urn:uuid:4f29f9d5-95bd-41b8-8561-2a7fa18d0902>\",\"Content-Length\":\"234064\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8280a30-a272-4381-80fa-0a1dc7d57b15>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ac394bd-2c87-4da0-926a-7acb2921b7f8>\",\"WARC-IP-Address\":\"162.159.135.42\",\"WARC-Target-URI\":\"https://trumpexcel.com/unique-items-from-a-list-in-excel/\",\"WARC-Payload-Digest\":\"sha1:4MZ6AZRAFDFUGTWJT6A7CIOHG7JEWG7S\",\"WARC-Block-Digest\":\"sha1:MXT5HJD7ZN4MLKR6GBEOEG6AFJZ44OHH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585522.78_warc_CC-MAIN-20211022212051-20211023002051-00614.warc.gz\"}"} |
https://google.github.io/comprehensive-rust/control-flow/blocks.html | [
"# Blocks\n\nA block in Rust contains a sequence of expressions. Each block has a value and a type, which are those of the last expression of the block:\n\n``````fn main() {\nlet x = {\nlet y = 10;\nprintln!(\"y: {y}\");\nlet z = {\nlet w = {\n3 + 4\n};\nprintln!(\"w: {w}\");\ny * w\n};\nprintln!(\"z: {z}\");\nz - y\n};\nprintln!(\"x: {x}\");\n}``````\n\nIf the last expression ends with `;`, then the resulting value and type is `()`.\n\nThe same rule is used for functions: the value of the function body is the return value:\n\n``````fn double(x: i32) -> i32 {\nx + x\n}\n\nfn main() {\nprintln!(\"double: {}\", double(7));\n}``````\n\nKey Points:\n\n• The point of this slide is to show that blocks have a type and value in Rust.\n• You can show how the value of the block changes by changing the last line in the block. For instance, adding/removing a semicolon or using a `return`."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7675727,"math_prob":0.9840112,"size":802,"snap":"2023-40-2023-50","text_gpt3_token_len":223,"char_repetition_ratio":0.14786968,"word_repetition_ratio":0.0,"special_character_ratio":0.33915213,"punctuation_ratio":0.1878453,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97962105,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T18:46:35Z\",\"WARC-Record-ID\":\"<urn:uuid:229a393d-a1f1-40f6-ac25-8adbbcd41bcf>\",\"Content-Length\":\"58375\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3feda68a-1bb1-4f05-8160-496358316dec>\",\"WARC-Concurrent-To\":\"<urn:uuid:68948857-59ca-456c-8cdb-4d0416717190>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://google.github.io/comprehensive-rust/control-flow/blocks.html\",\"WARC-Payload-Digest\":\"sha1:LLGQV7E2IFECNKLKCEX46V227M3TPODS\",\"WARC-Block-Digest\":\"sha1:LMJJVUITHGWS67E44ZDKN4YPXOPYGDEE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510924.74_warc_CC-MAIN-20231001173415-20231001203415-00037.warc.gz\"}"} |
https://www.clutchprep.com/physics/practice-problems/147884/part-awhat-is-the-amplitude-of-this-wave-part-bwhat-is-the-frequency-of-this-wav | [
"# Problem: Part AWhat is the amplitude of this wave?Part BWhat is the frequency of this wave?Part CWhat is the wavelength of this wave?\n\n###### FREE Expert Solution\n\nAmplitude is the maximum displacement from the equilibrium position.\n\nFrequency:\n\n$\\overline{){\\mathbf{f}}{\\mathbf{=}}\\frac{\\mathbf{1}}{\\mathbf{T}}}$, where T is period.\n\nThe velocity of wave propagation:\n\n$\\overline{){\\mathbf{v}}{\\mathbf{=}}{\\mathbf{f}}{\\mathbf{\\lambda }}}$, where f is frequency and λ is the wavelength.\n\n93% (302 ratings)",
null,
"###### Problem Details\n\nPart A\n\nWhat is the amplitude of this wave?\n\nPart B\n\nWhat is the frequency of this wave?\n\nPart C\n\nWhat is the wavelength of this wave?",
null,
"Frequently Asked Questions\n\nWhat scientific concept do you need to know in order to solve this problem?\n\nOur tutors have indicated that to solve this problem you will need to apply the What is a Wave? concept. You can view video lessons to learn What is a Wave?. Or if you need more What is a Wave? practice, you can also practice What is a Wave? practice problems."
]
| [
null,
"https://cdn.clutchprep.com/assets/button-view-text-solution.png",
null,
"https://lightcat-files.s3.amazonaws.com/problem_images/920f4bf799941b0e-1612449225760.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9445473,"math_prob":0.9751912,"size":676,"snap":"2021-04-2021-17","text_gpt3_token_len":147,"char_repetition_ratio":0.1592262,"word_repetition_ratio":0.0,"special_character_ratio":0.21153846,"punctuation_ratio":0.13380282,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991959,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-13T19:16:45Z\",\"WARC-Record-ID\":\"<urn:uuid:0a73cd18-49f3-4d4d-aca5-b2fe6e3785f5>\",\"Content-Length\":\"108328\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2852277d-9108-4b86-b4dc-3310724c7bde>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3332b7b-04d2-49fc-bc71-af01fac0674d>\",\"WARC-IP-Address\":\"3.223.71.232\",\"WARC-Target-URI\":\"https://www.clutchprep.com/physics/practice-problems/147884/part-awhat-is-the-amplitude-of-this-wave-part-bwhat-is-the-frequency-of-this-wav\",\"WARC-Payload-Digest\":\"sha1:QXVOGLL4ORGE2VU5EFMOVJBARB5OEOUZ\",\"WARC-Block-Digest\":\"sha1:JFXOSH5HV5BQHUVV6U4MZAHT47BNNN6F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038074941.13_warc_CC-MAIN-20210413183055-20210413213055-00595.warc.gz\"}"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=42&t=38133&view=print | [
"Page 1 of 1\n\n### hybridization\n\nPosted: Sun Dec 02, 2018 7:17 pm\nDoes anyone have an easy way to find the hybridization of the central atom they are able to share? Much appreciated.\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 7:21 pm\nCount the number of electron densities that are surrounding the central atom. For example, if there are 4, then the hybridization is sp3. If there are 5, then it is sp3d, since the p orbital can only contain three, and s can only have one, the 5th electron goes to the d orbital.\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 7:22 pm\nI usually look at how many other atoms are being attached to the central atom and if it is say 3 other atoms the hybridization is just one less; sp2. For 5 and 6atoms attached to the central atom you just have to remember you are using d orbitals so it will always have an sp3 and then however many d orbitals are being used. If it is 5 it will be dsp3 and 6 is d2sp3\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 9:51 pm\nthe number of hybridized orbitals is the same as the number of electron densities. e.g. a tetrahedral molecule has four electron densities, so it's sp3 hybridized.\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 10:01 pm\nLike other people said before, count the number of areas of electron density. Lone pairs count as one area of electron density. Single, double, and triple bonds also only count as one are of electron density. It might be weird to think that a double and triple bond count only as a single area of electron density since they have four and six electrons involved, respectively, but I guess it kind of makes sense since they attach only 2 atoms together. Once you have the number of areas of electron density, then you should have a corresponding number of orbitals in the hybridized orbital. E.g. 1 area of electron density = s, 2 areas = sp, 3 areas = sp2, 4 areas = sp3, 5 areas = sp3d, 6 areas = sp3d2\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 10:19 pm\ncount the number of electron densities then use \"spppdd\" and add one letter for each density. Ex. 1 region=s , 4 regions=sp^3, 5 regions=sp^3d\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 10:22 pm\nI think you have to draw out the Lewis structure for sure, then count electron densities and find subsequent and matching spd number\ns=1\nsp=2\nsp2 =3\nsp3 =4\nsp3d =5 etc\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 10:41 pm\nOne less than the electron groups that surround it (lone and bonding pairs)\n\n2 = sp\n3 = sp2\n4 = sp3\n5 = sp3d\n6 = sp3d2\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 11:03 pm\nYou have to first draw out the lewis structure, then count the electron densities and then the matching spd number. For 1 electron density it is s, 2=sp, 3=sp^2, etc.\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 11:05 pm\nDraw the lewis structure, and calculate the steric number which is the number of sigma bonds and number of lone pairs. Based on the steric number we can figure out the hybridization. E.g. steric number 2 = sp, 3= sp2 and so on.\n\n### Re: hybridization\n\nPosted: Sun Dec 02, 2018 11:13 pm\nYou should draw the Lewis structure first then count the electron densities. What I also do is I look at the shape that the central atom is a part of and determine its hybridization from it.\n\n### Re: hybridization\n\nPosted: Thu Dec 06, 2018 6:02 pm\nKarlaArevalo4D wrote:Does anyone have an easy way to find the hybridization of the central atom they are able to share? 
Much appreciated.\n\nThe easiest way is to consider the number of regions with electron density. This includes lone pairs.\n\n### Re: hybridization\n\nPosted: Thu Dec 06, 2018 6:03 pm\nDraw the lewis structure and count the number of electron densities around the atom you are looking at.\n\n### Re: hybridization\n\nPosted: Sun Dec 09, 2018 11:35 pm\nI would just count the number of electron densities"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8962944,"math_prob":0.9659721,"size":4011,"snap":"2019-43-2019-47","text_gpt3_token_len":1167,"char_repetition_ratio":0.19590716,"word_repetition_ratio":0.11956522,"special_character_ratio":0.27998006,"punctuation_ratio":0.1351661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9579101,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T22:10:31Z\",\"WARC-Record-ID\":\"<urn:uuid:057fe044-4211-485f-9944-22acb05320ba>\",\"Content-Length\":\"8715\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec1e0c05-1179-4dcb-8df0-6b59154b7123>\",\"WARC-Concurrent-To\":\"<urn:uuid:6db34670-4582-4730-8b4b-efc627646e73>\",\"WARC-IP-Address\":\"169.232.134.130\",\"WARC-Target-URI\":\"https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=42&t=38133&view=print\",\"WARC-Payload-Digest\":\"sha1:4I2P52P2CHSSPQNB3VRZA2LN4SYUFKZ2\",\"WARC-Block-Digest\":\"sha1:PICFUYRGNWW3UYHSTB5GBYBWITCYOMBI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670987.78_warc_CC-MAIN-20191121204227-20191121232227-00128.warc.gz\"}"} |
https://m-phi.blogspot.com/2014/02/the-deductive-use-of-logic-in.html | [
"## Thursday, 13 February 2014\n\n### The deductive use of logic in mathematics (Part III of 'Axiomatizations of arithmetic...')\n\n(This is the third part of the series of posts with sections of the paper on axiomatizations of arithmetic and the first-order/second-order divide that I am working on at the moment. Part I is here, and Part II is here.)\n=============================\n\n2. The deductive use\n\nHintikka describes the deductive use of logic for investigations in the foundations of mathematics in the following terms:\nIn order to facilitate, systematize, and criticize mathematicians’ reasoning about the structures they are interested in, logicians have isolated various valid inference patterns, systematized them, and even constructed various ways of mechanically generating an infinity of such inference patterns. I shall call this the deductive use of logic in mathematics. (Hintikka 1989, 64)\nSo the main difference between the descriptive and the deductive uses, as Hintikka conceives of them, seems to be that the objects of the descriptive use are the mathematical structures themselves, whereas the object of the deductive use is the mathematician’s reasoning about these very structures. This is an important distinction, but it would be a mistake to view the deductive use merely as seeking to emulate the actual reasoning practices of mathematicians. Typically, the idea is to produce a rational reconstruction that does not necessarily mirror the actual inferential steps of an ordinary mathematical proof, but which shows that the theorem in question indeed follows from the assumptions of the proof, through evidently valid inferential steps.\n\nFrege’s Begriffsschrift project is arguably the first (and for a long time the only) example of the deductive use of logic in mathematics; one of his main goals was to create a tool to make explicit all presuppositions which would ‘sneak in unnoticed’ in ordinary mathematical proofs. Here is the famous passage from the preface of the Begriffsschrift where he presents this point:\nTo prevent anything intuitive from penetrating here unnoticed, I had to bend every effort to keep the chain of inferences free of gaps. In attempting to comply with this requirement in the strictest possible way I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. Its first purpose, therefore, is to provide us with the most reliable test of the validity of a chain of inferences and to point out every presupposition that tries to sneak in unnoticed, so that its origin can be investigated. (Frege 1879/1977, 5-6, emphasis added)\nAgain, it is important to bear in mind that Frege’s project (and similar projects) is not that of describing the actual chains of inference of mathematicians in mathematical proofs. It is a normative project, even if he is not a revisionist who thinks that mathematicians make systematic mistakes in their practices (as Brouwer would later claim). He wants to formulate a tool that could put any given chain of inferences to test, and thus also to isolate presuppositions not made explicit in the proof. 
If these presuppositions happen to be true statements, then the proof is still valid, but we thereby become aware of all the premises that it in fact relies on.\n\nFor the success of this essentially epistemic project, the language in question should preferably operate on the basis of mechanical procedures, so that the test in question would always produce reliable results, i.e. ensuring that no hidden contentual considerations be incorporated into the application of rules (Sieg 1994, section 1.1). It is thus clear why Frege’s project required a deductively well-behaved system, one with a precisely formulated underlying notion of deductive consequence. Indeed, in the Grundgesetze Frege criticizes Dedekind’s lack of explicitness concerning inferential steps – incidentally, not an entirely fair criticism, given the different nature of Dedekind’s project.\n\nIt is well known that Frege’s deductive concerns were not particularly influential in the early days of formal axiomatics (and it is also well known that his own system in fact does not satisfy this desideratum entirely). In effect, in the works of pioneers such as Dedekind, Peano, Hilbert etc., a precise and purely formal notion of deductive consequence was still missing (Awodey & Reck 2002, section 3.1). It was only with Whitehead & Russell’s Principia Mathematica, published in the 1910s, that the importance of this notion started to be recognized (among other reasons, because they were the first to take Frege’s deductive project seriously). What this means for the present purposes is that Hintikka’s notion of the deductive use of logic in the foundations of mathematics is virtually entirely absent in the early days of applications of logic to mathematics, i.e. the final decades of the 19th century and the first decade of the 20th century – with the very notable exception of Frege, that is.\n\nHowever, with the ‘push’ generated by the publication of Principia Mathematica, the deductive approach became increasingly pervasive in the 1910’s, reaching its pinnacle in Hilbert’s meta-mathematical program in the 1920s. Hilbert, whose earlier work in geometry represents a paradigmatic case of the descriptive use of logic, famously proposed a new approach to the foundations of mathematics in the 1920s, one in which meta-mathematical questions were to be treated as mathematical questions themselves.\n\nHilbert’s program was not a purely deductive program as Frege’s had been. Indeed, the general idea was to treat axiomatizations/theories as mathematical objects in themselves so as to address meta-mathematical questions, but this required that not only the axioms but also the rules of inference within the theories be fully specified. Moreover, one of the key questions motivating Hilbert’s program, the famous Entscheidungsproblem, and more generally the idea of a decision procedure for all of mathematics, has a very distinctive deductive flavor: is there a decision procedure which would allow us, for every mathematical statement, to ascertain whether it is or it is not a mathematical theorem?\n\nSo the golden era of the deductive use of logic in the foundations of mathematics started in the 1910s, after the publication of Principia Mathematica, and culminated in the 1920s, with Hilbert’s program. 
Naturally, Gödel’s discovery that there can be no complete and computable axiomatization of the first-order theory of the natural numbers in the early 1930s (and later on, Turing’s and Church’s negative answers to the Entscheidungsproblem) was a real cold shower for such deductive aspirations. Indeed, the advent of model theory in the late 1930s and 1940s can be viewed as a return to the predominance of the descriptive project at the expense of the deductive project.\n\nCurrently, both projects survive in different guises, but it is fair to say that the general optimism regarding the reach of each of them in the early days of formal axiomatics, especially the deductive project, has somewhat diminished. Moreover, the extent to which expressiveness and tractability come apart has become even more conspicuous with the realization that decidable logical systems tend to be expressively very weak, even weaker than first-order logic (which is not decidable).\n\nTO BE CONTINUED...",
null,
"2.",
null,
""
]
| [
null,
"https://m-phi.blogspot.com/2014/02/the-deductive-use-of-logic-in.html",
null,
"https://m-phi.blogspot.com/2014/02/the-deductive-use-of-logic-in.html",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9312517,"math_prob":0.82907647,"size":7703,"snap":"2021-31-2021-39","text_gpt3_token_len":1761,"char_repetition_ratio":0.1487206,"word_repetition_ratio":0.028192371,"special_character_ratio":0.19395041,"punctuation_ratio":0.087628864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9598413,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T12:57:28Z\",\"WARC-Record-ID\":\"<urn:uuid:c3225694-9fb6-4be0-b090-71224a43464a>\",\"Content-Length\":\"88773\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3746576c-e8d1-4b9d-bc66-c8d4d4df2b3a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7041ca9b-cc00-45b5-8ebc-4ebb127b536b>\",\"WARC-IP-Address\":\"142.250.65.65\",\"WARC-Target-URI\":\"https://m-phi.blogspot.com/2014/02/the-deductive-use-of-logic-in.html\",\"WARC-Payload-Digest\":\"sha1:VGZPY3CBVQVCTA5FRF5ADGL4SAMWYBGP\",\"WARC-Block-Digest\":\"sha1:67AV5ZXVGDT4WUAHSOS4T5QRZSARWDW6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057861.0_warc_CC-MAIN-20210926114012-20210926144012-00595.warc.gz\"}"} |
https://www.mis.mpg.de/publications/preprints/2004/prepr2004-10.html | [
"# Preprint 10/2004\n\n## Fast Parallel Solution of Boundary Integral Equations and Related Problems\n\n### Mario Bebendorf and Ronald Kriemann\n\nContact the author: Please use for correspondence this email.\nSubmission date: 05. Mar. 2004\nPages: 25\npublished in: Computing and visualization in science, 8 (2005) 3/4, p. 121-135",
null,
"DOI number (of the published article): 10.1007/s00791-005-0001-x\nBibtex\nMSC-Numbers: 65D05, 65D15, 65F05, 65F30\nKeywords and phrases: integral equations, hierarchical matrices, parallel solvers\nThis article is concerned with the efficient numerical solution of Fredholm integral equations on a parallel computer with shared or distributed memory. Parallel algorithms for both, the approximation of the discrete operator by hierarchical matrices and the parallel matrix-vector multiplication of such matrices by a vector, are presented. The first algorithm has a complexity of order",
null,
", while the latter is of order",
null,
", where N and p are the number of unknowns and the number of processors, respectively. The",
null,
"-approximant needs",
null,
"units of storage on each processor."
]
| [
null,
"https://sfx.mpg.de/sfx_local/sfx.gif",
null,
"https://www.mis.mpg.de/fileadmin/preprint_img/2004/img_10_1a.gif",
null,
"https://www.mis.mpg.de/fileadmin/preprint_img/2004/img_10_2a.gif",
null,
"https://www.mis.mpg.de/fileadmin/preprint_img/2004/img_10_3a.gif",
null,
"https://www.mis.mpg.de/fileadmin/preprint_img/2004/img_10_4a.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8165187,"math_prob":0.81254834,"size":1134,"snap":"2022-40-2023-06","text_gpt3_token_len":271,"char_repetition_ratio":0.10619469,"word_repetition_ratio":0.0,"special_character_ratio":0.24779542,"punctuation_ratio":0.15121952,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96231437,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,8,null,8,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T08:30:29Z\",\"WARC-Record-ID\":\"<urn:uuid:ef9db5d5-5dc1-4f3d-b069-20d5dc4a19bd>\",\"Content-Length\":\"29174\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83a79eb5-a19f-4c0a-817b-ffc4cdb42f1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:91b0993c-a96a-495a-85f2-39545f59bff2>\",\"WARC-IP-Address\":\"194.95.185.89\",\"WARC-Target-URI\":\"https://www.mis.mpg.de/publications/preprints/2004/prepr2004-10.html\",\"WARC-Payload-Digest\":\"sha1:YOIHL4OTSGTRSKHJCZTC6USHG4UEY2OZ\",\"WARC-Block-Digest\":\"sha1:IGOKK5ICDIKUUKZLFTJRGORDCHICQANB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494974.98_warc_CC-MAIN-20230127065356-20230127095356-00511.warc.gz\"}"} |
http://portaldelfreelancer.com/New-York/interpreting-root-mean-square-error.html | [
"",
null,
"Address 85 Main St, Stamford, NY 12167 (607) 652-1600\n\ninterpreting root mean square error East Worcester, New York\n\nWould it be easy or hard to explain this model to someone else? Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. What are the legal consequences for a tourist who runs out of gas on the Autobahn? Just one way to get rid of the scaling, it seems.\n\nReply Karen February 22, 2016 at 2:25 pm Ruoqi, Yes, exactly. Are D&D PDFs sold in multiple versions of different quality? Those three ways are used the most often in Statistics classes. To do this, we use the root-mean-square error (r.m.s.\n\nMy initial response was it's just not available-mean square error just isn't calculated. I am still finding it a little bit challenging to understand what is the difference between RMSE and MBD. If you have less than 10 data points per coefficient estimated, you should be alert to the possibility of overfitting. There are also efficiencies to be gained when estimating multiple coefficients simultaneously from the same data.\n\nFind the Infinity Words! Linked 52 Understanding “variance” intuitively 26 A statistics book that explains using more images than equations Related 7Reliability of mean of standard deviations10Root mean square vs average absolute deviation?2Does BIAS equal Privacy policy About Wikipedia Disclaimers Contact Wikipedia Developers Cookie statement Mobile view Linear regression models Notes on linear regression analysis (pdf file) Introduction to linear regression analysis Mathematics of simple Why mount doesn't respect option ro How to photograph distant objects (10km)?\n\nHowever, when comparing regression models in which the dependent variables were transformed in different ways (e.g., differenced in one case and undifferenced in another, or logged in one case and unlogged Think of it this way: how large a sample of data would you want in order to estimate a single parameter, namely the mean? The statistics discussed above are applicable to regression models that use OLS estimation. The 13 Steps for Statistical Modeling in any Regression or ANOVA { 20 comments… read them below or add one } Noah September 19, 2016 at 6:20 am Hi am doing\n\nIt measures how far the aimpoint is away from the target. For the first, i.e., the question in the title, it is important to recall that RMSE has the same unit as the dependent variable (DV). Although the confidence intervals for one-step-ahead forecasts are based almost entirely on RMSE, the confidence intervals for the longer-horizon forecasts that can be produced by time-series models depend heavily on the Values of MSE may be used for comparative purposes.\n\nHow to unlink (remove) the special hardlink \".\" created for a folder? Reply ADIL August 24, 2014 at 7:56 pm hi, how method to calculat the RMSE, RMB betweene 2 data Hp(10) et Hr(10) thank you Reply Shailen July 25, 2014 at 10:12 For instance, by transforming it in a percentage: RMSE/(max(DV)-min(DV)) –R.Astur Apr 17 '13 at 18:40 That normalisation doesn't really produce a percentage (e.g. 1 doesn't mean anything in particular), If one model's errors are adjusted for inflation while those of another or not, or if one model's errors are in absolute units while another's are in logged units, their error\n\nSo you cannot justify if the model becomes better just by R square, right? These approximations assume that the data set is football-shaped. 
the bottom line is that you should put the most weight on the error measures in the estimation period--most often the RMSE (or standard error of the regression, which is RMSE That is: MSE = VAR(E) + (ME)^2.\n\nNow if your arrows scatter evenly arround the center then the shooter has no aiming bias and the mean square error is the same as the variance. You're always trying to minimize the error when building a model. Finally, remember to K.I.S.S. (keep it simple...) If two models are generally similar in terms of their error statistics and other diagnostics, you should prefer the one that is simpler and/or The residuals do still have a variance and there's no reason to not take a square root.\n\nThere is lots of literature on pseudo R-square options, but it is hard to find something credible on RMSE in this regard, so very curious to see what your books say. All rights reserved. error from the regression. Again, it depends on the situation, in particular, on the \"signal-to-noise ratio\" in the dependent variable. (Sometimes much of the signal can be explained away by an appropriate data transformation, before\n\nSo how to figure out based on data properties if the RMSE values really imply that our algorithm has learned something? –Shishir Pandey Apr 17 '13 at 8:07 1 Sure, Since an MSE is an expectation, it is not technically a random variable. Further, while the corrected sample variance is the best unbiased estimator (minimum mean square error among unbiased estimators) of variance for Gaussian distributions, if the distribution is not Gaussian then even The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n-p) for p regressors or (n-p-1) if an intercept is used. For more\n\nif i fited 3 parameters, i shoud report them as: (FittedVarable1 +- sse), or (FittedVarable1, sse) thanks Reply Grateful2U September 24, 2013 at 9:06 pm Hi Karen, Yet another great explanation. Reply Karen August 20, 2015 at 5:29 pm Hi Bn Adam, No, it's not. An equivalent null hypothesis is that R-squared equals zero. What is the normally accepted way to calculate these two measures, and how should I report them in a journal article paper?\n\nThey can be positive or negative as the predicted value under or over estimates the actual value. The caveat here is the validation period is often a much smaller sample of data than the estimation period. Since Karen is also busy teaching workshops, consulting with clients, and running a membership program, she seldom has time to respond to these comments anymore. How to say you go first in German What would You-Know-Who want with Lily Potter?\n\nAs before, you can usually expect 68% of the y values to be within one r.m.s. However, although the smaller the RMSE, the better, you can make theoretical claims on levels of the RMSE by knowing what is expected from your DV in your field of research. Likewise, it will increase as predictors are added if the increase in model fit is worthwhile. This is an easily computable quantity for a particular sample (and hence is sample-dependent).\n\nSuppose the sample units were chosen with replacement. The usual estimator for the mean is the sample average X ¯ = 1 n ∑ i = 1 n X i {\\displaystyle {\\overline {X}}={\\frac {1}{n}}\\sum _{i=1}^{n}X_{i}} which has an expected Related TILs: TIL 1869: How do we calculate linear fits in Logger Pro? 
Lower values of RMSE indicate better fit.\n\nIn order to initialize a seasonal ARIMA model, it is necessary to estimate the seasonal pattern that occurred in \"year 0,\" which is comparable to the problem of estimating a full Likewise, it will increase as predictors are added if the increase in model fit is worthwhile. I denoted them by , where is the observed value for the ith observation and is the predicted value. error, you first need to determine the residuals.\n\nWhen it is adjusted for the degrees of freedom for error (sample size minus number of model coefficients), it is known as the standard error of the regression or standard error For an unbiased estimator, the MSE is the variance of the estimator. up vote 20 down vote favorite 6 Suppose I have some dataset."
]
| [
null,
"http://portaldelfreelancer.com/maps/East Worcester_NY.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9235743,"math_prob":0.9254419,"size":7483,"snap":"2019-26-2019-30","text_gpt3_token_len":1668,"char_repetition_ratio":0.10536168,"word_repetition_ratio":0.01910828,"special_character_ratio":0.22116798,"punctuation_ratio":0.10295127,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98508304,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T14:35:42Z\",\"WARC-Record-ID\":\"<urn:uuid:2826f719-aa7f-4781-a0b5-fed2ad5958b4>\",\"Content-Length\":\"24843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5150250-d2cb-4486-ad41-1ef30fcf2d13>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d039f9e-7204-4371-b149-2521c71706b4>\",\"WARC-IP-Address\":\"104.31.89.191\",\"WARC-Target-URI\":\"http://portaldelfreelancer.com/New-York/interpreting-root-mean-square-error.html\",\"WARC-Payload-Digest\":\"sha1:LG7OUKGSXNV6I7KUXYP4OXMBPODKZRCC\",\"WARC-Block-Digest\":\"sha1:GBKUMDPEHQIFC7EXMQP3ZLISXV7MG3C7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998509.15_warc_CC-MAIN-20190617143050-20190617164636-00020.warc.gz\"}"} |
https://numerologybasics.com/2001/07/ | [
"## people with a 23 (King of Wands) life lesson\n\n### July 5, 2001",
null,
"people with a 23 life lesson:",
null,
"Willie Stargell born March 6th, 1940\n\nMarch 6th, 1940\n\n3 + 6 +1+9+4+0 = 23 = his life lesson = what he was here to learn = Leadership. Zest. Action. Exercise. Physical fitness. Working out. Gym. Calisthenics. Athletic. Sports. Entrepreneur. Self-starter. Enterprising.\n\nRoger Staubach\n\nborn February 5th, 1942 2 + 5 +1+9+4+2 = 23\n\nFloyd Patterson\n\nborn January 4th, 1935 1 + 4 +1+9+3+5 = 23\n\nMarlon Brando\n\nborn April 3rd, 1924 4 + 3 +1+9+2+4 = 23\n\nGary Cooper\n\nborn May 7th, 1901 5 + 7 +1+9+0+1 = 23\n\nChristie Brinkley\n\nborn February 2nd, 1954 2 + 2 +1+9+5+4 = 23\n\nJon Bon Jovi\n\nborn March 2nd, 1962 3 + 2 +1+9+6+2 = 23\n\nLana Turner\n\nborn February 8th, 1921 2 + 8 +1+9+2+1 = 23\n\nHarry Belafonte\n\nborn March 1st, 1927 3 + 1 +1+9+2+7 = 23\n\nWayne Newton\n\nborn April 3rd, 1942 4 + 3 +1+9+4+2 = 23\n\nWalter Matthau\n\nborn October 1st, 1920 10 + 1 +1+9+2+0 = 23\n\nRon Howard\n\nMarch 1st, 1954 3 + 1 +1+9+5+4 = 23\n\nStacy Keach\n\nborn June 2nd, 1941 6 + 2 +1+9+4+1 = 23\n\nRavi Shankar\n\nborn April 7th, 1920 4 + 7 +1+9+2+0 = 23\n\nCatherine Bach\n\nborn March 1st, 1954 3 + 1 +1+9+5+4 = 23\n\nSharon Bruneau\n\nborn February 1st, 1964 2 + 1 +1+9+6+4 = 23\n\nBetty Pariso\n\nborn January 1st, 1956 1 + 1 +1+9+5+6 = 23\n\nDirk Benedict\n\nborn March 1st, 1945 3 + 1 +1+9+4+5 = 23\n\nRichard Lugar\nborn April 4th, 1932 4 + 4 +1+9+3+2 = 23\n\nRuss Feingold\nborn March 2nd, 1953 3 + 2 +1+9+5+3 = 23\n\nEarl Scruggs\nborn January 6th, 1924 1 + 6 +1+9+2+4 = 23\n\nMitch Miller\nborn July 4th, 1911 7 + 4 +1+9+1+1 = 23\n\nRobert King Merton\nborn July 5th, 1910 7 + 5 +1+9+1+0 = 23\n\nFrancis Marion Smith\nborn February 2nd, 1846 2 + 2 +1+8+4+6 = 23\n\nThomas Wolfe\nborn October 3rd, 1900 10 + 3 +1+9+0+0 = 23\n\nNatalie Cole\nborn February 6th, 1950 2 + 6 +1+9+5+0 = 23\n\nVictor Borge\nborn January 3rd, 1909 1 + 3 +1+9+0+9 = 23\n\nShelley Berman\nborn February 3rd, 1926 2 + 3 +1+9+2+6 = 23\n\nTheodore Bikel\nborn May 2nd, 1924 5 + 2 +1+9+2+4 = 23\n\nAugustine Tolton\nborn April 1st, 1854 4 + 1 +1+8+5+4 = 23\n\nNigella Lawson\nborn January 6th, 1960 1 + 6 +1+9+6+0 = 23"
]
| [
null,
"https://i0.wp.com/www.abuddhistlibrary.com/Buddhism/F-%20Miscellaneous/General%20Miscellaneous/The%20Tarot/Kings%20and%20Queens/King%20and%20Queen%20of%20Wands_files/wands14.jpg",
null,
"https://edpetersonnumerology.files.wordpress.com/2001/07/5f409-stargell.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89370894,"math_prob":0.9924696,"size":2041,"snap":"2023-14-2023-23","text_gpt3_token_len":1034,"char_repetition_ratio":0.20127639,"word_repetition_ratio":0.023923445,"special_character_ratio":0.52866244,"punctuation_ratio":0.09437751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98893774,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T00:09:53Z\",\"WARC-Record-ID\":\"<urn:uuid:1d243c2b-dff5-4810-a43e-22025dd5cb80>\",\"Content-Length\":\"76862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff027a40-e461-4c6e-91d4-a60d2c321e20>\",\"WARC-Concurrent-To\":\"<urn:uuid:91c007f1-37e9-4bad-bc2a-ada58929e494>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://numerologybasics.com/2001/07/\",\"WARC-Payload-Digest\":\"sha1:XD5MDO4LYXMOEIDGPC5YPRRVW7KATRLK\",\"WARC-Block-Digest\":\"sha1:ZKI7LDMA6HTTMCAYMYXZGCLDJJPW4COK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224650409.64_warc_CC-MAIN-20230604225057-20230605015057-00078.warc.gz\"}"} |
https://us.sofatutor.com/mathematics/videos/the-slope-of-the-line-y-mx-b | [
"# The Slope of the Line y=mx+b",
null,
"",
null,
"Rate this video\n\nØ 4.5 / 2 ratings\n\nThe author",
null,
"Team Digital\n\n## DescriptionThe Slope of the Line y=mx+b\n\nAfter this lesson, you will be able to graph linear equations in slope-intercept form.\n\nThe lesson begins by teaching you how to write a linear equation in slope-intercept form. It leads you to learn how to identify the slope and the y-intercept from the equation. It concludes with showing you how to graph the equation by using the slope and y-intercept.\n\nLearn about slope of the line by seeing how Furious Films plan and shoot for their next blockbuster movie.\n\nThis video includes key concepts, notation, and vocabulary such as slope (the ratio of the change in vertical movement, or the change in x over the change in y); y-intercept (the value of y when the value of x is equal to zero); and slope-intercept form (y=mx+b, where m is the slope and b is the y-intercept).\n\nBefore watching this video, you should already be familiar with linear equations, properties of equality, and solving equations by isolating a variable.\n\nAfter watching this video, you will be prepared to solve real world problems that involve constant rates, which can be considered as the slope of a line when the problem is represented by an equation.\n\nCommon Core Standard(s) in focus: 8.EE.B.5 & 8.EE.B.6 A video intended for math students in the 8th grade Recommended for students who are 13 - 14 years old\n\n### TranscriptThe Slope of the Line y=mx+b\n\nThere's trouble on the set of FISTS OF DANGER, the new, big-budget action movie from Furious Films. The movie is missing something...but what? According to the latest research, adding scenes with adorable animals to any movie will increase ticket sales! This may sound like a weird idea for an action film but with some cute animals and the slope of the line y = mx + b, we could save this movie yet! Studies show that for each second of baby koala footage featured in a film, ticket sales go up.\n\nThis linear relationship is expressed in the following equation: 3y equals 500x where 'x' represents seconds of koala footage and 'y' is the number of tickets sold. To better understand this equation, let’s rewrite it using slope-intercept form, or 'y equals mx + b'. Using this form, it will be easier for us to understand how 'x' and 'y' change in relation to each other. To get our equation in this form, all we have to do is isolate the variable, 'y'. To get 'y' by itself, we can divide both sides of the equation by 3, thanks to the division property of equality. Great! Now our equation is in slope-intercept form, so we can easily identify the slope, 'm', which will always be the coefficient of 'x'. Here, the slope 'm' equals 500 over 3, or 500 thirds. Remember that slope represents the change in 'y' over the change in 'x'.So that means, for every 3 seconds of footage of baby koalas, we sell 500 more tickets.\n\nWait, what about the 'b' in our 'y equals mx + b'? The 'b' term tells us the y-intercept, or the value of 'y' when 'x' equals zero. It may not look like it, but our equation actually DOES have a 'b' term. Here, 'b' is just zero. So that means when 'x' is 0, 'y' is also 0, so the graph of this line passes through the origin at (0, 0). To sketch this graph, start with a known point on the line, like (0, 0). Then go up and over according to your slope. Up for change in 'y', over for change in 'x'. Since our slope is 500 over 3, we go up 500 and over 3. Now draw a straight line through the points! Whoa, we've got to get some more baby koalas in on this project!\n\nWhat other animals can we get on board? 
Research also shows that having a miniature teacup pig on screen can boost ticket sales. This can be represented by using the equation, 120x minus 4y equals 20 where 'x' represents seconds of pig footage and 'y' is the number of tickets sold. Let's transform this equation into 'y equals mx + b' so we can figure out how the teacup pigs are going to affect ticket sales. To get 'y' alone, subtract 120x from both sides and divide every term in the equation by negative 4, to cancel out the coefficient of 'y'. Finally, we can rearrange the terms on the right side of the equation so it matches the more familiar form of 'y equals mx plus b' that we recognize. Now we can easily identify the slope, 'm', as the coefficient of 'x'. The slope tells us that for every second of teacup pig footage, we sell 30 more tickets. We can also easily identify 'b', or the y-intercept, at (0, -5). To sketch this graph, let's start with the point we just found. Now use what we know about the slope and go up 30, to the right 1, and draw a line through these points. Those pigs have got a future in Hollywood!\n\nBut not all animals are so good for the movies. The notorious honey badger has a crippling effect on ticket sales. Let's take a look at this formula. 3x plus 5y equals 10 where 'x' represents seconds of honey badger footage and 'y' is the number of tickets sold. Let's isolate the variable, 'y'. Start by subtracting 3x from both sides and divide all the terms in the equation by 5. Much better! The slope in this equation is negative 3 over 5. This means for every 5 seconds of honey badger screen time, the movie actually sells 3 fewer tickets. We can also easily identify the y-intercept, or 'b', at (0, 2). So how do we graph this line? Start with a point you know, like the y-intercept at (0, 2). Because the slope is negative, we then go down 3, to the right 5. Now we draw a line through these points. Man, those badgers are really dragging sales down!\n\nTo review: We can write linear equations using the form 'y = mx + b'. This is known as Slope-Intercept Form. In this form, 'm' represents the slope of the line, or change in 'y' over change in 'x', and 'b' is the y-intercept, or where the line crosses the y-axis.\n\nUsing the market research, the studio executives have gone ahead and reshot their film using a new title... …TROUBLE CUTIES? Man, Hollywood really has changed these cute little guys...\n\n## 1 comment",
null,
"this website changed my life\n\nPosted by Nunnally, almost 4 years ago\n\n## The Slope of the Line y=mx+b Exercise\n\nWould you like to practice what you’ve just learned? Practice problems for this video The Slope of the Line y=mx+b help you practice and recap your knowledge.\n• ### Recall how to write equations in slope-intercept form.\n\nHints\n\nIsolate the variable $y$ to put the linear relationship into slope-intercept form.\n\nSubtract the $x$ term from both sides of the equation.\n\nDivide every term by the coefficient of $y$ to cancel it out.\n\nSolution\n\nAll linear relationships can be written in slope-intercept form, $y=mx+b$, by isolating the variable $y$.\n\n1. For the koala example, the linear relationship is $3y=500x$.\n\n• Divide both sides by $3$.\n• $y=\\frac{500}{3}x$\n2. For the pig example, the linear relationship is $120x-4y=20$.\n• Subtract $120x$ from both sides of the equation.\n• $-4y=20-120x$\n• Divide all terms by $-4$.\n• $y=-5+30x$.\n• Rearrange the terms on the right.\n• $y=30x-5$\n3. For the honey badger example, the linear relationship is $3x+5y=10$.\n• Subtract $3x$ from both sides of the equation.\n• $5y=10-3x$.\n• Divide all terms by 5.\n• $y=2-\\frac{3}{5}x$\n• Rearrange the terms on the right.\n• $y=-\\frac{3}{5}x+2$\n\n• ### Identify the slope and the $y$-intercept of each equation.\n\nHints\n\nSlope-intercept form is $y=mx+b$.\n\nThe slope is $m$ and the $y$-intercept is $b$.\n\nTo get $y$ alone, subtract the $x$ term if necessary, then divide each term by the coefficient of $y$.\n\nSolution\n\nSlope-intercept form is $y=mx+b$ where $m$ is the slope and $b$ is the $y$-intercept. To find $m$ and $b$, put all of the equations in slope-intercept form $y=mx+b$.\n\n• $y=2x+1$ is in slope-intercept form. Therefore, the slope is $2$ and the $y$-intercept is $1$.\n• $y=\\frac{8}{5}x$ is in slope-intercept form. Therefore, the slope is $\\frac{8}{5}$ and the $y$-intercept is $0$ since there is no $b$ in the equation.\n• $y=-\\frac{1}{2}x-3$ is in slope-intercept form. Therefore the slope is $-\\frac{1}{2}$ and the $y$-intercept is -3.\n• $4x+y=1$ is not in slope-intercept form. Subtracting $4x$ from both sides of the equation yields $y=1-4x$. Then, rearrange the terms on the right so the equation is $y=-4x+1$. Now it looks like $y=mx+b$ which means the slope is $-4$ and the $y$-intercept is $1$.\n• $-2y=3x-6$ is not in slope-intercept form. To get $y$ alone, divide every term by $-2$: $y=-\\frac{3}{2}+3$. The equation is now in the form $y=mx+b$ which means the slope is $-\\frac{3}{2}$ and the $y$-intercept is $3$.\n• ### Match the equations with their slope-intercept form.\n\nHints\n\nIsolate the variable $y$ by subtracting the $x$ term from both sides.\n\nRearrange the terms or flip the equation to make it look like $y=mx+b$.\n\nDivide every term by the coefficient of $y$.\n\nSolution\n\n1. $-2y+3x=8$\n\n• Subtract $3x$ from both sides\n• $-2y=8-3x$\n• Divide every term by $-2$\n• $y=-4+\\frac{3}{2}x$\n• Rearrange the terms to look like $y=mx+b$\n• $y=\\frac{3}{2}x-4$\n2. $12x+3y=6$\n• Subtract $12x$ from both sides\n• $3y=6-12x$\n• Divide every term by $3$\n• $y=2-4x$\n• Rearrange the terms on the right\n• $y=-4x+2$\n3. $\\frac{1}{3}y=2x$\n• Multiply both sides by $3$\n• $y=6x$\n4. $5y=20x+30$\n• Divide every term by 5\n• $y=4x+6$\n5. $9x+8=3y$\n• Flip the equation\n• $3y=9x+8$\n• Divide every term by $3$\n• $y=3x+\\frac{8}{3}$\n6. 
$4x=6y$\n• Flip the equation\n• $6y=4x$\n• Divide both sides by $6$ and simplify\n• $y=\\frac{2}{3}x$\n\n• ### Graph the equation $-y-3x+4=0$.\n\nHints\n\nThe graph of a line is $y=mx+b$, where $m$ is the slope and $b$ is the $y$-intercept.\n\nTo graph the $y$-intercept, plug $x=0$ into the equation.\n\nThe slope, $m$, is the change in $y$ divided by the change in $x$.\n\nSolution\n\nThe line $-y-3x+4=0$ can be rewritten in slope-intercept form by solving for $y$.\n\n$~$\n\nTo solve for $y$:\n\n1. Add $3x$ to both sides: $-y+4=3x$.\n\n2. Subtract $4$ from both sides: $-y=3x-4$.\n\n3. Divide both sides by $-1$: $y=-3x+4$.\n\n$~$\n\nTo identify the slope and $y$-intercept:\n\n• The slope is $m=-3$.\n• The $y$-intercept is $b=4$.\n$~$\n\nTo identify the point at $x=1$:\n\n• Start at the $y$-intercept, $(0,4)$.\n• As the slope is $m=-3$, if we subtract $3$ from the $y$-coordinate and add $1$ to the $x$-coordinate of a point on our line, we get another point on our line.\n• Doing this with our $y$-intercept, we get $(0+1,4-3)=(1,1)$, which is the point on our line at $x=1$.\n• Use the slope $m=-3$ to find the point at $x=1$.\n$~$\n\nTo graph:\n\n1. Plot the $y$-intercept $(0,4)$.\n\n2. Plot the point $(1,1)$.\n\n3. Draw a line connecting these two points.\n\n• ### Determine if the equation is written in slope-intercept form or not.\n\nHints\n\nSlope-intercept form is $y=mx+b$.\n\nSlope-intercept form can also look like $y=b+mx$, $mx+b=y$ or $b+mx=y$.\n\nIf there is no $y$-intercept $b$, slope-intercept form will look like $y=mx$.\n\nSolution\n\nSlope-intercept form is $y=mx+b$. However, the terms can be rearranged and still be in slope-intercept form. The following equations are $4$ different ways to represent slope-intercept form.\n\n• $y=mx+b$: the original slope-intercept form\n• $y=b+mx$: the terms on the right side are switched\n• $mx+b=y$: the equation is flipped so that y is on the right\n• $b+mx=y$: the equation is flipped with y on the right, and the $mx$ and $b$ terms are switched\nIf there is no $y$-intercept, the equation will look like $y=mx$ or $mx=y$.\n\nThe following equations are in slope-intercept form because they match one of the equations listed above.\n\n• $y=5x-3$\n• $y=4+2x$\n• $y=-\\frac{2}{5}x$\n• $y=-4x$\n• $1-3x=y$\n• $-x+1=y$\nThe following equations are not in slope-intercept form because they do not match any of the slope-intercept form equations above.\n• $x=-\\frac{1}{2}y+1$\n• $\\frac{3}{2}x+y=2$\n• $2y=x+1$\n• $-y=2x-3$\n• $7y=2x$\n\n• ### Determine the line.\n\nHints\n\nSlope-intercept form is $y=mx+b$, where $m$ is the slope and $b$ is the $y$-intercept.\n\nIf $x=0$ in the point $(x,y)$, then the $y$-coordinate is the $y$-intercept $b$.\n\nIf $x\\neq 0$ in the point $(x,y)$, substitute the slope $m$ and the point $(x,y)$ into $y=mx+b$, and solve for $b$.\n\nSolution\n\nSlope-intercept form, $y=mx+b$, consists of a point $(x,y)$, a slope $m$, and a $y$-intercept $b$. Given a point and a slope you can uniquely determine a line.\n\n1. $(0,-4)$, $m=\\frac{1}{2}$\n\n• Since $x=0$, we know that the $y$-intercept is $-4$, $b=-4$\n• Now we can substitute $m$ and $b$ into $y=mx+b$\n• $y=\\frac{1}{2}x-4$\n2. $(0,\\frac{35}{2})$, $m=\\frac{3}{5}$\n• Since $x=0$, we know that the $y$-intercept is $\\frac{35}{2}$, $b=\\frac{35}{2}$\n• Now we can substitute $m$ and $b$ into $y=mx+b$\n• $y=\\frac{3}{5}x+\\frac{35}{2}$\n3.
$(1,2)$, $m=2$\n• Since $x\\neq 0$, we substitute the values of $x$, $y$, and $m$ into $y=mx+b$, to solve for $b$.\n• $2=2(1)+b$\n• $2=2+b$\n• $0=b$\n• Now that we know $b=0$, we can substitute $b$ and $m$ into $y=mx+b$\n• $y=2x+0$\n• $y=2x$\n4. $(-1,-3)$, $m=-2$\n• Since $x\\neq 0$, we substitute the values of $x$, $y$, and $m$ into $y=mx+b$, to solve for $b$.\n• $-3=-2(-1)+b$\n• $-3=2+b$\n• $-5=b$\n• Now that we know $b=-5$, we can substitute $b$ and $m$ into $y=mx+b$\n• $y=-2x-5$\n5. $(0,-6)$, $m=-\\frac{17}{3}$\n• Since $x=0$, we know that the $y$-intercept is $-6$, $b=-6$\n• Now we can substitute $m$ and $b$ into $y=mx+b$\n• $y=-\\frac{17}{3}x-6$\n6. $(4,4)$, $m=6$\n• Since $x\\neq 0$, we substitute the values of $x$, $y$, and $m$ into $y=mx+b$, to solve for $b$.\n• $4=6(4)+b$\n• $4=24+b$\n• $-20=b$\n• Now that we know $b=-20$, we can substitute $b$ and $m$ into $y=mx+b$\n• $y=6x-20$\n7. $(-4,3)$, $m=\\frac{1}{4}$\n• Since $x\\neq 0$, we substitute the values of $x$, $y$, and $m$ into $y=mx+b$, to solve for $b$.\n• $3=\\frac{1}{4}(-4)+b$\n• $3= -1+b$\n• $4=b$\n• Now that we know $b=4$, we can substitute $b$ and $m$ into $y=mx+b$\n• $y=\\frac{1}{4}x+4$"
]
| [
null,
"https://d1u2r2pnzqmal.cloudfront.net/videos/pictures/21604/normal/US21604.jpg",
null,
"https://dkckbwr4t7ug6.cloudfront.net/assets/application/videos/visitors/exercise_placeholder-d03f1e82c2fdfd1f690d75166c0c923692831cc73444edcffe26306d0b8f19ae.png",
null,
"https://dkckbwr4t7ug6.cloudfront.net/assets/application/layouts/lazy_load_placeholder-131eb55a8b4e203b5c63caa4f2fd5d218ba8ff4bb32caa6f6e055df07beb4845.svg",
null,
"https://dkckbwr4t7ug6.cloudfront.net/assets/application/characters/people/student-4302de8562d5c758425f5e53217e5aa15191f2913e13b5039f8e0831154bb50d.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8798286,"math_prob":0.99993813,"size":12794,"snap":"2021-43-2021-49","text_gpt3_token_len":4074,"char_repetition_ratio":0.18279906,"word_repetition_ratio":0.11678508,"special_character_ratio":0.32366735,"punctuation_ratio":0.111032784,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000076,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T15:53:11Z\",\"WARC-Record-ID\":\"<urn:uuid:c01f85b7-6a43-402f-bc79-3e9b12f64ee1>\",\"Content-Length\":\"134375\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6dbcd19e-c8b4-43aa-9be3-1f00eb8e4e11>\",\"WARC-Concurrent-To\":\"<urn:uuid:21306dde-ed50-4296-a11c-87d0964cb071>\",\"WARC-IP-Address\":\"52.29.228.5\",\"WARC-Target-URI\":\"https://us.sofatutor.com/mathematics/videos/the-slope-of-the-line-y-mx-b\",\"WARC-Payload-Digest\":\"sha1:GIKLG6WFNQVLRJGKVVYOPCIHDLLQ6XE7\",\"WARC-Block-Digest\":\"sha1:WNALCDYO477KM4GHESTUI6ROPF3ZJ45L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323586043.75_warc_CC-MAIN-20211024142824-20211024172824-00042.warc.gz\"}"} |
https://gitlab.linphone.org/BC/public/external/libvpx/commit/773768ae27bfe427f153c8e6fadb3912b8f94c1f | [
"### Removed B_MODE_INFO\n\n```Declared the bmi in BLOCKD as a union instead of B_MODE_INFO.\nThen removed B_MODE_INFO completely.\n\nChange-Id: Ieb7469899e265892c66f7aeac87b7f2bf38e7a67```\nparent 9e4f76c1\n ... ... @@ -137,12 +137,6 @@ typedef enum modes for the Y blocks to the left and above us; for interframes, there is a single probability table. */ typedef struct { B_PREDICTION_MODE mode; int_mv mv; } B_MODE_INFO; union b_mode_info { B_PREDICTION_MODE as_mode; ... ... @@ -182,8 +176,6 @@ typedef struct short *dqcoeff; unsigned char *predictor; short *diff; short *reference; short *dequant; /* 16 Y blocks, 4 U blocks, 4 V blocks each with 16 entries */ ... ... @@ -197,14 +189,13 @@ typedef struct int eob; B_MODE_INFO bmi; union b_mode_info bmi; } BLOCKD; typedef struct { DECLARE_ALIGNED(16, short, diff); /* from idct diff */ DECLARE_ALIGNED(16, unsigned char, predictor); /* not used DECLARE_ALIGNED(16, short, reference); */ DECLARE_ALIGNED(16, short, qcoeff); DECLARE_ALIGNED(16, short, dqcoeff); DECLARE_ALIGNED(16, char, eobs); ... ... @@ -284,19 +275,15 @@ extern void vp8_setup_block_dptrs(MACROBLOCKD *x); static void update_blockd_bmi(MACROBLOCKD *xd) { int i; if (xd->mode_info_context->mbmi.mode == SPLITMV) { for (i = 0; i < 16; i++) { BLOCKD *d = &xd->block[i]; d->bmi.mv.as_int = xd->mode_info_context->bmi[i].mv.as_int; } }else if (xd->mode_info_context->mbmi.mode == B_PRED) int is_4x4; is_4x4 = (xd->mode_info_context->mbmi.mode == SPLITMV) || (xd->mode_info_context->mbmi.mode == B_PRED); if (is_4x4) { for (i = 0; i < 16; i++) { BLOCKD *d = &xd->block[i]; d->bmi.mode = xd->mode_info_context->bmi[i].as_mode; xd->block[i].bmi = xd->mode_info_context->bmi[i]; } } } ... ...\n ... ... @@ -355,7 +355,7 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi, do /* for each subset j */ { int_mv leftmv, abovemv; B_MODE_INFO bmi; int_mv blockmv; int k; /* first block in subset j */ int mv_contz; k = vp8_mbsplit_offset[s][j]; ... ... @@ -364,30 +364,30 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi, abovemv.as_int = above_block_mv(mi, k, mis); mv_contz = vp8_mv_cont(&leftmv, &abovemv); switch (bmi.mode = (B_PREDICTION_MODE) sub_mv_ref(bc, vp8_sub_mv_ref_prob2 [mv_contz])) /*pc->fc.sub_mv_ref_prob))*/ switch ((B_PREDICTION_MODE) sub_mv_ref(bc, vp8_sub_mv_ref_prob2 [mv_contz])) /*pc->fc.sub_mv_ref_prob))*/ { case NEW4X4: read_mv(bc, &bmi.mv.as_mv, (const MV_CONTEXT *) mvc); bmi.mv.as_mv.row += best_mv.as_mv.row; bmi.mv.as_mv.col += best_mv.as_mv.col; read_mv(bc, &blockmv.as_mv, (const MV_CONTEXT *) mvc); blockmv.as_mv.row += best_mv.as_mv.row; blockmv.as_mv.col += best_mv.as_mv.col; #ifdef VPX_MODE_COUNT vp8_mv_cont_count[mv_contz]++; #endif break; case LEFT4X4: bmi.mv.as_int = leftmv.as_int; blockmv.as_int = leftmv.as_int; #ifdef VPX_MODE_COUNT vp8_mv_cont_count[mv_contz]++; #endif break; case ABOVE4X4: bmi.mv.as_int = abovemv.as_int; blockmv.as_int = abovemv.as_int; #ifdef VPX_MODE_COUNT vp8_mv_cont_count[mv_contz]++; #endif break; case ZERO4X4: bmi.mv.as_int = 0; blockmv.as_int = 0; #ifdef VPX_MODE_COUNT vp8_mv_cont_count[mv_contz]++; #endif ... ... @@ -396,7 +396,7 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi, break; } mbmi->need_to_clamp_mvs = vp8_check_mv_bounds(&bmi.mv, mbmi->need_to_clamp_mvs = vp8_check_mv_bounds(&blockmv, mb_to_left_edge, mb_to_right_edge, mb_to_top_edge, ... ... 
@@ -412,7 +412,7 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi, fill_offset = &mbsplit_fill_offset[s][(unsigned char)j * mbsplit_fill_count[s]]; do { mi->bmi[ *fill_offset].mv.as_int = bmi.mv.as_int; mi->bmi[ *fill_offset].mv.as_int = blockmv.as_int; fill_offset++; }while (--fill_count); } ... ...\n ... ... @@ -288,7 +288,7 @@ static void decode_macroblock(VP8D_COMP *pbi, MACROBLOCKD *xd, BLOCKD *b = &xd->block[i]; RECON_INVOKE(RTCD_VTABLE(recon), intra4x4_predict) (b, b->bmi.mode, b->predictor); (b, b->bmi.as_mode, b->predictor); if (xd->eobs[i] > 1) { ... ... @@ -974,8 +974,6 @@ int vp8_decode_frame(VP8D_COMP *pbi) vpx_memset(pc->above_context, 0, sizeof(ENTROPY_CONTEXT_PLANES) * pc->mb_cols); vpx_memcpy(&xd->block.bmi, &xd->mode_info_context->bmi, sizeof(B_MODE_INFO)); #if CONFIG_MULTITHREAD if (pbi->b_multithreaded_rd && pc->multi_token_partition != ONE_PARTITION) { ... ...\n ... ... @@ -186,7 +186,9 @@ static void decode_macroblock(VP8D_COMP *pbi, MACROBLOCKD *xd, int mb_row, int m for (i = 0; i < 16; i++) { BLOCKD *b = &xd->block[i]; vp8mt_predict_intra4x4(pbi, xd, b->bmi.mode, b->predictor, mb_row, mb_col, i); vp8mt_predict_intra4x4(pbi, xd, b->bmi.as_mode, b->predictor, mb_row, mb_col, i); if (xd->eobs[i] > 1) { DEQUANT_INVOKE(&pbi->dequant, idct_add) ... ...\n ... ... @@ -1008,28 +1008,32 @@ static void pack_inter_mode_mvs(VP8_COMP *const cpi) do { const B_MODE_INFO *const b = cpi->mb.partition_info->bmi + j; B_PREDICTION_MODE blockmode; int_mv blockmv; const int *const L = vp8_mbsplits [mi->partitioning]; int k = -1; /* first block in subset j */ int mv_contz; int_mv leftmv, abovemv; blockmode = cpi->mb.partition_info->bmi[j].mode; blockmv = cpi->mb.partition_info->bmi[j].mv; while (j != L[++k]) if (k >= 16) assert(0); leftmv.as_int = left_block_mv(m, k); abovemv.as_int = above_block_mv(m, k, mis); mv_contz = vp8_mv_cont(&leftmv, &abovemv); write_sub_mv_ref(w, b->mode, vp8_sub_mv_ref_prob2 [mv_contz]); //pc->fc.sub_mv_ref_prob); write_sub_mv_ref(w, blockmode, vp8_sub_mv_ref_prob2 [mv_contz]); if (b->mode == NEW4X4) if (blockmode == NEW4X4) { #ifdef ENTROPY_STATS active_section = 11; #endif write_mv(w, &b->mv.as_mv, &best_mv, (const MV_CONTEXT *) mvc); write_mv(w, &blockmv.as_mv, &best_mv, (const MV_CONTEXT *) mvc); } } while (++j < cpi->mb.partition_info->count); ... ...\n ... ... @@ -54,7 +54,11 @@ typedef struct typedef struct { int count; B_MODE_INFO bmi; struct { B_PREDICTION_MODE mode; int_mv mv; } bmi; } PARTITION_INFO; typedef struct ... ...\n ... ... @@ -272,6 +272,7 @@ static void build_activity_map( VP8_COMP *cpi ) // Activity masking based on Tim T's original code void vp8_activity_masking(VP8_COMP *cpi, MACROBLOCK *x) { unsigned int a; unsigned int b; unsigned int act = *(x->mb_activity_ptr); ... ... @@ -477,24 +478,9 @@ void encode_mb_row(VP8_COMP *cpi, x->mb_activity_ptr++; x->mb_norm_activity_ptr++; if(cm->frame_type != INTRA_FRAME) { if (xd->mode_info_context->mbmi.mode != B_PRED) { for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i].mv.as_int = xd->block[i].bmi.mv.as_int; }else { for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i].as_mode = xd->block[i].bmi.mode; } } else { if(xd->mode_info_context->mbmi.mode != B_PRED) for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i].as_mode = xd->block[i].bmi.mode; } /* save the block info */ for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i] = xd->block[i].bmi; // adjust to the next column of macroblocks x->src.y_buffer += 16; ... ...\n ... ... 
@@ -36,7 +36,7 @@ void vp8_encode_intra4x4block(const VP8_ENCODER_RTCD *rtcd, BLOCK *be = &x->block[ib]; RECON_INVOKE(&rtcd->common->recon, intra4x4_predict) (b, b->bmi.mode, b->predictor); (b, b->bmi.as_mode, b->predictor); ENCODEMB_INVOKE(&rtcd->encodemb, subb)(be, b, 16); ... ... @@ -89,19 +89,19 @@ void vp8_encode_intra16x16mby(const VP8_ENCODER_RTCD *rtcd, MACROBLOCK *x) switch (x->e_mbd.mode_info_context->mbmi.mode) { case DC_PRED: d->bmi.mode = B_DC_PRED; d->bmi.as_mode = B_DC_PRED; break; case V_PRED: d->bmi.mode = B_VE_PRED; d->bmi.as_mode = B_VE_PRED; break; case H_PRED: d->bmi.mode = B_HE_PRED; d->bmi.as_mode = B_HE_PRED; break; case TM_PRED: d->bmi.mode = B_TM_PRED; d->bmi.as_mode = B_TM_PRED; break; default: d->bmi.mode = B_DC_PRED; d->bmi.as_mode = B_DC_PRED; break; } } ... ...\n ... ... @@ -232,23 +232,9 @@ THREAD_FUNCTION thread_encoding_proc(void *p_data) x->mb_activity_ptr++; x->mb_norm_activity_ptr++; if(cm->frame_type != INTRA_FRAME) { if (xd->mode_info_context->mbmi.mode != B_PRED) { for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i].mv.as_int = xd->block[i].bmi.mv.as_int; }else { for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i].as_mode = xd->block[i].bmi.mode; } } else { if(xd->mode_info_context->mbmi.mode != B_PRED) for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i].as_mode = xd->block[i].bmi.mode; } /* save the block info */ for (i = 0; i < 16; i++) xd->mode_info_context->bmi[i] = xd->block[i].bmi; // adjust to the next column of macroblocks x->src.y_buffer += 16; ... ...\n ... ... @@ -100,7 +100,7 @@ static int encode_intra(VP8_COMP *cpi, MACROBLOCK *x, int use_dc_pred) { for (i = 0; i < 16; i++) { x->e_mbd.block[i].bmi.mode = B_DC_PRED; x->e_mbd.block[i].bmi.as_mode = B_DC_PRED; vp8_encode_intra4x4block(IF_RTCD(&cpi->rtcd), x, i); } } ... ...\n ... ... @@ -47,7 +47,6 @@ extern unsigned int (*vp8_get16x16pred_error)(unsigned char *src_ptr, int src_st extern unsigned int (*vp8_get4x4sse_cs)(unsigned char *src_ptr, int source_stride, unsigned char *ref_ptr, int recon_stride); extern int vp8_rd_pick_best_mbsegmentation(VP8_COMP *cpi, MACROBLOCK *x, MV *best_ref_mv, int best_rd, int *, int *, int *, int, int *mvcost, int, int fullpixel); extern int vp8_cost_mv_ref(MB_PREDICTION_MODE m, const int near_mv_ref_ct); extern void vp8_set_mbmode_and_mvs(MACROBLOCK *x, MB_PREDICTION_MODE mb, int_mv *mv); int vp8_skip_fractional_mv_step(MACROBLOCK *mb, BLOCK *b, BLOCKD *d, ... ... @@ -215,7 +214,8 @@ static int pick_intra4x4block( *best_mode = mode; } } b->bmi.mode = (B_PREDICTION_MODE)(*best_mode); b->bmi.as_mode = (B_PREDICTION_MODE)(*best_mode); vp8_encode_intra4x4block(rtcd, x, ib); return best_rd; } ... ... @@ -251,7 +251,7 @@ int vp8_pick_intra4x4mby_modes cost += r; distortion += d; mic->bmi[i].as_mode = xd->block[i].bmi.mode = best_mode; mic->bmi[i].as_mode = best_mode; // Break out case where we have already exceeded best so far value // that was passed in ... ... @@ -443,7 +443,7 @@ void vp8_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, BLOCK *b = &x->block; BLOCKD *d = &x->e_mbd.block; MACROBLOCKD *xd = &x->e_mbd; B_MODE_INFO best_bmodes; union b_mode_info best_bmodes; MB_MODE_INFO best_mbmode; int_mv best_ref_mv; ... ... 
@@ -485,6 +485,7 @@ void vp8_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, vpx_memset(nearest_mv, 0, sizeof(nearest_mv)); vpx_memset(near_mv, 0, sizeof(near_mv)); vpx_memset(&best_mbmode, 0, sizeof(best_mbmode)); vpx_memset(&best_bmodes, 0, sizeof(best_bmodes)); // set up all the refframe dependent pointers. ... ... @@ -885,7 +886,7 @@ void vp8_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, if (this_mode == B_PRED) for (i = 0; i < 16; i++) { vpx_memcpy(&best_bmodes[i], &x->e_mbd.block[i].bmi, sizeof(B_MODE_INFO)); best_bmodes[i].as_mode = x->e_mbd.block[i].bmi.as_mode; } // Testing this mode gave rise to an improvement in best error score. Lower threshold a bit for next time ... ... @@ -953,10 +954,11 @@ void vp8_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, } if (x->e_mbd.mode_info_context->mbmi.mode == B_PRED) { for (i = 0; i < 16; i++) { x->e_mbd.block[i].bmi.mode = best_bmodes[i].mode; x->e_mbd.block[i].bmi.as_mode = best_bmodes[i].as_mode; } } update_mvcount(cpi, &x->e_mbd, &frame_best_ref_mv[xd->mode_info_context->mbmi.ref_frame]); }\n ... ... @@ -650,7 +650,7 @@ static int rd_pick_intra4x4block( vpx_memcpy(best_dqcoeff, b->dqcoeff, 32); } } b->bmi.mode = (B_PREDICTION_MODE)(*best_mode); b->bmi.as_mode = (B_PREDICTION_MODE)(*best_mode); IDCT_INVOKE(IF_RTCD(&cpi->rtcd.common->idct), idct16)(best_dqcoeff, b->diff, 32); RECON_INVOKE(IF_RTCD(&cpi->rtcd.common->recon), recon)(best_predictor, b->diff, *(b->base_dst) + b->dst, b->dst_stride); ... ... @@ -1398,8 +1398,7 @@ static int vp8_rd_pick_best_mbsegmentation(VP8_COMP *cpi, MACROBLOCK *x, { BLOCKD *bd = &x->e_mbd.block[i]; bd->bmi.mv.as_mv = bsi.mvs[i].as_mv; bd->bmi.mode = bsi.modes[i]; bd->bmi.mv.as_int = bsi.mvs[i].as_int; bd->eob = bsi.eobs[i]; } ... ... @@ -1714,7 +1713,7 @@ void vp8_rd_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, int BLOCK *b = &x->block; BLOCKD *d = &x->e_mbd.block; MACROBLOCKD *xd = &x->e_mbd; B_MODE_INFO best_bmodes; union b_mode_info best_bmodes; MB_MODE_INFO best_mbmode; PARTITION_INFO best_partition; int_mv best_ref_mv; ... ... @@ -1758,6 +1757,7 @@ void vp8_rd_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, int unsigned char *v_buffer; vpx_memset(&best_mbmode, 0, sizeof(best_mbmode)); vpx_memset(&best_bmodes, 0, sizeof(best_bmodes)); if (cpi->ref_frame_flags & VP8_LAST_FLAG) { ... ... @@ -2319,10 +2319,12 @@ void vp8_rd_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, int vpx_memcpy(&best_mbmode, &x->e_mbd.mode_info_context->mbmi, sizeof(MB_MODE_INFO)); vpx_memcpy(&best_partition, x->partition_info, sizeof(PARTITION_INFO)); for (i = 0; i < 16; i++) { vpx_memcpy(&best_bmodes[i], &x->e_mbd.block[i].bmi, sizeof(B_MODE_INFO)); } if ((this_mode == B_PRED) || (this_mode == SPLITMV)) for (i = 0; i < 16; i++) { best_bmodes[i] = x->e_mbd.block[i].bmi; } // Testing this mode gave rise to an improvement in best error score. Lower threshold a bit for next time cpi->rd_thresh_mult[mode_index] = (cpi->rd_thresh_mult[mode_index] >= (MIN_THRESHMULT + 2)) ? cpi->rd_thresh_mult[mode_index] - 2 : MIN_THRESHMULT; ... ... @@ -2396,7 +2398,7 @@ void vp8_rd_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset, int if (best_mbmode.mode == B_PRED) { for (i = 0; i < 16; i++) x->e_mbd.block[i].bmi.mode = best_bmodes[i].mode; x->e_mbd.block[i].bmi.as_mode = best_bmodes[i].as_mode; } if (best_mbmode.mode == SPLITMV) ... ..."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6760575,"math_prob":0.969433,"size":233,"snap":"2020-10-2020-16","text_gpt3_token_len":76,"char_repetition_ratio":0.15720524,"word_repetition_ratio":0.0,"special_character_ratio":0.2918455,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96146685,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-28T18:48:37Z\",\"WARC-Record-ID\":\"<urn:uuid:9d0fc010-d1e7-47b3-be85-868357646f0f>\",\"Content-Length\":\"674306\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff2fd8e1-1c78-4978-82f1-2040a1f3a945>\",\"WARC-Concurrent-To\":\"<urn:uuid:64a31af5-9192-4378-b54b-2b8a08223fa9>\",\"WARC-IP-Address\":\"54.37.202.230\",\"WARC-Target-URI\":\"https://gitlab.linphone.org/BC/public/external/libvpx/commit/773768ae27bfe427f153c8e6fadb3912b8f94c1f\",\"WARC-Payload-Digest\":\"sha1:VTOHG5OZX3IDO7UK7XWYM27UTBH6FSZI\",\"WARC-Block-Digest\":\"sha1:5G5E3ASI23BAKSVW5CRW7UKBJ53BT7O3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370492125.18_warc_CC-MAIN-20200328164156-20200328194156-00168.warc.gz\"}"} |
https://nige.wordpress.com/2009/08/25/casimir-force/ | [
"## Casimir force",
null,
"In the previous post, the Casimir force was discussed. It was discovered theoretically by Casimir in 1948 and experimentally proven in 1996 by Steve Lamoreaux and Dev Sen. Depending on the geometry of the situation, i.e., the shape of the plates, it can be either an attractive or a repulsive force.\n\nThe Casimir force between two parallel flat conducting metal conductors is attractive because the full spectrum (all wavelengths) of electromagnetic radiation fluctuations in the vacuum bombard the outside area of the plates (pushing them together), but only wavelengths smaller than the distance between the plates can arise in the space between the plates:",
null,
"In other words, the shortest wavelengths of the “zero-point” (ground state) electromagnetic energy fluctuations of the vacuum bombard each plate equally from each side, so there is no asymmetry and no net force. Only the longer wavelengths contribute to the Casimir force, for they don’t exist in the small space between the plates but do bombard the plates from the outside, pushing them together in the LeSage fashion.\n\nLooking at the Wiki page on the Casimir effect, they derive the Casimir force from the force equation\n\nF = dE/dx\n\nwhich we can use to formulate the basic (unshielded) QED force from Heisenberg’s minimal energy-time uncertainty relation, h-bar = Et.\n\nF = dE/dx = d(h-bar/t)/dx = d[h-bar/(x/c)]/dx = -h-bar*c/x^2.\n\nThis inverse-square law force is a factor of 1/alpha times the Coulomb force between two electrons (i.e. it doesn’t incorporate the polarized vacuum shielding factor of alpha).\n\nThe Casimir force calculation is considerably more complex. It relies on only wavelengths longer than the gap between two parallel metal plates pushing them together by acting on the outside, but not in between, the plates. According to the discussion of the Casimir force mechanism in Zee’s QFT textbook p. 66: ‘Physical plates cannot keep arbitrarily high frequency waves from leaking out.’ This is one way of explaining why short wavelengths don’t contribute to the Casimir effect significantly: like very high energy gamma rays, they penetrate straight through the thin Casimir plates without interacting significantly with them. Longer wavelengths, on the other hand, are all stopped and impart momentum, producing the Casimir force. However, Zee’s explanation – just like his flawed explanation of Feynman’s path integrals using the double-slit experiment (where he doesn’t seem to grasp that the diffraction of the photons is physically caused by the interaction of the photon with the electromagnetic fields from the physical material at the edges of the slits in the screen, which doesn’t exist in the vacuum below Schwinger’s 1.3 * 10^18 V/m IR cutoff) – is physically wrong.\n\nZee is wrong because if the shorter wavelengths were excluded from contributing by merely penetrating the Casimir plates, the wavelength cutoff from the integral would depend not on the distance between the plates, but just on the nature of the plates themselves (their mass per unit area for example, as in gamma radiation shielding).\n\nSo rather than Zee’s theory of the plates shielding (stopping) long wavelengths and letting short wavelengths (high frequencies) penetrate by leaking through and thus not contributing, the Casimir mechanism must be one that explains why the wavelength cutoff is equal to the distance between the plates.\n\nNotice that Zee is right that higher frequencies (shorter wavelengths) are more penetrating: I’m not disputing that. What I’m saying is that his shielding mechanism neglects to explain the wavelength dependence upon the distance between the plates.\n\nThe only way that the distance between the plates can determine the wavelengths contributing to the Casimir force is if wavelengths longer than the distance between the plates are unable to exist between the plates in the first place.\n\nIt simply doesn’t matter what happens to the shorter wavelengths, because it is only the longer wavelengths that contribute to the Casimir effect. 
Zee should be explaining what the mechanism is for the asymmetry in the energy density of the longer wavelengths, not discussing the shorter wavelengths, because it’s just the asymmetry between the energy density of the longer wavelengths on each side of each metal plate which causes the Casimir force.\n\nThe actual mechanism for the exclusion from the space between the plates of wavelengths longer than the distance between the plates is simply the waveguide effect. When you have a radio frequency resonator (source) and want to send the radiation to a dish antenna for transmission, you can pipe the radiation inside a conductive metal tube or box (a so-called ‘waveguide’) with an internal size at least equal to the wavelength you’re using. If the wavelength is longer than the diameter of the metal tube, the radiation can’t propagate: it is absorbed by the sides and heats them up.\n\nWhat happens is that the electromagnetic radiation is simply shorted out by the waveguide if its wavelength is bigger than the size of the waveguide, since the oscillation of the electric field strength in the photon is transverse (perpendicular to the direction it propagates in), not longitudinal. (Ignore the usual obfuscating ‘pictures’ of a Maxwellian photon in textbooks, since they are one-dimensional and merely plot electric field strength and magnetic field strength versus the one dimension of propagation. Anyone glancing at those pictures is misled that they are looking at a 3-dimensional spatial illustration of the photon, when in fact two axes are field strengths, not spatial dimensions! It’s as nutty as plotting a graph of speed versus distance for an oscillating pendulum, and claiming that the sine wave graph is the real 2-dimensional outline of the pendulum.)\n\nOn p. 66, Zee calculates the Casimir force to be\n\nF = dE/dx = Pi*h-bar*c/(24d^2),\n\nwhere d is the distance between the plates. Notice the inverse-square law! But the Wiki page on the Casimir effect calculates the following for the Casimir pressure (force per unit area):\n\nP = F/A = -Pi^2*h-bar*c/(240d^4).\n\nIf both Zee and Wiki are correct, then the effective area of the Casimir plates will be the Zee formula for F divided by the Wiki formula for P:\n\nA = F/P = 10d^2/Pi.\n\nHence the distance of separation between the plates is d = (Pi*A/10)^(1/2). For the simplest geometric situation of circular shaped plates with area A = Pi*R^2, the distance of separation is\n\nd = (Pi*A/10)^(1/2) = (Pi^2*R^2/10)^(1/2) = Pi*R/10^(1/2)."
]
| [
null,
"https://i0.wp.com/nige.wordpress.com/files/2009/08/casimir-mechanism.jpg",
null,
"https://nige.files.wordpress.com/2009/08/casimir-mechanism.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92246926,"math_prob":0.99075663,"size":6514,"snap":"2022-27-2022-33","text_gpt3_token_len":1378,"char_repetition_ratio":0.18095239,"word_repetition_ratio":0.024952015,"special_character_ratio":0.20402211,"punctuation_ratio":0.07245156,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99015576,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T19:04:18Z\",\"WARC-Record-ID\":\"<urn:uuid:d37c2403-1151-4941-88a2-0e36ec0761b9>\",\"Content-Length\":\"97716\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1469a06b-c2e3-4783-b80f-8a844e55feae>\",\"WARC-Concurrent-To\":\"<urn:uuid:4964d142-34b5-4a21-9399-0f7f68fb0816>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://nige.wordpress.com/2009/08/25/casimir-force/\",\"WARC-Payload-Digest\":\"sha1:HQJJFNN7NRCFCSWCDIWMI6GRF2ZCLB5S\",\"WARC-Block-Digest\":\"sha1:ICEQJXVCTNY2J4W4FYRK4ZITWBKVOJKJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104597905.85_warc_CC-MAIN-20220705174927-20220705204927-00020.warc.gz\"}"} |
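The ratio computed at the end of the Casimir-force post above is pure arithmetic, so it can be checked directly. The worked example below simply restates the post's two quoted formulas (Zee's force and Wikipedia's pressure, with the attractive minus sign dropped) and carries the division through; it is an editorial illustration, not an independent physics derivation, and the final step assumes nothing beyond circular plates of radius R.

```latex
F = \frac{\pi \hbar c}{24\, d^{2}}, \qquad
|P| = \frac{\pi^{2} \hbar c}{240\, d^{4}}
\;\;\Longrightarrow\;\;
A = \frac{F}{|P|}
  = \frac{\pi \hbar c}{24\, d^{2}} \cdot \frac{240\, d^{4}}{\pi^{2} \hbar c}
  = \frac{10\, d^{2}}{\pi}.

% Solving for the separation, and specialising to circular plates with A = \pi R^2:
d = \sqrt{\frac{\pi A}{10}}
  = \sqrt{\frac{\pi^{2} R^{2}}{10}}
  = \frac{\pi R}{\sqrt{10}} \approx 0.993\, R.
```

So, taking the post's premises at face value, the inferred plate separation comes out essentially equal to the plate radius, which is the content of the final line of the entry.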
https://hackernoon.com/data-structures-in-javascript-pt-1-binary-search-trees-2c231cf2c8e1 | [
"",
null,
"Data Structures in JavaScript pt 1 — Binary Search Trees by@xeracon\n\n# Data Structures in JavaScript pt 1 — Binary Search Trees",
null,
"Binary trees are important to know whether you are a budding developer or a back-end engineer fine-tuning performance. They are a fundamental family of data structures used in computer science to visualize various datasets. In this blog we’ll go over a high-level view of binary trees and some of their implementations in JavaScript. You can see my full repo here, in which I have begun coding out other data structures such as hash tables and linked lists, which I plan on blogging about over the next couple of weeks — https://github.com/mega0319/data-structures\n\nBinary trees tend to be recursive in nature. Think about it. The tree starts at a root node and must adhere to the rule of only having up to two child nodes. The left child and the right child. So if a node can have only two children, how do we get the larger trees?\n\nWell, each child node in this example can also be a parent that has two child nodes of its own. The left child and right child are known as sibling nodes.\n\nSo each tree can be composed of many subtrees. Nesting trees in this manner is the reason why we can write search and sort algorithms recursively, which we’ll take a look at later.\n\nThis tree has the root-node of 2 and has two child-nodes 7 and 5. Node 5 has one child, 9, which has a child of its own, node 4. Node 7 has two children of its own, 2 and 6. You get the point.\n\n### Coding Binary Search Trees in JS\n\nBinary search trees are powerful constructs. We can organize and store data in them and use powerful methods to traverse them.\n\nBinary search trees are a type of binary tree that are organized in the following way:\nThe value of each left child node must be less than its parent node.\nThe value of each right child node must be greater than its parent node.\n\nImplementing data structures in actual code will help you solidify your understanding of the structures. I highly encourage you to do so if you have even the slightest bit of curiosity.\n\nLet’s build out our constructor function for the binary search tree first.\n\nVoila! Easy enough. We have our binary search tree constructor function. We pass in a value for the tree and set its left child and right child to null, as we know that a binary tree can only have a maximum of two child nodes. This was the easy part. Let’s add more functions to the prototype.\n\nThe next function we will build out is insert. This will allow us to add nodes to our binary search tree. We will use recursion to do so. As I mentioned earlier, binary trees are recursive in nature, and we can do very powerful things with recursion in just a few lines of code.\n\nIn this function, the first thing we check for is whether the value we are inserting is less than or equal to our root node. If it is less, then we will check to see if our root node has a left child. If it does not, we will hit our base case and create a new binary search subtree in that location. If there is a left child node, we will hit our recursive case and call insert again, but this time on the left child node in question. If you are familiar with recursion this will make sense. If not, I suggest you take a look at my blog on recursion here.\n\nInsertion in binary search trees has a logarithmic run-time of O(log n).\n\nIf the value is not less than or equal to our root node, we traverse the right side of the tree. 
Technically, I could’ve used an else statement here, but for specificity, I wanted to be explicit.\n\nOk, next we will build out our contains function.\n\nOur contains function will search through our binary search tree and return true if it finds the value passed in, and false if it does not. Notice that this is also a recursive function. We have our base case, which will check to see if the current node value is equivalent to our search value. Then we check to see if the search value is less than or equal to the root node’s value. Similar to our insert method, we will recursively traverse the tree in search of the value. Binary searches like this also have logarithmic run-times and have an O(log n) time complexity. Great! Now we can insert nodes into our tree and search it. What’s next?\n\n### Traversing the Tree\n\nWhat if we wanted to touch on every node in our trees? If we are able to touch every node, we would be able to perform some function on the nodes, such as printing each node out. We will implement two ways to achieve this: using depth-first search and breadth-first search.\n\n### Depth-First Search\n\nIn depth-first search, you start at the root node and traverse a branch all the way down to the bottom-most node or leaf node.\n\nHere is one implementation of depth-first search.\n\nOur depth-first search function has two parameters: iteratorFunc and order.\n\nAs you can see, we are utilizing recursion once again. Order types aside, if our root node has a left child node, it will call the function. The base case is implied: if there are no more left child nodes, it will no longer call the function. Depending on which order we pass in, it will call our iterator function at different points in time.\n\nThere are three different types of depth-first search methods:\n\n1. Pre-order — hits the current node data before traversing both left and right subtrees\n2. In-order — hits the current node data after traversing the left subtree but before the right subtree\n3. Post-order — hits the current node data after traversing both left and right subtrees\n\nOur iterator function is a function we can pass in. This function can take action on each node as we traverse the tree. We can use something as simple as this.\n\n### Breadth-First Search\n\nIn breadth-first search, we traverse each level of the tree systematically before moving on to the next level of the tree. Here is an example of how that looks.\n\nOh look, a while loop! To implement breadth-first search, we first set an array, “queue”, to have one element, ‘this’. ‘This’ is basically the root node. Our while loop will run as long as there is something in the queue array. Once in our while loop, we will shift the first item out of the queue, and set it to treeNode. We will then run our iterator function on this node. Once this is complete, we will check for left and right subtrees and push them into the queue. This will sweep the tree level by level and hit each node accordingly, until there are no nodes left. Brilliant!\n\n### DFS vs BFS\n\nThere are many different use cases for DFS and BFS. Let’s say we built a family tree and stored it in a binary search tree structure. Each family member node also had an attribute that held data on whether or not the member is deceased. We want to find all the members that are still currently alive. 
In this case, depth-first search would be a good solution since we want to traverse to the deepest possible node or leaf node of each branch and the data we want is most likely deeper in the tree.\n\nWhat if we wanted the ancestors instead? If this were the case, then we want to collect all the nodes that are up near the top or the root node. It would be better for us to sweep the tree level by level via breadth-first search here.\n\n### Min/Max\n\nThe last couple of functions to wrap up our binary search tree would be to find the minimum value and the maximum value. Since binary search trees must adhere to the rule of left child node being less than parent node and right child node being greater than the parent node, we can easily conclude that the smallest value in the binary search tree must be the bottom-most-left node of the tree. The maximum, then, must be on the bottom-most-right. Here is the code to implement those two functions.\n\nAnd with that, hopefully I was able to help you better understand how binary search trees work, and how we can implement them in JavaScript. As you can see, these data structures, with the average case for accessing, inserting, searching, and deleting being O(log n) across the board, along with their recursive nature, make them a powerful tool that every programmer should have in his or her toolkit.\n\n### END",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Join Hacker Noon"
]
| [
null,
"https://hackernoon.com/hn-icon.png",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://hackernoon.com/emojis/heart.png",
null,
"https://hackernoon.com/emojis/heart.png",
null,
"https://hackernoon.com/emojis/heart.png",
null,
"https://hackernoon.com/emojis/heart.png",
null,
"https://hackernoon.com/emojis/light.png",
null,
"https://hackernoon.com/emojis/light.png",
null,
"https://hackernoon.com/emojis/light.png",
null,
"https://hackernoon.com/emojis/light.png",
null,
"https://hackernoon.com/emojis/boat.png",
null,
"https://hackernoon.com/emojis/boat.png",
null,
"https://hackernoon.com/emojis/boat.png",
null,
"https://hackernoon.com/emojis/boat.png",
null,
"https://hackernoon.com/emojis/money.png",
null,
"https://hackernoon.com/emojis/money.png",
null,
"https://hackernoon.com/emojis/money.png",
null,
"https://hackernoon.com/emojis/money.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9365986,"math_prob":0.76371646,"size":7606,"snap":"2021-31-2021-39","text_gpt3_token_len":1632,"char_repetition_ratio":0.1495659,"word_repetition_ratio":0.02062589,"special_character_ratio":0.2132527,"punctuation_ratio":0.09631019,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96503776,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T10:18:07Z\",\"WARC-Record-ID\":\"<urn:uuid:a7d6ee0a-0895-4c64-b455-4c10b9e06184>\",\"Content-Length\":\"239875\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2442c67-2c58-41a9-a601-1640cd7cc78f>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b4a7f02-5f36-4203-a836-5e66d2ce9c46>\",\"WARC-IP-Address\":\"172.67.138.135\",\"WARC-Target-URI\":\"https://hackernoon.com/data-structures-in-javascript-pt-1-binary-search-trees-2c231cf2c8e1\",\"WARC-Payload-Digest\":\"sha1:YNSIP2NVPMLJMKC56LK4Q44RJA7NOIQD\",\"WARC-Block-Digest\":\"sha1:XW2UOJYYS6U4UPM6BZ5T6TOU2SKTNZMX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056392.79_warc_CC-MAIN-20210918093220-20210918123220-00455.warc.gz\"}"} |
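The Hacker Noon article above describes its binary search tree in prose (constructor, insert, contains, depth-first and breadth-first traversal, min/max), but the embedded code snippets did not survive extraction. The sketch below is a reconstruction of what such an implementation could look like, written in TypeScript for this document. The article builds its tree with a constructor function and prototype methods; a class is used here only for brevity, and the class name, the helper names beyond those quoted in the prose (insert, contains, iteratorFunc, order, queue, treeNode), and the sample values are chosen for this illustration rather than taken from the author's repository at https://github.com/mega0319/data-structures.

```typescript
// Minimal binary search tree following the article's outline.
class BinarySearchTree {
  value: number;
  left: BinarySearchTree | null = null;
  right: BinarySearchTree | null = null;

  constructor(value: number) {
    this.value = value;
  }

  // Recursive insert: values less than or equal go left, larger values go right.
  insert(value: number): void {
    if (value <= this.value) {
      if (this.left === null) {
        this.left = new BinarySearchTree(value); // base case: empty slot found
      } else {
        this.left.insert(value);                 // recursive case
      }
    } else {
      if (this.right === null) {
        this.right = new BinarySearchTree(value);
      } else {
        this.right.insert(value);
      }
    }
  }

  // Binary search: O(log n) on a reasonably balanced tree.
  contains(value: number): boolean {
    if (value === this.value) return true;
    if (value < this.value) {
      return this.left !== null && this.left.contains(value);
    }
    return this.right !== null && this.right.contains(value);
  }

  // Depth-first traversal in pre-, in-, or post-order.
  depthFirstTraversal(
    iteratorFunc: (v: number) => void,
    order: "pre" | "in" | "post"
  ): void {
    if (order === "pre") iteratorFunc(this.value);
    if (this.left) this.left.depthFirstTraversal(iteratorFunc, order);
    if (order === "in") iteratorFunc(this.value);
    if (this.right) this.right.depthFirstTraversal(iteratorFunc, order);
    if (order === "post") iteratorFunc(this.value);
  }

  // Breadth-first traversal with a queue, sweeping the tree level by level.
  breadthFirstTraversal(iteratorFunc: (v: number) => void): void {
    const queue: BinarySearchTree[] = [this];
    while (queue.length > 0) {
      const treeNode = queue.shift()!;
      iteratorFunc(treeNode.value);
      if (treeNode.left) queue.push(treeNode.left);
      if (treeNode.right) queue.push(treeNode.right);
    }
  }

  // Smallest value: keep walking left.
  getMinVal(): number {
    return this.left ? this.left.getMinVal() : this.value;
  }

  // Largest value: keep walking right.
  getMaxVal(): number {
    return this.right ? this.right.getMaxVal() : this.value;
  }
}

// Example usage: build a small tree and print it in order (sorted).
const bst = new BinarySearchTree(50);
[30, 70, 20, 45, 60, 100].forEach((v) => bst.insert(v));
bst.depthFirstTraversal((v) => console.log(v), "in"); // 20 30 45 50 60 70 100
console.log(bst.contains(60), bst.getMinVal(), bst.getMaxVal()); // true 20 100
```

The in-order traversal printing the values in sorted order is a quick sanity check that insert respects the left-less-than-parent-less-than-right rule described in the article.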
https://www.colorhexa.com/5ff3a3 | [
"# #5ff3a3 Color Information\n\nIn a RGB color space, hex #5ff3a3 is composed of 37.3% red, 95.3% green and 63.9% blue. Whereas in a CMYK color space, it is composed of 60.9% cyan, 0% magenta, 32.9% yellow and 4.7% black. It has a hue angle of 147.6 degrees, a saturation of 86% and a lightness of 66.3%. #5ff3a3 color hex could be obtained by blending #beffff with #00e747. Closest websafe color is: #66ff99.\n\n• R 37\n• G 95\n• B 64\nRGB color chart\n• C 61\n• M 0\n• Y 33\n• K 5\nCMYK color chart\n\n#5ff3a3 color description : Soft cyan - lime green.\n\n# #5ff3a3 Color Conversion\n\nThe hexadecimal color #5ff3a3 has RGB values of R:95, G:243, B:163 and CMYK values of C:0.61, M:0, Y:0.33, K:0.05. Its decimal value is 6288291.\n\nHex triplet RGB Decimal 5ff3a3 `#5ff3a3` 95, 243, 163 `rgb(95,243,163)` 37.3, 95.3, 63.9 `rgb(37.3%,95.3%,63.9%)` 61, 0, 33, 5 147.6°, 86, 66.3 `hsl(147.6,86%,66.3%)` 147.6°, 60.9, 95.3 66ff99 `#66ff99`\nCIE-LAB 86.591, -57.243, 27.121 43.378, 69.175, 45.714 0.274, 0.437, 69.175 86.591, 63.342, 154.649 86.591, -62.363, 48.12 83.171, -52.454, 25.632 01011111, 11110011, 10100011\n\n# Color Schemes with #5ff3a3\n\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #f35faf\n``#f35faf` `rgb(243,95,175)``\nComplementary Color\n• #65f35f\n``#65f35f` `rgb(101,243,95)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #5ff3ed\n``#5ff3ed` `rgb(95,243,237)``\nAnalogous Color\n• #f35f65\n``#f35f65` `rgb(243,95,101)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #ed5ff3\n``#ed5ff3` `rgb(237,95,243)``\nSplit Complementary Color\n• #f3a35f\n``#f3a35f` `rgb(243,163,95)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #a35ff3\n``#a35ff3` `rgb(163,95,243)``\n• #aff35f\n``#aff35f` `rgb(175,243,95)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #a35ff3\n``#a35ff3` `rgb(163,95,243)``\n• #f35faf\n``#f35faf` `rgb(243,95,175)``\n• #18ee7a\n``#18ee7a` `rgb(24,238,122)``\n• #30ef88\n``#30ef88` `rgb(48,239,136)``\n• #47f195\n``#47f195` `rgb(71,241,149)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #77f5b1\n``#77f5b1` `rgb(119,245,177)``\n• #8ef7be\n``#8ef7be` `rgb(142,247,190)``\n• #a6f8cc\n``#a6f8cc` `rgb(166,248,204)``\nMonochromatic Color\n\n# Alternatives to #5ff3a3\n\nBelow, you can see some colors close to #5ff3a3. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #5ff37e\n``#5ff37e` `rgb(95,243,126)``\n• #5ff38a\n``#5ff38a` `rgb(95,243,138)``\n• #5ff397\n``#5ff397` `rgb(95,243,151)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #5ff3af\n``#5ff3af` `rgb(95,243,175)``\n• #5ff3bc\n``#5ff3bc` `rgb(95,243,188)``\n• #5ff3c8\n``#5ff3c8` `rgb(95,243,200)``\nSimilar Colors\n\n# #5ff3a3 Preview\n\nThis text has a font color of #5ff3a3.\n\n``<span style=\"color:#5ff3a3;\">Text here</span>``\n#5ff3a3 background color\n\nThis paragraph has a background color of #5ff3a3.\n\n``<p style=\"background-color:#5ff3a3;\">Content here</p>``\n#5ff3a3 border color\n\nThis element has a border color of #5ff3a3.\n\n``<div style=\"border:1px solid #5ff3a3;\">Content here</div>``\nCSS codes\n``.text {color:#5ff3a3;}``\n``.background {background-color:#5ff3a3;}``\n``.border {border:1px solid #5ff3a3;}``\n\n# Shades and Tints of #5ff3a3\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000402 is the darkest color, while #f1fef7 is the lightest one.\n\n• #000402\n``#000402` `rgb(0,4,2)``\n• #02160b\n``#02160b` `rgb(2,22,11)``\n• #032914\n``#032914` `rgb(3,41,20)``\n• #043b1d\n``#043b1d` `rgb(4,59,29)``\n• #064d27\n``#064d27` `rgb(6,77,39)``\n• #075f30\n``#075f30` `rgb(7,95,48)``\n• #097239\n``#097239` `rgb(9,114,57)``\n• #0a8442\n``#0a8442` `rgb(10,132,66)``\n• #0b964b\n``#0b964b` `rgb(11,150,75)``\n• #0da854\n``#0da854` `rgb(13,168,84)``\n• #0ebb5d\n``#0ebb5d` `rgb(14,187,93)``\n• #0fcd66\n``#0fcd66` `rgb(15,205,102)``\n• #11df70\n``#11df70` `rgb(17,223,112)``\n• #16ee79\n``#16ee79` `rgb(22,238,121)``\n• #28ef84\n``#28ef84` `rgb(40,239,132)``\n• #3bf08e\n``#3bf08e` `rgb(59,240,142)``\n• #4df299\n``#4df299` `rgb(77,242,153)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n``#71f4ad` `rgb(113,244,173)``\n• #83f6b8\n``#83f6b8` `rgb(131,246,184)``\n• #96f7c2\n``#96f7c2` `rgb(150,247,194)``\n• #a8f8cd\n``#a8f8cd` `rgb(168,248,205)``\n``#bafad7` `rgb(186,250,215)``\n• #ccfbe2\n``#ccfbe2` `rgb(204,251,226)``\n• #dffdec\n``#dffdec` `rgb(223,253,236)``\n• #f1fef7\n``#f1fef7` `rgb(241,254,247)``\nTint Color Variation\n\n# Tones of #5ff3a3\n\nA tone is produced by adding gray to any pure hue. In this case, #a8aaa9 is the less saturated color, while #58faa2 is the most saturated one.\n\n• #a8aaa9\n``#a8aaa9` `rgb(168,170,169)``\n• #a1b1a8\n``#a1b1a8` `rgb(161,177,168)``\n• #9bb7a8\n``#9bb7a8` `rgb(155,183,168)``\n• #94bea7\n``#94bea7` `rgb(148,190,167)``\n• #8dc5a7\n``#8dc5a7` `rgb(141,197,167)``\n• #87cba6\n``#87cba6` `rgb(135,203,166)``\n• #80d2a6\n``#80d2a6` `rgb(128,210,166)``\n• #79d9a5\n``#79d9a5` `rgb(121,217,165)``\n• #73dfa5\n``#73dfa5` `rgb(115,223,165)``\n• #6ce6a4\n``#6ce6a4` `rgb(108,230,164)``\n• #66eca4\n``#66eca4` `rgb(102,236,164)``\n• #5ff3a3\n``#5ff3a3` `rgb(95,243,163)``\n• #58faa2\n``#58faa2` `rgb(88,250,162)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #5ff3a3 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.56051826,"math_prob":0.86236775,"size":3717,"snap":"2020-34-2020-40","text_gpt3_token_len":1698,"char_repetition_ratio":0.12361971,"word_repetition_ratio":0.011049724,"special_character_ratio":0.54183483,"punctuation_ratio":0.23581758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98254883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T11:28:23Z\",\"WARC-Record-ID\":\"<urn:uuid:5cf6a220-894c-4786-9132-b2f3e0091fbc>\",\"Content-Length\":\"36325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b102730-b829-4561-b1d2-94edf4148e44>\",\"WARC-Concurrent-To\":\"<urn:uuid:2467d47b-d941-4f19-a185-849ea71835b0>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/5ff3a3\",\"WARC-Payload-Digest\":\"sha1:WB6YTZHXWYHDE4ZGTAHHTUC5DDA26J76\",\"WARC-Block-Digest\":\"sha1:ARRZPGOZCNX25THQWUEZVVEXM3I4AS34\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735939.26_warc_CC-MAIN-20200805094821-20200805124821-00330.warc.gz\"}"} |
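The ColorHexa entry above reports the RGB, CMYK and HSL equivalents of #5ff3a3 (R 95, G 243, B 163). The first of those conversions, hex to RGB, is just base-16 parsing of the three channel pairs; the small TypeScript sketch below illustrates that step. The function name and error handling are choices made for this illustration and say nothing about how the site itself computes its tables.

```typescript
// Convert a 6-digit hex colour such as "#5ff3a3" to its RGB components.
function hexToRgb(hex: string): { r: number; g: number; b: number } {
  const clean = hex.replace(/^#/, "");
  if (!/^[0-9a-fA-F]{6}$/.test(clean)) {
    throw new Error(`expected a 6-digit hex colour, got "${hex}"`);
  }
  return {
    r: parseInt(clean.slice(0, 2), 16), // "5f" -> 95
    g: parseInt(clean.slice(2, 4), 16), // "f3" -> 243
    b: parseInt(clean.slice(4, 6), 16), // "a3" -> 163
  };
}

console.log(hexToRgb("#5ff3a3")); // { r: 95, g: 243, b: 163 }
```

The CMYK and HSL figures in the entry follow from further arithmetic on these RGB values after normalising them to the 0 to 1 range.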
https://www.ehow.co.uk/how_7427871_calculate-outside-diameter.html | [
"# How to calculate outside diameter\n\nAn outside diameter is the dimension that describes the size of a hollow cylindrical object. Pipes are a common example of such cylindrical objects. The outside diameter of a pipe should always be double-checked before installation to make sure of a proper fit. Fortunately, this parameter is easy to compute if you know the inner diameter and the wall thickness of the pipe. Alternatively, calculate the outside diameter using the outer circumference of the pipe and the mathematical constant \"pi.\"\n\nDivide the inner diameter of a pipe by two to find the inner radius. For example, if the inner diameter is 3 inches, then the radius = 3 / 2 = 1.5 inches.\n\nAdd up the inner radius and the pipe wall thickness to calculate the outside radius. For instance, if the wall thickness is 1/2 inch, then the outside radius is 1.5 + 0.5 = 2 inches.\n\nMultiply the outside radius by two to calculate the outside diameter. In this example, the outside diameter is 2 inches x 2 = 4 inches.\n\nDivide the outer circumference of the pipe by the constant pi (3.142) to calculate the outside diameter as an alternate method. For example, if the circumference is 12.5 inches, then the outside diameter = 12.5 / 3.142 = 3.978 inches."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8150047,"math_prob":0.99837726,"size":1444,"snap":"2019-51-2020-05","text_gpt3_token_len":331,"char_repetition_ratio":0.18333334,"word_repetition_ratio":0.0,"special_character_ratio":0.23822714,"punctuation_ratio":0.11660777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992859,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T04:48:41Z\",\"WARC-Record-ID\":\"<urn:uuid:213ab343-3e7d-4d11-a4a6-41859c05a109>\",\"Content-Length\":\"177528\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45c1c650-cae2-4acc-84bd-9510aa3dfe5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb4eb92e-1eb4-4d6f-8e19-9fb72f6e30d3>\",\"WARC-IP-Address\":\"104.76.198.192\",\"WARC-Target-URI\":\"https://www.ehow.co.uk/how_7427871_calculate-outside-diameter.html\",\"WARC-Payload-Digest\":\"sha1:CPTL4HXDIVTKETGO6ZQWZTH72DKJHDWI\",\"WARC-Block-Digest\":\"sha1:3NES2HRSVIDRCMA7W5TRDPG6KYEAKOFF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540536855.78_warc_CC-MAIN-20191212023648-20191212051648-00444.warc.gz\"}"} |
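Both routes in the eHow article above reduce to one-line formulas: outside diameter equals inner diameter plus twice the wall thickness, or outside diameter equals outer circumference divided by pi. A minimal TypeScript sketch using the article's own numbers (3-inch inner diameter, 1/2-inch wall, 12.5-inch circumference) follows; the function names are invented for this example.

```typescript
// Outside diameter from inner diameter and wall thickness.
function outsideDiameterFromWall(innerDiameter: number, wallThickness: number): number {
  // Equivalent to 2 * (innerRadius + wallThickness).
  return innerDiameter + 2 * wallThickness;
}

// Outside diameter from a measured outer circumference.
function outsideDiameterFromCircumference(circumference: number): number {
  return circumference / Math.PI;
}

console.log(outsideDiameterFromWall(3, 0.5));        // 4 (inches)
console.log(outsideDiameterFromCircumference(12.5)); // ≈ 3.979 (inches)
```

The second result comes out as roughly 3.979 with the full value of pi; the article's 3.978 follows from rounding pi to 3.142.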
https://www.physicsforums.com/threads/statistics-width-of-a-confidence-interval.309912/ | [
"# Statistics: Width of a Confidence Interval\n\n• kingwinner\nIn summary, the homework statement is that the width of a confidence interval changes depending on the size of the sample. The attempt for part b shows that Sp is just a weighted average of Sx and Sy and therefore shouldn't change much. However, as the sample sizes increase, Sp might change.\n\n## Homework Statement\n\nhttp://www.geocities.com/asdfasdf23135/stat15.JPG\n\nI am OK with part a, but I am having some trouble with part b.\n\n## Homework Equations\n\nWidth of a Confidence Interval\n\n## The Attempt at a Solution\n\nAttempt for part b:\nhttp://www.geocities.com/asdfasdf23135/stat16.JPG\nnote: P(T>t_(n1+n2-2),alpha/2)=alpha/2 where T~t distribution with n1+n2-2 degrees of freedom.\n\nNow, as n1 increases and n2 increases,\n(i) t_(n1+n2-2),alpha/2 gets smaller\n(ii) denominator gets larger\n(iii) the ∑ terms get larger because the upper indices of summation are n1 and n2, respectively\n\n(i) and (ii) push towards a narrower confidence interval, but (iii) pushes towards a wider confidence interval. How can we determine the ultimate result?\n\nAny help is greatly appreciated!\n\nI think the intuitive answer is that the CI will be \"narrower\", but how can I prove this more rigorously? My method above doesn't seem to work...\n\nOn your handwritten attempt for part b, use the second to last line instead of the last line. It shows that Sp is just a weighted average of Sx and Sy and therefore should not change much, since the true variances are assumed equal. For your problem, Sx=1.8 and Sy=2.6, so you should assume they are the same and just compute 2t*sqrt{Sp*(etc)} and see that the interval is narrower. Technically Sp might be slightly more or slightly less (and if you compute Sp with Sx=1.8 and Sy=2.6 and then again with interchanged 1.8 and 2.6 you should get an example of both possibilities). The change in t has much more of an effect than any slight change in Sp.\n\n\"Sp should not change much\"\n\nWhy?? As the sample sizes increase, wouldn't Sx and Sy change?\n\nThanks!\n\nAs the sample sizes increase, wouldn't Sx and Sy change?\n\nEven if the sample size stays the same, Sx and Sy probably would change with every experiment.\n\nBut you are using them to estimate the true sigma, which by assumption is the same for both Duracell and Energizer.\n\nAnd Sp is of the form (a*Sx + b*Sy)/(a+b), in other words just a weighted average of these two.\n\nQuestion (b) really doesn't seem to be posed as a deep question. In fact, the wording of question (b) suggests that you are supposed to assume that the sample means and sample standard deviations are the same as in (a), but the sample sizes are now different."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8170947,"math_prob":0.9100001,"size":781,"snap":"2023-40-2023-50","text_gpt3_token_len":221,"char_repetition_ratio":0.091377094,"word_repetition_ratio":0.0,"special_character_ratio":0.2765685,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98812777,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T13:44:20Z\",\"WARC-Record-ID\":\"<urn:uuid:ac98afee-b827-459f-8b62-9cab46ad5fe0>\",\"Content-Length\":\"70707\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c58d82bf-0757-41df-89ed-0c2fb86a3b9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:72b1ac61-8284-4eae-a2df-09a745e12089>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/statistics-width-of-a-confidence-interval.309912/\",\"WARC-Payload-Digest\":\"sha1:A4SQG6PNCTPMK7QLBOFUAOTBWPKYASIZ\",\"WARC-Block-Digest\":\"sha1:DM66W5X642MVCVQANFTQL4SOMT73Q7XS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100599.20_warc_CC-MAIN-20231206130723-20231206160723-00642.warc.gz\"}"} |
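The Physics Forums thread above turns on two standard quantities that are easier to discuss when written out: the pooled variance and the width of the two-sample pooled-t confidence interval. The forms below follow the usual textbook convention, with S_p denoting the pooled standard deviation; the original problem statement sits behind dead Geocities image links, so the exact notation of that homework is not recoverable and these are offered only as the standard expressions the posters appear to be using.

```latex
S_p^{2} = \frac{(n_1 - 1)\,S_x^{2} + (n_2 - 1)\,S_y^{2}}{n_1 + n_2 - 2},
\qquad
\text{width} = 2\, t_{\,\alpha/2,\; n_1 + n_2 - 2}\; S_p \,\sqrt{\frac{1}{n_1} + \frac{1}{n_2}} .
```

Written this way, the replies' point is visible at a glance: S_p^2 is a weighted average of S_x^2 and S_y^2 and so stays near the common variance as the samples grow, while the t quantile and the square-root factor both shrink, so the interval narrows.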